CN112232175B - Method and device for identifying state of operation object


Info

Publication number
CN112232175B
CN112232175B (application CN202011090028.0A)
Authority
CN
China
Prior art keywords
face
image
face region
region image
operation object
Prior art date
Legal status
Active
Application number
CN202011090028.0A
Other languages
Chinese (zh)
Other versions
CN112232175A (en)
Inventor
吴翔
余程鹏
Current Assignee
Nanjing Leading Technology Co Ltd
Original Assignee
Nanjing Leading Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Leading Technology Co Ltd
Priority to CN202011090028.0A
Publication of CN112232175A
Application granted
Publication of CN112232175B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness

Abstract

The application relates to the technical field of state recognition, and in particular to a method and a device for recognizing the state of an operation object. Face region images containing the face of the operation object are obtained from a video to be recognized. For any one of the face region images, a connecting line between two specific pixel points in the face region image is determined, and the deflection angle of the face region image is obtained by calculating the included angle between the connecting line and a preset standard horizontal connecting line. The face region image is angle-corrected based on the deflection angle, and the facial state category of the operation object in the angle-corrected face region image is identified. Whether the operation object in the video to be recognized is in a fatigue state is then determined according to the facial state category of the operation object in each face region image. Because the state of the operation object is recognized from corrected face region images, the low recognition precision caused by an overly large face angle can be avoided.

Description

Method and device for identifying state of operation object
Technical Field
The present application relates to the field of state identification technologies, and in particular, to a method and an apparatus for identifying a state of an operation object.
Background
At present, driver fatigue during driving is one of the major causes of serious traffic accidents and a severe threat to traffic safety, so recognizing the fatigue state of the driver has become a problem that urgently needs to be solved.
To address this, the prior art usually captures images of the driver and recognizes the driver's fatigue state from the captured images. However, a camera installed directly in front of the driver blocks the driver's line of sight, and a camera installed on the steering wheel in front of the driver is blocked by the driver, so neither position acquires good images of the driver. The camera is therefore often installed on the left A-pillar of the vehicle. Because of this mounting position, the acquired face images of the driver are captured at a large angle; face key point detection on such images has low precision and the key points are difficult to locate accurately, which reduces the accuracy of driver fatigue state recognition.
Disclosure of Invention
The embodiments of the application provide a method and a device for identifying the state of an operation object, so as to improve the accuracy of driver fatigue state recognition.
The embodiment of the application provides the following specific technical scheme:
an operation object state identification method comprises the following steps:
respectively carrying out face detection on each image to be recognized contained in a video to be recognized collected by a target camera to obtain each face region image containing the face of an operation object, wherein the target camera represents a camera arranged on a column between a front windshield and a front door of a vehicle;
respectively executing the following steps aiming at any one face area image in all the face area images to obtain the state class of the operation object in each face area image: determining a connecting line between two specific pixel points in the face region image, and calculating an included angle between the connecting line and a preset standard horizontal connecting line to obtain a deflection angle of the face region image; based on the deflection angle, carrying out angle correction on the face region image to obtain a face region image after angle correction; identifying the face state category of an operation object in the face region image after angle correction;
and determining whether the operation object in the video to be identified is in a fatigue state or not according to the face state category of the operation object in each face region image.
Optionally, each face region image including the face of the operation object is obtained according to the following manner:
acquiring a video to be identified, which is acquired by a target camera;
respectively aiming at any image to be recognized in the images to be recognized contained in the video to be recognized, carrying out face detection on the image to be recognized based on a trained face detection model and taking the image to be recognized as an input parameter, determining the position information of the face of the operation object on the image to be recognized, and obtaining a face region image containing the face of the operation object based on the position information, wherein the face detection model is obtained by carrying out iterative training according to a face region image sample set, and the face region image sample set contains a plurality of face region image samples.
Optionally, if the two specific pixel points are the left-eye center point and the right-eye center point, determining a connection line between the two specific pixel points in the face region image, and obtaining a deflection angle of the face region image by calculating an included angle between the connection line and a preset standard horizontal connection line, specifically including:
determining a binocular connecting line between a left eye central point and a right eye central point in the face region image;
and calculating an included angle between the binocular connecting line and a preset horizontal binocular connecting line, and taking the included angle as the deflection angle of the face region image.
Optionally, based on the deflection angle, performing angle correction on the face region image to obtain an angle-corrected face region image, which specifically includes:
with the deflection angle as a parameter of a preset affine transformation matrix, respectively multiplying coordinates corresponding to each pixel point in the face region image by the affine transformation matrix to obtain corrected coordinates of each pixel point;
and obtaining the face region image after angle correction based on the corrected coordinates of the pixel points.
Optionally, the identifying the face state category of the operation object in the face region image after the angle correction specifically includes:
based on a trained face key point detection model, taking a face region image after angle correction as an input parameter, and carrying out face key point detection on the face region image after angle correction to obtain each face key point image of the face region image after angle correction;
and identifying the facial state category of the operation object according to a specific human face key point image in the human face key point images.
Optionally, if the specific face key point images are left-eye key point images and right-eye key point images, identifying the face state category of the operation object according to the specific face key point images in the face key point images, specifically including:
determining a left eye state category of the left eye key point image and a right eye state category of the right eye key point image by taking the left eye key point image and the right eye key point image as input parameters based on a trained first recognition model;
and determining the face state category of the operation object according to the left eye state category and the right eye state category.
Optionally, if the specific face key point image is a mouth key point image, identifying the facial state category of the operation object according to the specific face key point image in each face key point image, specifically including:
determining a mouth state category of the mouth key point image by taking the mouth key point image as an input parameter based on the trained second recognition model;
and determining the facial state category of the operation object according to the mouth state category.
Optionally, if the facial state categories are a fatigue state category and a non-fatigue state category, determining whether the operation object in the video to be identified is in a fatigue state according to the facial state category of the operation object in each face region image specifically includes:
and if the facial state category of the operation object in N consecutive face region images is determined to be the fatigue state category, determining that the operation object in the video to be identified is in the fatigue state, wherein N is a positive integer greater than or equal to a preset threshold value.
An operation object state recognition apparatus comprising:
the system comprises a detection module, a processing module and a display module, wherein the detection module is used for respectively carrying out face detection on each image to be recognized contained in a video to be recognized collected by a target camera to obtain each face area image containing the face of an operation object, and the target camera represents a camera arranged on a column between a front windshield and a front door of a vehicle;
a processing module, configured to perform the following steps for any one of the face region images, respectively, to obtain a state category of an operation object in each of the face region images: determining a connecting line between two specific pixel points in the face region image, and calculating an included angle between the connecting line and a preset standard horizontal connecting line to obtain a deflection angle of the face region image; based on the deflection angle, carrying out angle correction on the face region image to obtain a face region image after angle correction; identifying the face state category of an operation object in the face region image after angle correction;
and the determining module is used for determining whether the operation object in the video to be identified is in a fatigue state according to the face state category of the operation object in each face area image.
Optionally, when obtaining each face region image including a face of the operation object, the detection module is specifically configured to:
acquiring a video to be identified, which is acquired by a target camera;
respectively carrying out face detection on any image to be recognized in the images to be recognized contained in the video to be recognized based on a trained face detection model and taking the image to be recognized as an input parameter, determining the position information of the face of the operation object on the image to be recognized, and obtaining a face region image containing the face of the operation object based on the position information, wherein the face detection model is obtained by carrying out iterative training according to a face region image sample set, and the face region image sample set contains a plurality of face region image samples.
Optionally, if the two specific pixel points are the left-eye center point and the right-eye center point respectively, determining a connection line between the two specific pixel points in the face region image, and obtaining a deflection angle of the face region image by calculating an included angle between the connection line and a preset standard horizontal connection line, where the processing module is specifically configured to:
determining a binocular connecting line between a left eye central point and a right eye central point in the face region image;
and calculating an included angle between the binocular connecting line and a preset horizontal binocular connecting line, and taking the included angle as the deflection angle of the face region image.
Optionally, based on the deflection angle, performing angle correction on the face region image, and when obtaining the face region image after angle correction, the processing module is specifically configured to:
with the deflection angle as a parameter of a preset affine transformation matrix, respectively multiplying coordinates corresponding to each pixel point in the face region image by the affine transformation matrix to obtain corrected coordinates of each pixel point;
and obtaining the face region image after angle correction based on the corrected coordinates of the pixel points.
Optionally, when the face state category of the operation object in the face region image after angle correction is identified, the processing module is specifically configured to:
based on a trained face key point detection model, taking a face region image after angle correction as an input parameter, and carrying out face key point detection on the face region image after angle correction to obtain each face key point image of the face region image after angle correction;
and identifying the facial state category of the operation object according to the specific human face key point image in the human face key point images.
Optionally, if the specific face key point images are the left-eye key point image and the right-eye key point image, when the face state category of the operation object is identified according to the specific face key point image in each of the face key point images, the processing module is specifically configured to:
determining a left eye state category of the left eye key point image and a right eye state category of the right eye key point image by taking the left eye key point image and the right eye key point image as input parameters based on a trained first recognition model;
and determining the face state category of the operation object according to the left eye state category and the right eye state category.
Optionally, if the specific face key point image is a mouth key point image, the facial state category of the operation object is identified according to the specific face key point image in each face key point image, and the processing module is specifically configured to:
determining a mouth state category of the mouth key point image by taking the mouth key point image as an input parameter based on the trained second recognition model;
and determining the facial state category of the operation object according to the mouth state category.
Optionally, if the facial state categories are a fatigue state category and a non-fatigue state category, the determining module is specifically configured to:
and if the facial state category of the operation object in N consecutive face region images is determined to be the fatigue state category, determining that the operation object in the video to be identified is in the fatigue state, wherein N is a positive integer greater than or equal to a preset threshold value.
An electronic device comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the steps of the operating object state identification method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method of identifying a state of an operating object.
In the embodiments of the application, face detection is performed on each image to be recognized contained in the video to be recognized collected by the target camera, to obtain face region images containing the face of the operation object. For any one of the face region images, the following steps are performed to obtain the state category of the operation object in each face region image: a connecting line between two specific pixel points in the face region image is determined; the included angle between the connecting line and a preset standard horizontal connecting line is calculated to obtain the deflection angle of the face region image; the face region image is angle-corrected based on the deflection angle; and the facial state category of the operation object in the angle-corrected face region image is identified. Whether the operation object in the video to be recognized is in a fatigue state is then determined according to the facial state category of the operation object in each face region image. In this way, before a face region image containing the face of the operation object is recognized, it is angle-corrected according to two specific pixel points in the image, and the facial state of the operation object is recognized from the corrected image, which improves the precision of fatigue state recognition for face region images captured at a large angle.
Drawings
Fig. 1 is a flowchart of an operation object state identification method in an embodiment of the present application;
FIG. 2 is a schematic view of a point rotation in an embodiment of the present application;
FIG. 3 is another flowchart of a method for identifying the state of an operation object according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an operation object state identification device in the embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, driver fatigue during driving is one of the major causes of serious traffic accidents and a severe threat to traffic safety, so recognizing the fatigue state of the driver has become a problem that urgently needs to be solved.
With the development of mobile internet technology, face key point detection has been popularized in fields such as driver state monitoring, smart homes and smartphone applications. The state of the driver can therefore be identified by detecting face key points in an image containing the driver's face, usually by providing a camera on the A-pillar of the vehicle, capturing the image with that camera, and running face key point detection on it. However, because the camera is mounted on the A-pillar of the vehicle, the acquired images of the driver often show the face at a large angle; the face key point detection algorithms in the prior art have low accuracy in this large-angle case and have difficulty locating the key points precisely, which reduces the accuracy of driver fatigue state recognition.
To solve the above problem, an embodiment of the present application provides an operation object state identification method. Face detection is performed on each image to be recognized contained in the video to be recognized collected by the target camera, to obtain face region images containing the face of the operation object. For any one of the face region images, the following steps are performed to obtain the state category of the operation object in each face region image: a connecting line between two specific pixel points in the face region image is determined; the included angle between the connecting line and a preset standard horizontal connecting line is calculated to obtain the deflection angle of the face region image; the face region image is angle-corrected based on the deflection angle; and the facial state category of the operation object in the angle-corrected face region image is identified. Whether the operation object in the video to be recognized is in a fatigue state is then determined according to the facial state category of the operation object in each face region image. In this way, the deflection angle of a face region image is determined from two specific pixel points in the image and the image is angle-corrected based on that angle, so that face key point detection performed on the corrected image is more precise, the key points can be located accurately even at a large angle, the face key point images of the operation object can be obtained accurately, and the accuracy of state recognition of the operation object is improved.
Based on the foregoing embodiment, referring to fig. 1, a flowchart of an operation object state identification method in an embodiment of the present application is specifically included:
step 100: and respectively carrying out face detection on each image to be recognized contained in the video to be recognized collected by the target camera to obtain each face region image containing the face of the operation object.
Wherein the target camera represents a camera disposed on a pillar between a front windshield and a front door of the vehicle.
In the embodiment of the application, the A-pillar camera of a Driver Monitoring System (DMS) collects the video to be recognized of the operation object; the video contains consecutive images to be recognized at multiple time points and is transmitted to a background server. After receiving the video, the background server performs face detection on each image to be recognized contained in the video collected by the target camera, determines the face detection box containing the face of the operation object in each image, and crops the face region image containing the face of the operation object from the image based on the determined face detection box. The coordinates of the preliminarily located left-eye key point and right-eye key point of the operation object can also be obtained at this stage.
The operation object in the embodiment of the present application may be, for example, a driver, and when the operation object is a driver, each to-be-recognized image includes one face area image.
Specifically, when step 100 is executed, the method specifically includes:
s1: and acquiring a video to be identified, which is acquired by the target camera.
In the embodiment of the application, the target camera captures the video to be recognized of the operation object; the video comprises images to be recognized at a plurality of consecutive time points. The captured video is transmitted to the background server, which thereby obtains the video to be recognized transmitted by the target camera.
S2: respectively aiming at any image to be recognized in the images to be recognized contained in the video to be recognized, carrying out face detection on the image to be recognized based on a trained face detection model and taking the image to be recognized as an input parameter, determining the position information of the face of an operation object on the image to be recognized, and obtaining a face region image containing the face of the operation object based on the position information.
The face detection model is obtained by performing iterative training according to a face region image sample set, and the face region image sample set comprises a plurality of face region image samples.
In the embodiment of the application, any one of the images to be recognized contained in the video to be recognized is input into the trained face detection model, and the position of the face of the operation object on that image is determined to obtain its position information. A face detection box containing the face of the operation object is obtained based on the determined position information, and the face region image containing the face of the operation object is then cropped from the image.
The face detection algorithm in the face detection model is based on RetinaFace.
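As an illustration of this per-frame detection and cropping step, the following is a minimal sketch in Python. It assumes a generic detector callable returning bounding boxes in (x, y, w, h) form; `detect_faces` is a hypothetical stand-in for the trained RetinaFace-based model, not code from the patent.

```python
import cv2  # OpenCV, assumed available for video decoding

def crop_face_regions(video_path, detect_faces):
    """Decode the video to be recognized frame by frame and crop the face
    region image from each frame using the supplied detector.

    `detect_faces` is a hypothetical callable standing in for the trained
    face detection model; it is assumed to take a BGR frame and return a
    list of (x, y, w, h) face detection boxes.
    """
    face_regions = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y, w, h) in detect_faces(frame):
            # Crop the face region image based on the detected position information.
            face_regions.append(frame[y:y + h, x:x + w])
    cap.release()
    return face_regions
```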
The face region image sample set contains a plurality of face region image samples, which are collected in actual scenes by the DMS camera. The face detection model is obtained by iterative training on this sample set, and the trained model is obtained when its loss function is minimized. The loss function of the face detection model adopts the CIoU loss, which can be expressed as:

L_CIoU = 1 - IoU + ρ²(b, b^gt)/c² + α·μ

where b represents the center point of the predicted face detection box, b^gt represents the center point of the real face detection box, ρ represents the Euclidean distance between the two center points, and c represents the diagonal length of the smallest enclosing region that contains both the predicted and the real face detection box.

μ measures the difference between the aspect ratio of the predicted face detection box B and that of the real face detection box (ground truth) G:

μ = (4/π²)·(arctan(w^gt/h^gt) - arctan(w/h))²

where w and h are the width and height of the predicted box and w^gt and h^gt those of the real box. α is a weight coefficient, which can be specifically expressed as:

α = μ / ((1 - IoU) + μ)
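For reference, the CIoU loss written out above can be computed for a single box pair as in the sketch below. This is a plain-Python rendering of the standard CIoU formulation using the notation of this section (IoU, ρ, c, μ, α); it is an illustrative implementation, not the patentee's training code, and box coordinates are assumed to be (x1, y1, x2, y2).

```python
import math

def ciou_loss(pred, gt):
    """CIoU loss for one box pair; boxes are (x1, y1, x2, y2)."""
    # Intersection over union
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter + 1e-9)

    # rho^2: squared distance between box centres; c^2: squared diagonal of
    # the smallest enclosing box
    cx_p, cy_p = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cx_g, cy_g = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (cx_p - cx_g) ** 2 + (cy_p - cy_g) ** 2
    ex1, ey1 = min(pred[0], gt[0]), min(pred[1], gt[1])
    ex2, ey2 = max(pred[2], gt[2]), max(pred[3], gt[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9

    # mu: aspect-ratio consistency term; alpha: its weight coefficient
    w_p, h_p = pred[2] - pred[0], pred[3] - pred[1]
    w_g, h_g = gt[2] - gt[0], gt[3] - gt[1]
    mu = (4 / math.pi ** 2) * (math.atan(w_g / h_g) - math.atan(w_p / h_p)) ** 2
    alpha = mu / ((1 - iou) + mu + 1e-9)

    return 1 - iou + rho2 / c2 + alpha * mu
```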
step 110: respectively executing the following steps aiming at any one face area image in each face area image to obtain the state class of an operation object in each face area image: determining a connecting line between two specific pixel points in the face region image, and calculating an included angle between the connecting line and a preset standard horizontal connecting line to obtain a deflection angle of the face region image; based on the deflection angle, carrying out angle correction on the face region image to obtain a face region image after angle correction; and identifying the face state category of the operation object in the face region image after angle correction.
In the embodiment of the application, when the operation object is the driver, a DMS camera placed directly in front of the driver would block the driver's line of sight, and a DMS camera placed on the steering wheel would be blocked by the driver, so the video to be recognized cannot be acquired well in either position. The DMS camera is therefore often mounted on the left A-pillar of the vehicle, with the result that the face angle in the acquired video of the driver is often very large and the face key points cannot be located precisely. According to actual measurements, when the DMS camera is installed on the A-pillar and the driver looks straight ahead, the face is turned about 20° to the right relative to the A-pillar camera. Moreover, the driver does not sit upright all the time, so the acquired face may also be rotated by a certain angle, which can reach 20° to 30°.
Therefore, to solve this problem, in the embodiment of the present application the face region image is angle-corrected. Specifically, for each face region image, a connecting line between two specific pixel points in the image is determined, the deflection angle of the image is determined by calculating the included angle between that connecting line and a preset standard horizontal connecting line, and the image is angle-corrected according to the determined deflection angle, converting the deflected face region image into an angle-corrected face region image. The facial state category of the operation object is then identified in the angle-corrected face region image.
In one possible implementation provided in the embodiment of the present application, the deflection angle of a face region image is determined from the left-eye center point and the right-eye center point in the image. Determining the deflection angle of any one face region image specifically includes:
s1: and determining a binocular connecting line between the center point of the left eye and the center point of the right eye in the face area image.
In the embodiment of the application, after any one face region image is obtained, a two-dimensional coordinate system is established with the lower left corner of the face region image as the origin. The coordinates of the left-eye center point and of the right-eye center point of the face region image are then determined, and the binocular connecting line between the two center points is determined.
It should be noted that, when any one face region image is obtained, coordinates corresponding to the preliminarily detected face key points, that is, the left-eye center point coordinate and the right-eye center point coordinate, are also obtained at the same time.
S2: and calculating an included angle between the binocular connecting line and a preset horizontal binocular connecting line, and taking the included angle as the deflection angle of the face region image.
In the embodiment of the application, after a connecting line between the left eye central point and the right eye central point is determined, an included angle between a binocular connecting line and a preset horizontal binocular connecting line is calculated, and the determined included angle is used as a deflection angle of the face region image.
For example, after connecting the left-eye center point and the right-eye center point, the connecting line between the two eye centers is obtained, and the included angle between this line and the line through the two eye centers in the horizontal state is determined; this included angle is the deflection angle of the face region image. In a specific implementation, it can be computed from the left-eye center coordinate (Lx, Ly) and the right-eye center coordinate (Rx, Ry), and the deflection angle β can be expressed as:
β = arctan((Ly - Ry) / (Lx - Rx))
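A one-line computation suffices for this formula; the sketch below assumes the eye centres are given as (x, y) pixel coordinates in the face region image.

```python
import math

def deflection_angle(left_eye, right_eye):
    """Deflection angle beta of the face region image, following
    beta = arctan((Ly - Ry) / (Lx - Rx)); eye centres are (x, y) pixels."""
    lx, ly = left_eye
    rx, ry = right_eye
    return math.degrees(math.atan((ly - ry) / (lx - rx)))

# Example: eye centres at (180, 88) and (120, 70) give roughly 16.7 degrees.
print(deflection_angle((180, 88), (120, 70)))
```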
After the deflection angle is obtained, the face region image can be angle-corrected according to it. The manner of correcting the face region image based on the deflection angle in the embodiment of the present application is elaborated below, and specifically includes:
s1: and multiplying the coordinates corresponding to each pixel point in the face region image by the affine transformation matrix respectively by taking the deflection angle as a parameter of a preset affine transformation matrix to obtain the corrected coordinates of each pixel point.
In the embodiment of the application, a preset affine transformation matrix is obtained, the deflection angle obtained through calculation is used as a parameter of the affine transformation matrix, and then the coordinates corresponding to each pixel point in the face region image are multiplied by the affine transformation matrix with the deflection angle as the parameter, so that the coordinates of each pixel point after correction are obtained.
The following describes in detail the steps of obtaining the preset affine transformation matrix in the embodiment of the present application.
Assume first that the rotation is about the origin of the coordinate axes; see FIG. 2, which is a schematic diagram of a point rotating from (A, B) to (A1, B1) by the angle β in the embodiment of the present application.
Writing the point in polar form, with r the distance from the origin and θ the initial angle, gives:
formula A: A = r·cos θ, B = r·sin θ
formula B: A1 = r·cos(θ + β), B1 = r·sin(θ + β)
formula C: A² + B² = A1² + B1² = r²
From the trigonometric angle-sum identities, formula D can be obtained, which can be specifically expressed as:
formula D: cos(θ + β) = cos θ·cos β - sin θ·sin β and sin(θ + β) = sin θ·cos β + cos θ·sin β
Substituting formula D into formula B and then using formula A yields:
formula E: A1 = A·cos β - B·sin β
formula F: B1 = A·sin β + B·cos β
From formula E and formula F, written in matrix form:
(A1, B1)ᵀ = [cos β, -sin β; sin β, cos β] · (A, B)ᵀ
so the affine transformation matrix is the rotation matrix [cos β, -sin β; sin β, cos β].
The above considers only the ideal case of rotation about the origin. Generalizing it to an arbitrary rotation center (A0, B0) gives:
A1 = (A - A0)·cos β - (B - B0)·sin β + A0
B1 = (A - A0)·sin β + (B - B0)·cos β + B0
s2: and obtaining the face region image after angle correction based on the corrected coordinates of the pixel points.
In the embodiment of the application, after the corrected pixel points are determined, the corrected pixel points can form the face region image, so that the face region image after angle correction is obtained.
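In practice the rotation matrix derived above does not need to be assembled by hand; OpenCV's `getRotationMatrix2D` builds the same 2x3 affine matrix for an arbitrary rotation centre, and `warpAffine` applies it to every pixel. The sketch below is one possible way to implement the correction; whether the sign of the deflection angle needs to be flipped depends on how it was measured.

```python
import cv2

def correct_angle(face_img, beta_degrees, center=None):
    """Rotate the face region image by its deflection angle so that the
    binocular connecting line becomes horizontal."""
    h, w = face_img.shape[:2]
    if center is None:
        # Default to the image centre; the eye midpoint could equally be
        # used as the rotation centre (A0, B0) from the derivation above.
        center = (w / 2.0, h / 2.0)
    # Same affine matrix as derived above, extended with the translation
    # terms for an arbitrary rotation centre.
    m = cv2.getRotationMatrix2D(center, beta_degrees, 1.0)
    return cv2.warpAffine(face_img, m, (w, h))
```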
After obtaining the angle-corrected face region image, identifying the face state category of the operation object in the angle-corrected face region image, which specifically includes:
s1: based on the trained face key point detection model, the face region image after angle correction is taken as an input parameter, face key point detection is carried out on the face region image after angle correction, and each face key point image of the face region image after angle correction is obtained.
In the embodiment of the application, after the face region image after angle correction is obtained, the face region image after angle correction is input into a trained face key point detection model, and face key point detection is performed on the face region image after angle correction to obtain each face key point of the corrected face region image and an image containing each face key point.
S2: and identifying the facial state category of the operation object according to the specific human face key point image in the human face key point images.
In the embodiment of the application, the face state category of the operation object is identified based on the specific face key point image in each face key point image.
For example, the face state category of the operation subject may be identified from the left-eye key point image and the right-eye key point image.
For another example, the face state category of the operation subject may be identified from the mouth key point image.
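Before either of these recognition steps, the specific key point images have to be cut out of the angle-corrected face region image. A minimal sketch follows, assuming the key point detection model returns each group of key points (left eye, right eye, mouth) as (x, y) coordinates; the helper name and the margin are illustrative, not taken from the patent.

```python
import numpy as np

def crop_keypoint_image(face_img, points, margin=5):
    """Crop the sub-image enclosing one group of face key points
    (e.g. the left-eye, right-eye or mouth points) from the corrected
    face region image. `points` is an (N, 2) array of (x, y) pixels."""
    pts = np.asarray(points)
    x1, y1 = pts.min(axis=0) - margin
    x2, y2 = pts.max(axis=0) + margin
    h, w = face_img.shape[:2]
    x1, y1 = max(int(x1), 0), max(int(y1), 0)
    x2, y2 = min(int(x2), w), min(int(y2), h)
    return face_img[y1:y2, x1:x2]
```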
The following describes in detail the steps of identifying the facial state category of the operation object based on the left-eye and right-eye key point images, and based on the mouth key point image, respectively.
If the specific face key point images are the left-eye key point image and the right-eye key point image, the method specifically includes:
a1: and based on the trained first recognition model, determining the left eye state category of the left eye key point image and determining the right eye state category of the right eye key point image by taking the left eye key point image and the right eye key point image as input parameters.
In this embodiment, the first recognition model may be, for example, a binary classification model. Based on the trained first recognition model, the left-eye key point image is taken as an input parameter to determine whether the left-eye state category is open or closed, and the right-eye key point image is taken as an input parameter to determine whether the right-eye state category is open or closed.
The left-eye state category may be, for example, open or closed, and the right-eye state category may likewise be open or closed; the left-eye key point image represents an image containing the left eye of the operation object, and the right-eye key point image represents an image containing the right eye of the operation object.
A2: and determining the face state category of the operation object according to the left eye state category and the right eye state category.
In the embodiment of the application, the face state category of the operation object can be determined according to the left eye state category and the right eye state category.
For example, if both the left-eye state category and the right-eye state category in the angle-corrected face region image are closed, the facial state category of the operation object in that image is determined to be the fatigue state category.
For another example, if the left-eye state category is closed and the right-eye state category is open in the angle-corrected face region image, the facial state category of the operation object in that image is determined to be the non-fatigue state category.
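The decision rule illustrated by these two examples can be stated compactly. The sketch below assumes the first recognition model outputs the strings "open" or "closed" for each eye; this interface is an assumption for illustration, not one specified in the patent.

```python
def face_state_from_eyes(left_state, right_state):
    """Combine the per-eye categories into a facial state category:
    only when both eyes are classified as closed is the frame counted
    as a fatigue-state frame."""
    if left_state == "closed" and right_state == "closed":
        return "fatigue"
    return "non-fatigue"
```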
If the specific face key point image is a mouth key point image, the method specifically comprises the following steps:
a1: and determining the mouth state category of the mouth key point image by taking the mouth key point image as an input parameter based on the trained second recognition model.
In the embodiment of the application, the mouth key point image is input into the trained second recognition model, and the mouth state category of the mouth key point image is determined.
The mouth state categories may be, for example, yawning and not yawning.
Further, in the embodiment of the application, the mouth state category of the mouth key point image may be further determined by calculating the height of the circumscribed rectangular frame of the mouth key point image.
For example, a driver opens the mouth wide when yawning and keeps it closed otherwise, so the height of the circumscribed rectangular box of the mouth key point image is large when the driver is yawning and small when the driver is not. A height threshold can therefore be set: when the height of the circumscribed rectangular box is greater than the preset threshold, the mouth state category of the mouth key point image is determined to be yawning, and when it is less than or equal to the threshold, the mouth state category is determined to be not yawning.
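A sketch of this threshold rule on the circumscribed rectangle of the mouth key points follows; the 30-pixel threshold is an illustrative value, not one given in the patent.

```python
import numpy as np

def mouth_state_from_keypoints(mouth_points, height_threshold=30):
    """Classify the mouth state from the height of the circumscribed
    rectangle of the mouth key points: a tall rectangle suggests a
    wide-open mouth, i.e. yawning."""
    pts = np.asarray(mouth_points)
    rect_height = pts[:, 1].max() - pts[:, 1].min()
    return "yawning" if rect_height > height_threshold else "not yawning"
```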
A2: according to the mouth state category, the face state category of the operation object is determined.
For example, if the mouth state category of the angle-corrected face region image is yawning, the facial state category of the operation object in that image is determined to be the fatigue state category.
For another example, if the mouth state category of the angle-corrected face region image is not yawning, the facial state category of the operation object in that image is determined to be the non-fatigue state category.
Step 120: and determining whether the operation object in the video to be identified is in a fatigue state or not according to the face state category of the operation object in each face region image.
In this embodiment of the application, when the step 120 is executed, the method specifically includes:
and if the facial state type of the operation object in the N continuous personal facial area images is determined to be the fatigue state type, determining that the operation object in the video to be identified is in the fatigue state.
Wherein, N is greater than or equal to a preset threshold value and is a positive integer.
In the embodiment of the application, if the facial state category of the operation object is determined to be the fatigue state category in a preset number of consecutive face region images, it is determined that the operation object in the video to be recognized is in a fatigue state.
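The consecutive-frame rule can be implemented as a simple running counter over the per-frame facial state categories; the threshold of 10 frames in the sketch below is illustrative, not a value from the patent.

```python
def is_fatigued(frame_states, n_threshold=10):
    """Return True if at least n_threshold consecutive face region images
    have the fatigue state category."""
    run = 0
    for state in frame_states:
        run = run + 1 if state == "fatigue" else 0
        if run >= n_threshold:
            return True
    return False
```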
Furthermore, after the operation object is determined to be in the fatigue state, an alarm instruction is generated and sent to the vehicle-mounted control device, which issues an in-vehicle voice warning according to the alarm instruction, prompting the operation object to drive safely and take a rest. If the background server continues to determine that the operation object is in the fatigue state, it considers accessing the in-vehicle camera and urging the driver to find a safe place to rest. This alarm mechanism, combining voice warnings with background intervention, can greatly reduce accidents caused by fatigue of the operation object.
In the embodiments of the application, face detection is performed on each image to be recognized contained in the video to be recognized collected by the target camera, to obtain face region images containing the face of the operation object, and for any one of the face region images the following steps are performed to obtain the state category of the operation object in each image: a connecting line between two specific pixel points in the face region image is determined; the included angle between the connecting line and a preset standard horizontal connecting line is calculated to obtain the deflection angle of the face region image; the face region image is angle-corrected based on the deflection angle; and the facial state category of the operation object in the angle-corrected face region image is identified. Whether the operation object in the video to be recognized is in a fatigue state is then determined according to the facial state category of the operation object in each face region image. In this way, the deflection angle of the face region image is obtained from the left-eye center point and the right-eye center point, and the face angle is corrected based on that deflection angle, which markedly improves the detection precision of the face key points after correction; the left-eye, right-eye and mouth key point images can be obtained more accurately, and the accuracy of state recognition of the operation object at a large angle is improved.
Based on the foregoing embodiment, referring to fig. 3, another flowchart of an operation object state identification method in the embodiment of the present application is specifically included:
step 300: the DMS camera acquires a video to be recognized, which contains the face of a driver.
Step 301: and respectively carrying out face detection on each image to be recognized contained in the video to be recognized collected by the DMS camera.
Step 302: and judging whether the face of the driver is detected, if so, executing step 303, and if not, executing step 300 again.
Step 303: and carrying out angle correction on the detected face region image containing the face to obtain the face region image after angle correction.
In the embodiment of the application, a connection line between two specific pixel points in a face region image is determined, an included angle between the connection line and a preset standard horizontal connection line is calculated to obtain a deflection angle of the face region image, and the face region image is subjected to angle correction based on the deflection angle to obtain the face region image after angle correction.
Step 304: and detecting key points of the human face.
In the embodiment of the application, based on a trained face key point detection model, a face region image after angle correction is taken as an input parameter, face key point detection is carried out on the face region image after angle correction, and each face key point image of the face region image after angle correction is obtained.
Step 305: and acquiring a left eye key point image, a right eye key point image and a mouth key point image in each face key point image.
Step 306: and identifying the facial state category of the driver according to the left eye key point image, the right eye key point image and the mouth key point image.
Step 307: and judging whether the driver in the video to be identified is in a fatigue state, if so, executing the step 308, and if not, executing the step 311.
In the embodiment of the application, whether the driver in the video to be identified is in a fatigue state or not is judged according to the face state type of the driver in each face area image.
Step 308: and voice alarm reminding.
Step 309: the TSP background continuously receives the alarm information.
Step 310: and calling the camera in the vehicle and carrying out voice conversation with the driver.
In this embodiment of the application, the in-vehicle camera is accessed and a voice dialogue is carried out to urge the driver to find a safe place to rest.
Step 311: and (6) ending.
In the embodiment of the application, angle correction of the face region image improves the positioning precision of the face key points, so accurate left-eye, right-eye and mouth positions can be obtained, which facilitates real-time detection of the driver's fatigue state and reduces accidents caused by driver fatigue.
Based on the same inventive concept, the embodiment of the present application further provides an operation object state identification device, where the operation object state identification device may be a hardware structure, a software module, or a hardware structure plus a software module. Based on the foregoing embodiments, referring to fig. 4, a schematic structural diagram of an operation object state identification apparatus in an embodiment of the present application is shown, which specifically includes:
the detection module 400 is configured to perform face detection on each image to be recognized included in a video to be recognized, which is acquired by a target camera, respectively, to obtain each face region image including a face of an operation object, where the target camera represents a camera provided on a pillar between a front windshield and a front door of a vehicle;
a processing module 410, configured to perform the following steps for any one of the face region images, respectively, to obtain a state category of an operation object in each face region image: determining a connecting line between two specific pixel points in the face region image, and calculating an included angle between the connecting line and a preset standard horizontal connecting line to obtain a deflection angle of the face region image; based on the deflection angle, carrying out angle correction on the face region image to obtain a face region image after angle correction; identifying the face state category of an operation object in the face region image after angle correction;
a determining module 420, configured to determine whether an operation object in the video to be recognized is in a fatigue state according to the face state category of the operation object in each face region image.
Optionally, when obtaining each face region image including a face of an operation object, the detection module 400 is specifically configured to:
acquiring a video to be identified, which is acquired by a target camera;
respectively carrying out face detection on any image to be recognized in the images to be recognized contained in the video to be recognized based on a trained face detection model and taking the image to be recognized as an input parameter, determining the position information of the face of the operation object on the image to be recognized, and obtaining a face region image containing the face of the operation object based on the position information, wherein the face detection model is obtained by carrying out iterative training according to a face region image sample set, and the face region image sample set contains a plurality of face region image samples.
Optionally, if the two specific pixel points are the left-eye center point and the right-eye center point, respectively, determining a connection line between the two specific pixel points in the face region image, and obtaining a deflection angle of the face region image by calculating an included angle between the connection line and a preset standard horizontal connection line, the processing module 410 is specifically configured to:
determining a binocular connecting line between a left eye central point and a right eye central point in the face region image;
and calculating an included angle between the binocular connecting line and a preset horizontal binocular connecting line, and taking the included angle as the deflection angle of the face region image.
Optionally, based on the deflection angle, when performing angle correction on the face region image to obtain an angle-corrected face region image, the processing module 410 is specifically configured to:
with the deflection angle as a parameter of a preset affine transformation matrix, respectively multiplying coordinates corresponding to each pixel point in the face region image by the affine transformation matrix to obtain corrected coordinates of each pixel point;
and obtaining the face region image after angle correction based on the corrected coordinates of the pixel points.
Optionally, when the face state category of the operation object in the face region image after angle correction is identified, the processing module 410 is specifically configured to:
based on a trained face key point detection model, taking a face region image after angle correction as an input parameter, and carrying out face key point detection on the face region image after angle correction to obtain each face key point image of the face region image after angle correction;
and identifying the facial state category of the operation object according to the specific human face key point image in the human face key point images.
Optionally, if the specific face key point images are the left-eye key point image and the right-eye key point image, when the face state category of the operation object is identified according to the specific face key point image in each of the face key point images, the processing module 410 is specifically configured to:
determining a left eye state category of the left eye key point image and a right eye state category of the right eye key point image by taking the left eye key point image and the right eye key point image as input parameters based on a trained first recognition model;
and determining the face state category of the operation object according to the left eye state category and the right eye state category.
Optionally, if the specific face key point image is a mouth key point image, the facial state category of the operation object is identified according to the specific face key point image in each face key point image, and the processing module 410 is specifically configured to:
determining a mouth state category of the mouth key point image by taking the mouth key point image as an input parameter based on the trained second recognition model;
and determining the facial state category of the operation object according to the mouth state category.
Optionally, if the facial state categories are a fatigue state category and a non-fatigue state category, the determining module 420 is specifically configured to:
and if the facial state category of the operation object in N consecutive face region images is determined to be the fatigue state category, determining that the operation object in the video to be identified is in the fatigue state, wherein N is a positive integer greater than or equal to a preset threshold value.
Based on the above embodiments, fig. 5 is a schematic structural diagram of an electronic device in an embodiment of the present application.
An embodiment of the present application provides an electronic device, which may include a processor 510 (CPU), a memory 520, an input device 530, an output device 540, and the like, wherein the input device 530 may include a keyboard, a mouse, a touch screen, and the like, and the output device 540 may include a Display device, such as a Liquid Crystal Display (LCD), a Cathode Ray Tube (CRT), and the like.
The memory 520 may include a read-only memory (ROM) and a random access memory (RAM), and provides the processor 510 with the program instructions and data stored therein. In the embodiment of the present application, the memory 520 may be used to store a program of any one of the methods for identifying the state of an operation object in the embodiments of the present application.
The processor 510 is configured to call the program instructions stored in the memory 520 and, according to the obtained program instructions, execute any one of the methods for identifying the state of the operation object.
Based on the above embodiments, in the embodiments of the present application, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the method for identifying a state of an operation object in any of the above method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An operation object state identification method is characterized by comprising the following steps:
respectively carrying out face detection on each image to be recognized contained in a video to be recognized collected by a target camera to obtain each face region image containing the face of an operation object, wherein the target camera represents a camera arranged on a column between a front windshield and a front door of a vehicle;
respectively executing the following steps aiming at any one face area image in all the face area images to obtain the state class of the operation object in each face area image: determining a connecting line between two specific pixel points in the face region image, and calculating an included angle between the connecting line and a preset standard horizontal connecting line to obtain a deflection angle of the face region image; based on the deflection angle, carrying out angle correction on the face region image to obtain a face region image after angle correction; identifying the face state category of an operation object in the face region image after angle correction;
determining whether the operation object in the video to be identified is in a fatigue state or not according to the face state category of the operation object in each face region image;
based on the deflection angle, performing angle correction on the face region image to obtain an angle-corrected face region image, which specifically includes:
with the deflection angle as a parameter of a preset affine transformation matrix, respectively multiplying coordinates corresponding to each pixel point in the face region image by the affine transformation matrix to obtain corrected coordinates of each pixel point;
and obtaining the face region image after angle correction based on the corrected coordinates of the pixel points.
2. The method according to claim 1, wherein each face region image containing the face of the operation object is obtained according to:
acquiring a video to be identified, which is acquired by a target camera;
for any image to be recognized among the images to be recognized contained in the video to be recognized, carrying out face detection on the image to be recognized by taking it as an input parameter of a trained face detection model, determining position information of the face of the operation object in the image to be recognized, and obtaining a face region image containing the face of the operation object based on the position information, wherein the face detection model is obtained by iterative training on a face region image sample set, and the face region image sample set contains a plurality of face region image samples.
3. The method according to claim 1, wherein if the two specific pixels are the left-eye center point and the right-eye center point, respectively, determining a connection line between the two specific pixels in the face region image, and obtaining the deflection angle of the face region image by calculating an included angle between the connection line and a preset standard horizontal connection line, specifically comprising:
determining a binocular connecting line between a left eye central point and a right eye central point in the face region image;
and calculating an included angle between the binocular connecting line and a preset horizontal binocular connecting line, and taking the included angle as the deflection angle of the face region image.
4. The method of claim 1, wherein identifying the facial state class of the operation object in the angle-corrected face region image specifically comprises:
based on a trained face key point detection model, taking a face region image after angle correction as an input parameter, and carrying out face key point detection on the face region image after angle correction to obtain each face key point image of the face region image after angle correction;
and identifying the facial state category of the operation object according to a specific face key point image among the face key point images.
5. The method according to claim 4, wherein if the specific face keypoint images are a left-eye keypoint image and a right-eye keypoint image, identifying the face state category of the operation object according to the specific face keypoint image in each of the face keypoint images, specifically comprises:
determining a left eye state category of the left eye key point image and a right eye state category of the right eye key point image by taking the left eye key point image and the right eye key point image as input parameters based on a trained first recognition model;
and determining the face state category of the operation object according to the left eye state category and the right eye state category.
6. The method according to claim 4, wherein if the specific face key point image is a mouth key point image, identifying the face state category of the operation object according to the specific face key point image in each face key point image specifically comprises:
determining a mouth state category of the mouth key point image by taking the mouth key point image as an input parameter based on the trained second recognition model;
and determining the facial state category of the operation object according to the mouth state category.
7. The method according to any one of claims 4 to 6, wherein if the facial state categories are a fatigue state category and a non-fatigue state category, determining whether an operation object in the video to be recognized is in a fatigue state according to the facial state categories of the operation object in each face region image, specifically includes:
and if the facial state categories of the operation object in N consecutive face region images are all determined to be the fatigue state category, determining that the operation object in the video to be identified is in the fatigue state, wherein N is a positive integer greater than or equal to a preset threshold value.
8. An operation object state recognition apparatus, comprising:
the system comprises a detection module, a processing module and a display module, wherein the detection module is used for respectively carrying out face detection on each image to be recognized contained in a video to be recognized collected by a target camera to obtain each face area image containing the face of an operation object, and the target camera represents a camera arranged on a column between a front windshield and a front door of a vehicle;
a processing module, configured to perform the following steps for any one of the face region images, respectively, to obtain a state category of an operation object in each of the face region images: determining a connecting line between two specific pixel points in the face region image, and calculating an included angle between the connecting line and a preset standard horizontal connecting line to obtain a deflection angle of the face region image; based on the deflection angle, carrying out angle correction on the face region image to obtain a face region image after angle correction; identifying the face state category of an operation object in the face region image after angle correction;
the determining module is used for determining whether the operation object in the video to be identified is in a fatigue state or not according to the face state category of the operation object in each face region image;
based on the deflection angle, performing angle correction on the face region image, and when obtaining the face region image after angle correction, the processing module is specifically configured to:
with the deflection angle as a parameter of a preset affine transformation matrix, respectively multiplying coordinates corresponding to each pixel point in the face region image by the affine transformation matrix to obtain corrected coordinates of each pixel point;
and obtaining the face region image after angle correction based on the corrected coordinates of the pixel points.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any of claims 1-7 are implemented when the program is executed by the processor.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 7.
CN202011090028.0A 2020-10-13 2020-10-13 Method and device for identifying state of operation object Active CN112232175B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011090028.0A CN112232175B (en) 2020-10-13 2020-10-13 Method and device for identifying state of operation object

Publications (2)

Publication Number Publication Date
CN112232175A CN112232175A (en) 2021-01-15
CN112232175B (en) 2022-06-07

Family

ID=74112454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011090028.0A Active CN112232175B (en) 2020-10-13 2020-10-13 Method and device for identifying state of operation object

Country Status (1)

Country Link
CN (1) CN112232175B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112528978B (en) * 2021-02-10 2021-05-14 腾讯科技(深圳)有限公司 Face key point detection method and device, electronic equipment and storage medium
CN113837020B (en) * 2021-08-31 2024-02-02 北京新氧科技有限公司 Cosmetic progress detection method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109800675A (en) * 2018-12-29 2019-05-24 上海依图网络科技有限公司 A kind of method and device of the identification image of determining face object
CN110826521A (en) * 2019-11-15 2020-02-21 爱驰汽车有限公司 Driver fatigue state recognition method, system, electronic device, and storage medium
CN111488855A (en) * 2020-04-24 2020-08-04 上海眼控科技股份有限公司 Fatigue driving detection method, device, computer equipment and storage medium
CN111626240A (en) * 2020-05-29 2020-09-04 歌尔科技有限公司 Face image recognition method, device and equipment and readable storage medium
CN111645695A (en) * 2020-06-28 2020-09-11 北京百度网讯科技有限公司 Fatigue driving detection method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112232175A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
US11182592B2 (en) Target object recognition method and apparatus, storage medium, and electronic device
CN108725440B (en) Forward collision control method and apparatus, electronic device, program, and medium
CN112232175B (en) Method and device for identifying state of operation object
CN102783144B (en) The periphery monitoring apparatus of vehicle
US20210133468A1 (en) Action Recognition Method, Electronic Device, and Storage Medium
CN110826370B (en) Method and device for identifying identity of person in vehicle, vehicle and storage medium
EP2698762B1 (en) Eyelid-detection device, eyelid-detection method, and program
CN107392139B (en) Lane line detection method based on Hough transform and terminal equipment
JP5790762B2 (en) 瞼 Detection device
EP3611652A1 (en) Pedestrian identification device and method, and driving assistance device
CN112349144A (en) Monocular vision-based vehicle collision early warning method and system
CN112598922B (en) Parking space detection method, device, equipment and storage medium
CN108615014B (en) Eye state detection method, device, equipment and medium
CN112987002B (en) Obstacle risk identification method, system and device
CN110738078A (en) face recognition method and terminal equipment
CN111798521A (en) Calibration method, calibration device, storage medium and electronic equipment
JPWO2005024754A1 (en) In-vehicle image processing device
CN115082901A (en) Vehicle import detection method, device and equipment based on algorithm fusion
WO2023241358A1 (en) Fatigue driving determination method and apparatus, and electronic device
CN110837810A (en) Face attention judging method, device, equipment and storage medium
CN108399357B (en) Face positioning method and device
CN113361441B (en) Sight line area estimation method and system based on head posture and space attention
CN113807119B (en) Personnel gazing position detection method and device
JPH11259639A (en) Travel path recognizing device
JP5430213B2 (en) Vehicle periphery monitoring device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant