CN109241842B - Fatigue driving detection method, device, computer equipment and storage medium - Google Patents

Fatigue driving detection method, device, computer equipment and storage medium Download PDF

Info

Publication number
CN109241842B
CN109241842B CN201810871611.1A
Authority
CN
China
Prior art keywords
face
picture
facial
driver
facial feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810871611.1A
Other languages
Chinese (zh)
Other versions
CN109241842A (en)
Inventor
刘胜坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201810871611.1A priority Critical patent/CN109241842B/en
Priority to PCT/CN2018/106394 priority patent/WO2020024395A1/en
Publication of CN109241842A publication Critical patent/CN109241842A/en
Application granted granted Critical
Publication of CN109241842B publication Critical patent/CN109241842B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fatigue driving detection method, a device, computer equipment, and a storage medium. Facial feature pictures of a driver are collected in real time during driving, facial feature points are extracted from the pictures, and the positions of those feature points on the face are input into a pre-trained classifier to obtain the driver's facial feature actions. When the movement amplitude of a facial feature action exceeds a preset first threshold and the duration of the action exceeds a preset time, the driver is determined to have entered a fatigue state. When, within a preset time period, the total duration of the fatigue state exceeds a preset second threshold and the occurrence frequency of the fatigue state exceeds a preset third threshold, the driver is determined to be driving while fatigued.

Description

Fatigue driving detection method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of financial technology, and in particular to a fatigue driving detection method and device, a computer device, and a storage medium.
Background
Transportation today is highly developed, and travelers can choose among many modes such as air travel, high-speed rail, or the automobile; even so, driving a car remains the most common choice.
A driver who stayed up late the previous night, or who is weakened by a heavy cold or another cause of physical fatigue, may enter a fatigued driving state well before reaching the regulated continuous driving time. Existing approaches generally judge whether a driver is driving while fatigued only by whether the continuous driving time has been reached, so they are prone to misjudgment in such situations, creating a hidden danger to safe driving.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a fatigue driving detection method, device, computer device, and storage medium that can improve driving safety.
A fatigue driving detection method comprises the following steps:
collecting, in real time, a facial feature picture of a driver during driving, and extracting each facial feature point from the facial feature picture, wherein the facial feature picture is a picture of the facial features (the "five sense organs": eyebrows, eyes, nose, mouth, and so on) and the facial feature points are basic feature points on the outer contours of those features;
inputting the positions of the facial feature points on the face into a pre-trained classifier to obtain the facial feature actions of the driver, wherein a facial feature action is a reflex action produced by a facial feature on the driver's face, and includes the movement amplitude of that feature;
determining that the driver enters a fatigue state when the movement amplitude of a feature in the facial feature actions exceeds a preset first threshold and the recorded duration of the action exceeds a preset time;
and determining that the driver is driving while fatigued if, within a preset time period, the total duration of the driver's fatigue state exceeds a preset second threshold and the occurrence frequency of the fatigue state exceeds a preset third threshold.
A fatigue driving detection device, comprising:
an extraction module, configured to collect, in real time, a facial feature picture of a driver during driving and to extract each facial feature point from the facial feature picture, wherein the facial feature points are basic feature points on the outer contours of the facial features;
an input module, configured to input the positions of the facial feature points on the driver's face into a pre-trained classifier to obtain the facial feature actions of the driver, wherein a facial feature action is a reflex action produced by a facial feature on the driver's face and includes the movement amplitude of that feature;
a judging module, configured to determine that the driver enters a fatigue state when the movement amplitude of a feature in the facial feature actions exceeds a preset first threshold and the recorded duration of the action exceeds a preset time;
and a determining module, configured to determine that the driver is driving while fatigued when, within a preset time period, the total duration of the driver's fatigue state exceeds a preset second threshold and the occurrence frequency of the fatigue state exceeds a preset third threshold.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above-mentioned fatigue driving detection method when executing the computer program.
A computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described fatigue driving detection method.
According to the fatigue driving detection method, device, computer equipment, and storage medium, the facial feature picture is collected from the driver's face in real time during driving, so the extracted facial feature points describe the driver's face in real time, and the facial feature actions obtained by feeding the feature point positions into the pre-trained classifier are likewise real-time. These actions are reflex actions of the facial features on the driver's face, and both their movement amplitude and their measured duration track the driver in real time. When the movement amplitude of a feature in the facial feature actions exceeds a preset first threshold and the recorded duration of the action exceeds a preset time, the driver is determined to have entered a fatigue state; finally, when the total duration of the fatigue state within a preset time period exceeds a preset second threshold and the occurrence frequency of the fatigue state exceeds a preset third threshold, the driver is determined to be driving while fatigued. The method thus detects fatigue driving by judging whether the driver actually enters a fatigue state, rather than by judging whether the continuous driving time exceeds a prescribed fatigue driving limit.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic view of an application environment of a fatigue driving detection method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a fatigue driving detection method according to an embodiment of the invention;
FIG. 3 is a flow chart of training a face average model in a fatigue driving detection method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for obtaining facial features in a fatigue driving detection method according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for recognizing facial features by using a model in a fatigue driving detection method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a fatigue driving detection apparatus according to an embodiment of the invention;
FIG. 7 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The fatigue driving detection method provided by this application can be applied in the environment shown in FIG. 1, in which a computer device communicates with a server over a network. The server collects facial feature pictures of the driver in real time during driving and extracts each facial feature point from them, then inputs the positions of the feature points on the face into a pre-trained classifier to obtain the driver's facial feature actions. When the movement amplitude of a feature in the facial feature actions exceeds a preset first threshold and the recorded duration of the action exceeds a preset time, the server determines that the driver has entered a fatigue state; when, within a preset time period, the total duration of the fatigue state exceeds a preset second threshold and the occurrence frequency of the fatigue state exceeds a preset third threshold, the server determines that the driver is driving while fatigued. The computer device may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device. The server may be implemented as a stand-alone server or as a cluster of servers.
In an embodiment, as shown in FIG. 2, a fatigue driving detection method is provided. The method is applied in the financial technology industry and is illustrated using the server in FIG. 1 as an example; it includes the following steps:
S10: collecting, in real time, facial feature pictures of the driver during driving, and extracting each facial feature point from the facial feature pictures;
In this embodiment, in order to obtain each facial feature point of the insured driver during driving, a collection tool is required to capture facial feature pictures of the driver in real time, after which each facial feature point is extracted from the collected pictures.
It should be noted that the collection tool may be a digital camera or a video recorder, and the facial feature points are basic feature points on the outer contours of the facial features.
S20: inputting the positions of the facial feature points on the face into a pre-trained classifier to obtain the facial feature actions of the driver;
In this embodiment, to obtain the facial feature actions of the insured driver, the positions of the extracted facial feature points on the face are input into a pre-trained classifier.
It should be noted that a facial feature action is a reflex action produced by a facial feature on the driver's face, such as a blink. A facial feature action includes the movement amplitude of the feature, which is the displacement produced as the driver's feature passes from its normal state to its current state; the displacement has a longitudinal component and a lateral component. The classifier is trained as follows: preset facial feature points are first extracted from preset facial feature pictures, and the positions of those points on the face are then used as training samples. The classifier may be a regression classifier, a naive Bayes classifier, a neural network classifier, or the like; its concrete form can be chosen according to the actual application and is not limited here.
Because the classifier is trained on the positions of feature points extracted from preset facial feature pictures, it stores the pairing between input feature point positions on the face and output facial feature actions of the driver. The facial feature actions it produces are therefore accurate, which improves the accuracy of acquiring facial feature actions.
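To make this classifier stage concrete, the following is a minimal sketch of training a classifier that maps flattened landmark positions to facial feature actions. The use of scikit-learn, the LogisticRegression family, and the random placeholder data are assumptions for illustration only; the patent leaves the classifier open (regression, naive Bayes, neural network, etc.) and prescribes no implementation.

```python
# Minimal sketch: map facial landmark positions to facial feature actions
# (e.g., "blink" vs "normal"). scikit-learn and LogisticRegression are
# illustrative choices; the text allows other classifier families.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_LANDMARKS = 76  # the "second number" in the 76-point example below

def flatten_landmarks(points: np.ndarray) -> np.ndarray:
    """(76, 2) array of (x, y) landmark positions -> (152,) feature vector."""
    return points.reshape(-1)

# Hypothetical training set: landmark positions extracted from preset
# facial feature pictures, paired with labeled facial feature actions.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, N_LANDMARKS * 2))   # placeholder features
y_train = rng.integers(0, 2, size=200)              # 0 = normal, 1 = blink

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At detection time, landmark positions from the live facial feature
# picture are fed to the trained classifier to obtain the current action.
live_points = rng.normal(size=(N_LANDMARKS, 2))
action = clf.predict(flatten_landmarks(live_points)[None, :])[0]
```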
S30: determining that the driver enters a fatigue state when the movement amplitude of a feature in the facial feature actions exceeds a preset first threshold and the recorded duration of the action exceeds a preset time;
In this embodiment, the starting time point and the ending time point of the insured driver's facial feature action are recorded, and the starting time point is subtracted from the ending time point to obtain the duration of the action during driving. It is then judged whether the movement amplitude of the feature exceeds a preset first threshold and whether the recorded duration exceeds a preset time; when both conditions hold, the driver is determined to have entered a fatigue state. The preset first threshold may be 1 cm or 3 cm, and the preset time may be 3 seconds or 5 seconds; their concrete values can be set according to actual requirements and are not limited here.
When the movement amplitude of the feature does not exceed the preset first threshold and the recorded duration does not exceed the preset time, the driver is determined to be in an awake state.
The movement amplitude of a facial feature can be obtained by analyzing the facial muscle characteristics and characteristic functions of the insured driver during driving.
S40: determining that the driver is driving while fatigued when, within a preset time period, the total duration of the driver's fatigue state exceeds a preset second threshold and the occurrence frequency of the fatigue state exceeds a preset third threshold.
In this embodiment, after the driver is determined to have entered the fatigue state, it is judged whether the total duration of the fatigue state within the preset time period exceeds the preset second threshold and whether the occurrence frequency of the fatigue state exceeds the preset third threshold. When both conditions hold, the driver is determined to be driving while fatigued; when the total duration does not exceed the preset second threshold and the occurrence frequency does not exceed the preset third threshold, the driver is determined to be driving awake, that is, driving the vehicle in an awake state.
Further, the driver is also determined to be driving while fatigued when the total time from the start of driving to the present exceeds a preset fourth threshold and the interval in which the driver passes from the awake state to the fatigue state is less than a preset fifth threshold. When the driver is in the awake driving state, a warning message against fatigue driving is sent to the driver and the process returns to step S30, which helps keep the driver from slipping into fatigue driving.
The concrete values of the preset fourth threshold and the preset fifth threshold can be set according to the actual application and are not limited here.
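The two-stage decision of steps S30 and S40 reduces to a few threshold comparisons; the sketch below restates that logic in code. All names, the sliding-window bookkeeping, and the concrete threshold values are illustrative assumptions, not the patent's reference implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FatigueDetector:
    # Threshold values are examples only (the text mentions e.g. a first
    # threshold of 1 cm and a preset time of 3 seconds).
    first_threshold_cm: float = 1.0     # movement amplitude limit (S30)
    preset_time_s: float = 3.0          # action duration limit (S30)
    second_threshold_s: float = 10.0    # total fatigue duration in window (S40)
    third_threshold: int = 3            # fatigue occurrences in window (S40)
    window_s: float = 60.0              # assumed "preset time period"
    fatigue_episodes: list = field(default_factory=list)  # (start_time, duration)

    def enters_fatigue_state(self, amplitude_cm: float, duration_s: float) -> bool:
        # S30: amplitude AND duration must both exceed their thresholds.
        return (amplitude_cm > self.first_threshold_cm
                and duration_s > self.preset_time_s)

    def is_fatigue_driving(self, now_s: float) -> bool:
        # S40: within the preset time period, both the total fatigue
        # duration and the occurrence count must exceed their thresholds.
        recent = [(t, d) for t, d in self.fatigue_episodes
                  if now_s - t <= self.window_s]
        total = sum(d for _, d in recent)
        return total > self.second_threshold_s and len(recent) > self.third_threshold
```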
In the embodiment corresponding to FIG. 2, through steps S10 to S40 above, the facial feature picture is collected from the driver's face in real time during driving, so the extracted facial feature points are real-time outer-contour feature points of the facial features, and the facial feature actions obtained by inputting the feature point positions into the pre-trained classifier are likewise real-time; these actions are reflex actions of the facial features on the driver's face. Because the movement amplitude is real-time, the statistically obtained duration of each facial feature action is also real-time. When the movement amplitude of a feature exceeds a preset first threshold and the recorded duration of the action exceeds a preset time, the driver is determined to have entered a fatigue state; finally, when the total duration of the fatigue state within a preset time period exceeds a preset second threshold and the occurrence frequency of the fatigue state exceeds a preset third threshold, the driver is determined to be driving while fatigued. The method thus detects fatigue driving by judging whether the driver actually enters a fatigue state, rather than by judging whether the continuous driving time exceeds a prescribed fatigue driving limit.
Further, in an embodiment applied in the financial technology industry, extracting each facial feature point from the facial feature picture specifically comprises: analyzing the facial feature picture with a pre-trained face average model to obtain each facial feature point. FIG. 3 is a flowchart of training the face average model in an application scenario of the fatigue driving detection method of the embodiment corresponding to FIG. 2; the face average model is trained through the following steps:
S101: establishing a sample library, wherein the sample library contains a first number of face sample pictures, and marking a second number of facial feature points in each face sample picture to obtain marked face sample pictures;
S102: inputting each marked face sample picture as a sample into a pre-trained face feature recognition model for training, to obtain the face average model.
With regard to step S101, to obtain face sample pictures, a sample library holding a first number of face sample pictures is established first; a second number of facial feature points are then marked in each face sample picture, yielding the marked face sample pictures.
With regard to step S102, the marked face sample pictures are input as samples into a pre-trained face feature recognition model for training, which yields the face average model.
It should be noted that the face feature recognition model is obtained by training on face sample pictures as a sample set. The first number may be 100 and the second number may be 20; their concrete values can be set according to the actual application and are not limited here.
For example, if the first number is 200 and the second number is 76, each face sample picture is marked with 76 facial feature points, denoted P1-P76 in order, with coordinates (x1, y1), (x2, y2), ..., (x76, y76). The outer contour of the face carries 17 feature points (P1-P17, evenly distributed along the contour); the left and right eyebrows carry 5 feature points each (P18-P22 and P23-P27, evenly distributed along the upper edge of the eyebrow); the nose carries 9 feature points (P28-P36); the left and right eye sockets carry 6 feature points each (P37-P42 and P43-P48); the left and right eyeballs carry 4 feature points each (P49-P52 and P53-P56); and the lips carry 20 feature points (P57-P76), of which the upper and lower lips carry 8 each (P57-P64 and P65-P72) and the left and right lip corners carry 2 each (P73-P74 and P75-P76).
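For reference, the 76-point layout just described can be written down as an index table. The mapping below is a direct transcription of the preceding paragraph (indices are 1-based, matching P1-P76); only the Python representation is new.

```python
# 1-based landmark index ranges transcribed from the 76-point example.
LANDMARK_LAYOUT = {
    "face_outline":      range(1, 18),   # P1-P17
    "left_eyebrow":      range(18, 23),  # P18-P22
    "right_eyebrow":     range(23, 28),  # P23-P27
    "nose":              range(28, 37),  # P28-P36
    "left_eye_socket":   range(37, 43),  # P37-P42
    "right_eye_socket":  range(43, 49),  # P43-P48
    "left_eyeball":      range(49, 53),  # P49-P52
    "right_eyeball":     range(53, 57),  # P53-P56
    "upper_lip":         range(57, 65),  # P57-P64
    "lower_lip":         range(65, 73),  # P65-P72
    "left_lip_corner":   range(73, 75),  # P73-P74
    "right_lip_corner":  range(75, 77),  # P75-P76
}
# Sanity check: the regions partition all 76 landmarks.
assert sum(len(r) for r in LANDMARK_LAYOUT.values()) == 76
```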
In the embodiment corresponding to FIG. 3, through steps S101 and S102 above, the face feature recognition model is trained on face sample pictures as a sample set; that is, the model stores the matching between input face pictures and the output face average model. The face average model matched to the input face sample pictures can therefore be obtained through the face feature recognition model, which improves the accuracy of generating the face average model.
Further, in an embodiment applied in the financial technology industry, the face feature recognition model in the fatigue driving detection method is obtained by training with the following gradient boosting decision tree (GBDT) algorithm formula:
$$\hat{p}^{(m+1)} = \hat{p}^{(m)} + k_m(I, \hat{p}^{(m)})$$

wherein m denotes the cascade level, k_m denotes the regressor of the current level (each regressor consists of a number of regression trees), \hat{p}^{(m)} is the shape estimate of the current model, and each regressor predicts an increment k_m(I, \hat{p}^{(m)}) from the currently input picture I and the current shape estimate \hat{p}^{(m)}. During model training, following the order of the first number of marked face sample pictures, a first regression tree is trained with the second number of feature points of the first face sample picture and taken as the current regression tree; the next regression tree is then trained on the residual between the predicted values of the current regression tree and the true values of the second number of feature points, and becomes the new current regression tree. This action is repeated until the residual between the predicted values of the last regression tree and the true values of the second number of feature points approaches 0, which yields all regression trees of the gradient boosting decision tree algorithm; the face feature recognition model is then generated from these regression trees.
In this embodiment, through the iterative training process of the gradient boosting decision tree algorithm formula, the error between the predicted values and the true values of each regression tree approaches zero, that is, the predicted values converge to the true values, so an accurate face feature recognition model can be generated from the regression trees, improving the accuracy of the face feature recognition model.
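To make the cascade concrete, the following sketch fits each level of regressors to the residual of the previous level, in the spirit of \hat{p}^{(m+1)} = \hat{p}^{(m)} + k_m(I, \hat{p}^{(m)}). Using scikit-learn's DecisionTreeRegressor as the per-level regressor and precomputed per-picture feature vectors are simplifying assumptions; a real face-alignment cascade typically indexes pixel features relative to the current shape estimate.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_cascade(images, true_shapes, mean_shape, n_levels=10):
    """Fit a cascade of regressors: p_hat(m+1) = p_hat(m) + k_m(I, p_hat(m)).

    images:       (N, D) feature vectors, one per picture (assumed precomputed)
    true_shapes:  (N, 2K) ground-truth landmark coordinates
    mean_shape:   (2K,) initial shape estimate shared by every sample
    """
    p_hat = np.tile(mean_shape, (len(images), 1))   # current shape estimates
    cascade = []
    for m in range(n_levels):
        residual = true_shapes - p_hat              # what k_m must predict
        k_m = DecisionTreeRegressor(max_depth=5)
        # Regressor input: image features plus the current shape estimate.
        k_m.fit(np.hstack([images, p_hat]), residual)
        p_hat = p_hat + k_m.predict(np.hstack([images, p_hat]))
        cascade.append(k_m)
        if np.abs(true_shapes - p_hat).mean() < 1e-3:   # residual near 0
            break
    return cascade
```

Each level here is a single regression tree for brevity; in the formula, k_m may itself be an ensemble of trees fitted to the same residual.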
Further, in an embodiment applied in the financial technology industry, FIG. 4 is a flowchart of acquiring the facial feature picture in an application scenario of the fatigue driving detection method of the embodiment corresponding to FIG. 2 or FIG. 3; the facial feature picture is acquired through the following steps:
s50: acquiring an original face picture;
s60: reducing the original face picture to a preset size to obtain a first face picture;
s70: and carrying out feature recognition on the first face picture by adopting a pre-trained convolutional neural network model to obtain a face five-sense organ picture.
With regard to step S50, the original face picture of the insured driver during driving can be collected in real time by the collection device; the original face picture is a picture of the driver's whole face during driving. The collection device has already been described in step S10 and is not repeated here.
Further, once the original face picture of the driver is collected, it can be transmitted in real time to the cloud server for analysis over a wireless or wired network; the wireless network may be in-vehicle wifi or the mobile data network of a SIM card, and the wired network may be an optical cable or a network cable.
With regard to step S60, to conveniently input the face picture into the pre-trained convolutional neural network model, the original face picture is reduced to a preset size to obtain the first face picture. The preset size may be 10, and its concrete value can be set according to the actual application, which is not limited here.
With regard to step S70, to obtain the facial feature picture, feature recognition is performed on the first face picture with a pre-trained convolutional neural network model. The convolutional neural network model is obtained by training on first face pictures as a sample set.
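Steps S60 and S70 amount to a resize followed by a forward pass through the pre-trained network; the sketch below shows that pipeline. OpenCV for the resize, PyTorch for the model, the 128 x 128 target size, and the hypothetical pre-trained `model` argument are all assumptions for illustration; the text itself only says "a preset size".

```python
import cv2
import torch

PRESET_SIZE = (128, 128)  # assumed value; the text only says "a preset size"

def extract_facial_feature_picture(original_bgr, model: torch.nn.Module):
    """S60 + S70: shrink the original face picture, then run the
    pre-trained convolutional neural network on it. `model` is a
    hypothetical pre-trained network supplied by the caller."""
    first_face = cv2.resize(original_bgr, PRESET_SIZE)        # S60
    tensor = torch.from_numpy(first_face).permute(2, 0, 1)    # HWC -> CHW
    tensor = tensor.float().unsqueeze(0) / 255.0              # add batch dim
    with torch.no_grad():
        return model(tensor)                                  # S70
```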
In the embodiment corresponding to FIG. 4, through steps S50 to S70 above, the convolutional neural network model is trained on first face pictures as a sample set; that is, the model stores the matching between the input first face picture and the output facial feature picture. The facial feature picture matched to an input first face picture can therefore be obtained through the convolutional neural network model, which improves the accuracy of obtaining the facial feature picture.
Further, in an embodiment applied in the financial technology industry, the first face picture is divided into a first region and a second region that do not intersect each other. FIG. 5 is a flowchart of step S70 in an application scenario of the fatigue driving detection method of the embodiment corresponding to FIG. 4; step S70 specifically includes:
S701: segmenting the first region in the first face picture to obtain a third region and a fourth region of a second face picture, and segmenting the second region in the first face picture to obtain a fifth region and a sixth region of the second face picture;
S702: generating the facial feature picture from the third region, the fourth region, the fifth region, and the sixth region in the second face picture.
With regard to step S701, the first region of the first face picture is segmented to obtain the third and fourth regions of the second face picture, and the second region is segmented to obtain the fifth and sixth regions. One scheme is therefore: input the first face picture into a first full convolution network to obtain the second face picture.
The third region and the fourth region do not intersect each other, and neither do the fifth region and the sixth region. The first region corresponds to a first type of organ and the second region to a second type; the ratio of the area of the first region to the total area of the first face picture is greater than or equal to a preset fifth threshold, while that of the second region is smaller than the preset fifth threshold. The first region thus occupies a larger share of the face and includes the hair region or the face region; the second region occupies a smaller share and includes the left eye and left eyebrow region, the right eye and right eyebrow region, the nose region, or the mouth region. The first full convolution network comprises, in order, a first input layer, a first combination layer, a first deconvolution layer, and a first output layer, where the first combination layer consists of a plurality of first convolutional layers with a first pooling layer sandwiched between them.
Another scheme is: enlarge the third region and the fourth region separately, then precisely segment and locate the enlarged regions to obtain the fifth region and the sixth region.
Specifically, this scheme inputs the enlarged third and fourth regions into a second full convolution network to obtain the fifth and sixth regions. That is, the pictures of the organs contained in the fifth and sixth regions are extracted, the picture of each organ is enlarged, and the enlarged picture of each organ is then precisely segmented and located to obtain the corresponding second face picture; in other words, segmentation and localization of the picture are realized through a full convolution network.
It should be noted that the second full convolution network comprises, in order, a second input layer, a second combination layer, a second deconvolution layer, and a second output layer. The second combination layer consists of a plurality of second convolutional layers and a second pooling layer sandwiched between them; optionally, the number of first convolutional layers is greater than the number of second convolutional layers.
With regard to step S702, a third face picture is generated from the third, fourth, fifth, and sixth regions of the second face picture, and the third face picture is taken as the facial feature picture.
To better explain steps S701 and S702, an example follows:
For example: a face picture of size 128 x 128 is input, and the first full convolution network divides it into the following classes: background, hair, face, left eye and left eyebrow, right eye and right eyebrow, nose, and mouth. The background, hair, and face classes are precisely segmented and located, because these three account for the largest share of the whole face area; the facial features are only coarsely located, that is, the network is responsible only for a rough estimate of their positions and not for precise segmentation. The second full convolution network then processes the left eye region, the right eye region, the nose, and the mouth separately: the region around each position obtained from the first full convolution network is cropped, and a more accurate segmentation result is produced. The eye model outputs two classes, eyebrow and eye, with the left and right eyes sharing the same model; the nose model outputs one nose class; and the mouth model outputs three classes, upper lip, lower lip, and the in-between mouth region. For example, the eye region may be 64 x 64, the nose region 64 x 64, and the mouth region 32 x 64.
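The coarse-to-fine pipeline of this example can be summarized as two small fully convolutional networks: a first network that segments the large classes on the 128 x 128 input and roughly locates the facial features, and a second, shallower network applied to the enlarged feature crops. The sketch below follows only the layer ordering stated above (input, convolutional layers with a pooling layer sandwiched between them, deconvolution, output) and the rule that the first network has more convolutional layers than the second; the channel widths, kernel sizes, and layer counts are illustrative assumptions.

```python
import torch.nn as nn

def full_conv_net(n_conv: int, n_classes: int) -> nn.Sequential:
    """Input -> (conv layers with a pooling layer sandwiched between them)
    -> deconvolution -> per-pixel class output, matching the layer ordering
    described for both full convolution networks. Widths are assumptions."""
    layers, ch = [], 3
    for i in range(n_conv):
        layers += [nn.Conv2d(ch, 32, kernel_size=3, padding=1), nn.ReLU()]
        ch = 32
        if i == n_conv // 2 - 1:          # pooling sandwiched between convs
            layers.append(nn.MaxPool2d(2))
    layers += [nn.ConvTranspose2d(32, 32, kernel_size=2, stride=2), nn.ReLU()]
    layers.append(nn.Conv2d(32, n_classes, kernel_size=1))  # per-pixel classes
    return nn.Sequential(*layers)

# First network: 7 coarse classes on the 128 x 128 face picture
# (background, hair, face, left eye+eyebrow, right eye+eyebrow, nose, mouth).
first_net = full_conv_net(n_conv=6, n_classes=7)
# Second network: fewer conv layers, run on enlarged feature crops,
# e.g. the mouth model outputs upper lip / lower lip / in-mouth region.
mouth_net = full_conv_net(n_conv=4, n_classes=3)
```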
In the embodiment corresponding to FIG. 5, steps S701 and S702 achieve precise segmentation and localization of the first region, which occupies the larger share of the face, and of the second region, which occupies the smaller share. In the prior art, face segmentation requires more network layers and longer computation time; the present scheme addresses those problems of low accuracy and long computation time by reducing the complexity of the sub-networks, cutting the computation time needed for face segmentation, and improving the accuracy and efficiency of facial feature recognition.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In an embodiment, a fatigue driving detection device is provided, corresponding one-to-one to the fatigue driving detection method of the above embodiments. As shown in FIG. 6, the fatigue driving detection device includes an extraction module 71, an input module 72, a judging module 73, and a determining module 74. The functional modules are described in detail as follows:
the extraction module 71 is configured to collect, in real time, a facial feature picture of the driver during driving and to extract each facial feature point from the facial feature picture, wherein the facial feature points are basic feature points on the outer contours of the facial features;
the input module 72 is configured to input the positions of the facial feature points on the face into a pre-trained classifier to obtain the facial feature actions of the driver, wherein a facial feature action is a reflex action produced by a facial feature on the driver's face and includes the movement amplitude of that feature;
the judging module 73 is configured to determine that the driver enters a fatigue state when the movement amplitude of a feature in the facial feature actions exceeds a preset first threshold and the recorded duration of the action exceeds a preset time;
the determining module 74 is configured to determine that the driver is driving while fatigued when, within a preset time period, the total duration of the driver's fatigue state exceeds a preset second threshold and the occurrence frequency of the fatigue state exceeds a preset third threshold.
Further, the extraction module 71 is configured to analyze the facial feature picture with a pre-trained face average model to obtain each facial feature point; the face average model is trained by the following units:
an establishing unit, configured to establish a sample library holding a first number of face sample pictures, and to mark a second number of facial feature points in each face sample picture to obtain marked face sample pictures;
an input unit, configured to input the marked face sample pictures as samples into a pre-trained face feature recognition model for training, to obtain the face average model.
Further, the face feature recognition model is obtained through a training unit, which trains the model with the following gradient boosting decision tree algorithm formula:

$$\hat{p}^{(m+1)} = \hat{p}^{(m)} + k_m(I, \hat{p}^{(m)})$$

wherein m denotes the cascade level, k_m denotes the regressor of the current level (each regressor consists of a number of regression trees), \hat{p}^{(m)} is the shape estimate of the current model, and each regressor predicts an increment k_m(I, \hat{p}^{(m)}) from the currently input picture I and the current shape estimate \hat{p}^{(m)}. During model training, following the order of the first number of marked face sample pictures, a first regression tree is trained with the second number of feature points of the first face sample picture and taken as the current regression tree; the next regression tree is then trained on the residual between the predicted values of the current regression tree and the true values of the second number of feature points, and becomes the new current regression tree. This action is repeated until the residual between the predicted values of the last regression tree and the true values of the second number of feature points approaches 0, which yields all regression trees of the gradient boosting decision tree algorithm; the face feature recognition model is then generated from these regression trees.
Further, the facial feature picture is obtained by the following units:
an acquisition unit, configured to acquire an original face picture, wherein the original face picture is a picture of the driver's whole face during driving;
a reduction unit, configured to reduce the original face picture to a preset size to obtain a first face picture;
a recognition unit, configured to perform feature recognition on the first face picture with a pre-trained convolutional neural network model to obtain the facial feature picture.
Further, the first face picture is divided into a first region and a second region that do not intersect each other, and the recognition unit includes:
a segmentation unit, configured to segment the first region in the first face picture to obtain a third region and a fourth region of a second face picture, and to segment the second region in the first face picture to obtain a fifth region and a sixth region of the second face picture, wherein the third region and the fourth region do not intersect each other, and neither do the fifth region and the sixth region;
a generating unit, configured to generate the facial feature picture from the third region, the fourth region, the fifth region, and the sixth region in the second face picture.
For the specific limitations of the fatigue driving detection device, reference may be made to the limitations of the fatigue driving detection method above, which are not repeated here. Each module in the fatigue driving detection device may be implemented in whole or in part by software, hardware, or a combination of the two. The modules may be embedded in hardware in, or independent of, the processor of the computer device, or stored in software in the memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data related to the fatigue driving detection method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a method for detecting fatigue driving.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the steps of the fatigue driving detection method of the above embodiment, such as steps S10 to S40 shown in fig. 2. Alternatively, the processor may implement the functions of the respective modules/units of the fatigue driving detection apparatus in the above embodiment when executing the computer program, such as the functions of the extraction module 71 to the determination module 74 shown in fig. 6. In order to avoid repetition, a description thereof is omitted.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, where the computer program is executed by a processor to implement the method for detecting fatigue driving in the embodiment of the method, or where the computer program is executed by the processor to implement the functions of each module/unit in the device for detecting fatigue driving in the embodiment of the device. In order to avoid repetition, a description thereof is omitted. Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (8)

1. A fatigue driving detection method, characterized by comprising the following steps:
collecting, in real time, a facial feature picture of a driver during driving, and extracting each facial feature point from the facial feature picture, wherein the facial feature points are basic feature points on the outer contours of the facial features;
inputting the positions of the facial feature points on the face into a pre-trained classifier to obtain the facial feature actions of the driver, wherein a facial feature action is a reflex action produced by a facial feature on the driver's face and includes the movement amplitude of that feature; the movement amplitude is the displacement produced as the driver's feature passes from its normal state to its current state; the classifier is obtained by extracting preset facial feature points from preset facial feature pictures and training on the positions of the extracted feature points on the face as a training set, and the classifier stores the pairing between the positions of input facial feature points on the face and the output facial feature actions of the driver;
determining that the driver enters a fatigue state when the movement amplitude of a feature in the facial feature actions exceeds a preset first threshold and the recorded duration of the action exceeds a preset time;
determining that the driver is driving while fatigued if, within a preset time period, the total duration of the driver's fatigue state exceeds a preset second threshold and the occurrence frequency of the fatigue state exceeds a preset third threshold;
wherein the facial feature picture is obtained through the following steps:
acquiring an original face picture, wherein the original face picture is a picture of the driver's whole face during driving;
reducing the original face picture to a preset size to obtain a first face picture;
and performing feature recognition on the first face picture with a pre-trained convolutional neural network model to obtain the facial feature picture.
2. The fatigue driving detection method according to claim 1, characterized in that extracting each facial feature point from the facial feature picture specifically comprises: analyzing the facial feature picture with a pre-trained face average model to obtain each facial feature point; the face average model is trained through the following steps:
establishing a sample library, wherein the sample library contains a first number of face sample pictures, and marking a second number of facial feature points in each face sample picture to obtain marked face sample pictures;
and inputting each marked face sample picture as a sample into a pre-trained face feature recognition model for training, to obtain the face average model.
3. The fatigue driving detection method according to claim 2, characterized in that the face feature recognition model is obtained by training with the following gradient boosting decision tree algorithm formula:

$$\hat{p}^{(m+1)} = \hat{p}^{(m)} + k_m(I, \hat{p}^{(m)})$$

wherein m denotes the cascade level, k_m denotes the regressor of the current level (each regressor consists of a number of regression trees), \hat{p}^{(m)} is the shape estimate of the current model, and each regressor predicts an increment k_m(I, \hat{p}^{(m)}) from the currently input picture I and the current shape estimate \hat{p}^{(m)}; during model training, following the order of the first number of marked face sample pictures, a first regression tree is trained with the second number of feature points of the first face sample picture and determined as the current regression tree, the next regression tree is trained on the residual between the predicted values of the current regression tree and the true values of the second number of feature points and determined as the new current regression tree, and this step is repeated until the residual between the predicted values of the last regression tree and the true values of the second number of feature points approaches 0, yielding each regression tree of the gradient boosting decision tree algorithm, and the face feature recognition model is generated from the regression trees.
4. The fatigue driving detection method according to claim 1, characterized in that the first face picture is divided into a first region and a second region that do not intersect each other, and performing feature recognition on the first face picture with the pre-trained convolutional neural network model to obtain the facial feature picture comprises:
segmenting the first region in the first face picture to obtain a third region and a fourth region of a second face picture, and segmenting the second region in the first face picture to obtain a fifth region and a sixth region of the second face picture, wherein the third region and the fourth region do not intersect each other, and neither do the fifth region and the sixth region;
and generating the facial feature picture from the third region, the fourth region, the fifth region, and the sixth region in the second face picture.
5. The utility model provides a driver fatigue detection device which characterized in that, driver fatigue detection device includes:
the extraction module is used for acquiring the facial feature pictures obtained by the driver during driving in real time and extracting each facial feature point on the facial feature pictures;
The input module is used for inputting the positions of the facial feature points on the face of the driver into a pre-trained classifier to obtain facial features of the driver, wherein the facial features are generated by the facial features of the driver according to reflection, and the facial features comprise the movement range of the facial features; the movement amplitude of the five sense organs is the displacement generated when the driver changes from the normal state of the five sense organs to the current state of the five sense organs; the classifier is obtained by extracting preset facial feature points from preset facial feature pictures, training the positions of the extracted facial feature points on the face as a training set, and the classifier stores the pairing relation between the positions of the input facial feature points on the face and facial feature actions of a driver as output;
the judging module is used for determining that the driver enters a fatigue state when the movement amplitude of the facial features in the facial features actions exceeds a preset first threshold value and the recorded duration time of the facial features actions is preset;
the determining module is used for determining that the driver is in fatigue driving when the total duration of the fatigue state of the driver exceeds a preset second threshold value and the occurrence frequency of the fatigue state exceeds a preset third threshold value within a preset time period;
wherein the facial-features picture is obtained by the following units:
an acquisition unit, configured to acquire an original face picture, the original face picture being a picture of the driver's entire face while driving;
a contraction unit, configured to shrink the original face picture to a preset size to obtain a first face picture;
and an identification unit, configured to perform feature recognition on the first face picture with a pre-trained convolutional neural network model to obtain the facial-features picture.
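For concreteness, here is a minimal sketch of how the input, judging, and determining modules of claim 5 fit together. The classifier interface and every threshold value are assumptions; the patent leaves them as presets.

```python
# Hedged sketch of the module flow in claim 5. `classifier` stands in for the
# pre-trained classifier; all numeric presets below are illustrative only.
import numpy as np

AMP_THRESHOLD = 0.3        # preset first threshold (movement amplitude)
PRESET_DURATION_S = 1.5    # preset recorded duration of a facial feature action
TOTAL_THRESHOLD_S = 30.0   # preset second threshold (total fatigue-state time)
COUNT_THRESHOLD = 5        # preset third threshold (fatigue-state occurrences)
WINDOW_S = 600.0           # preset time period for the determining module

def facial_feature_action(classifier, landmarks: np.ndarray):
    # Input module: positions of the facial feature points on the face go into
    # the pre-trained classifier, which returns the paired facial feature action.
    return classifier.predict(landmarks.reshape(1, -1))[0]

def judge_fatigue_state(amplitude: float, duration_s: float) -> bool:
    # Judging module: fatigue state when the movement amplitude exceeds the
    # first threshold and the action has lasted the preset recorded duration.
    return amplitude > AMP_THRESHOLD and duration_s >= PRESET_DURATION_S

def determine_fatigue_driving(events, now_s: float) -> bool:
    # Determining module: `events` is a list of (start_time_s, duration_s)
    # fatigue-state records; within the preset time period both the total
    # duration and the occurrence count must exceed their thresholds.
    recent = [d for t, d in events if now_s - t <= WINDOW_S]
    return sum(recent) > TOTAL_THRESHOLD_S and len(recent) > COUNT_THRESHOLD
```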
6. The fatigue driving detection apparatus according to claim 5, wherein the extraction of each facial feature point from the facial-features picture is performed specifically by: an analysis unit, configured to analyze the facial-features picture with a pre-trained face average model to obtain each facial feature point; wherein the face average model is trained by the following units:
a building unit, configured to build a sample library containing a first number of face sample pictures, and to mark a second number of facial feature points in each face sample picture to obtain the marked face sample pictures;
and an input unit, configured to input the marked face sample pictures as samples into a pre-trained facial feature recognition model for training, to obtain the face average model.
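Claims 3 and 6 together describe an ensemble-of-regression-trees landmark model trained from a library of marked face sample pictures; dlib's shape-predictor trainer implements this family of models, so it gives one plausible realization of the building and input units. The file names and option values below are assumptions.

```python
# One way the face average model of claim 6 could be trained in practice,
# using dlib's ERT shape predictor. "labeled_faces.xml" (a listing of the
# sample library with marked feature points) and all option values are assumed.
import dlib

options = dlib.shape_predictor_training_options()
options.cascade_depth = 10   # number of cascaded regressor levels (m)
options.tree_depth = 4       # depth of each regression tree
options.nu = 0.1             # learning rate of each boosting step

# Building + input units: the XML lists the first number of face sample
# pictures, each annotated with the second number of facial feature points.
dlib.train_shape_predictor("labeled_faces.xml", "face_average_model.dat", options)

# The trained model then parses a facial-features picture into feature points.
predictor = dlib.shape_predictor("face_average_model.dat")
```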
7. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the fatigue driving detection method according to any one of claims 1 to 4 when executing the computer program.
8. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the fatigue driving detection method according to any one of claims 1 to 4.
CN201810871611.1A 2018-08-02 2018-08-02 Fatigue driving detection method, device, computer equipment and storage medium Active CN109241842B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810871611.1A CN109241842B (en) 2018-08-02 2018-08-02 Fatigue driving detection method, device, computer equipment and storage medium
PCT/CN2018/106394 WO2020024395A1 (en) 2018-08-02 2018-09-19 Fatigue driving detection method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810871611.1A CN109241842B (en) 2018-08-02 2018-08-02 Fatigue driving detection method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109241842A CN109241842A (en) 2019-01-18
CN109241842B true CN109241842B (en) 2024-03-05

Family

ID=65072810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810871611.1A Active CN109241842B (en) 2018-08-02 2018-08-02 Fatigue driving detection method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN109241842B (en)
WO (1) WO2020024395A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119714B (en) * 2019-05-14 2022-02-25 山东浪潮科学研究院有限公司 Driver fatigue detection method and device based on convolutional neural network
WO2021014637A1 (en) * 2019-07-25 2021-01-28 三菱電機株式会社 Driving support device, driving support method, driving support program, and driving support system
CN111444657B (en) * 2020-03-10 2023-05-02 五邑大学 Method and device for constructing fatigue driving prediction model and storage medium
CN111860098A (en) * 2020-04-21 2020-10-30 北京嘀嘀无限科技发展有限公司 Fatigue driving detection method and device, electronic equipment and medium
CN111814636A (en) * 2020-06-29 2020-10-23 北京百度网讯科技有限公司 Safety belt detection method and device, electronic equipment and storage medium
CN111881783A (en) * 2020-07-10 2020-11-03 北京嘉楠捷思信息技术有限公司 Fatigue detection method and device
CN112183220B (en) * 2020-09-04 2024-05-24 广州汽车集团股份有限公司 Driver fatigue detection method and system and computer storage medium thereof
CN112070051B (en) * 2020-09-16 2022-09-20 华东交通大学 Pruning compression-based fatigue driving rapid detection method
CN112989978A (en) * 2021-03-04 2021-06-18 扬州微地图地理信息科技有限公司 Driving assistance recognition method based on high-precision map
CN113205081B (en) * 2021-06-11 2024-01-05 北京惠朗时代科技有限公司 SVM model worker fatigue accurate judging method based on significance detection
CN113657212A (en) * 2021-07-30 2021-11-16 浙江大华技术股份有限公司 Fatigue detection method and related device
CN114120395B (en) * 2021-11-02 2024-09-13 深圳技术大学 Driving behavior monitoring method and device and computer readable storage medium
CN114898339B (en) * 2022-05-20 2024-06-07 一汽解放汽车有限公司 Training method, device, equipment and storage medium of driving behavior prediction model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485191B (en) * 2015-09-02 2018-12-11 腾讯科技(深圳)有限公司 A kind of method for detecting fatigue state of driver and system
CN106781286A (en) * 2017-02-10 2017-05-31 开易(深圳)科技有限公司 A kind of method for detecting fatigue driving and system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011050734A1 (en) * 2009-10-30 2011-05-05 深圳市汉华安道科技有限责任公司 Method, device and car for fatigue driving detection
CN102254151A (en) * 2011-06-16 2011-11-23 清华大学 Driver fatigue detection method based on face video analysis
CN103479367A (en) * 2013-09-09 2014-01-01 广东工业大学 Driver fatigue detection method based on facial action unit recognition
CN103714660A (en) * 2013-12-26 2014-04-09 苏州清研微视电子科技有限公司 System for achieving fatigue driving judgment on basis of image processing and fusion between heart rate characteristic and expression characteristic
CN105740847A (en) * 2016-03-02 2016-07-06 同济大学 Fatigue grade discrimination algorithm based on driver eye portion identification and vehicle driving track
CN106781282A (en) * 2016-12-29 2017-05-31 天津中科智能识别产业技术研究院有限公司 A kind of intelligent travelling crane driver fatigue early warning system
CN106909879A (en) * 2017-01-11 2017-06-30 开易(北京)科技有限公司 A kind of method for detecting fatigue driving and system
CN107229922A (en) * 2017-06-12 2017-10-03 西南科技大学 A kind of fatigue driving monitoring method and device
CN107704805A (en) * 2017-09-01 2018-02-16 深圳市爱培科技术股份有限公司 method for detecting fatigue driving, drive recorder and storage device
CN107992831A (en) * 2017-12-07 2018-05-04 深圳云天励飞技术有限公司 Fatigue state detection method, device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Driver fatigue detection method based on the KLT algorithm; Ji Lingling et al.; Computer Engineering and Design; 2010-01-28; Vol. 31, No. 02; pp. 436-442 *
Design of a real-time control system for driver fatigue detection; Tang Xinxing et al.; Manufacturing Automation; 2016-11-25; Vol. 38, No. 11; pp. 44-51 *

Also Published As

Publication number Publication date
CN109241842A (en) 2019-01-18
WO2020024395A1 (en) 2020-02-06

Similar Documents

Publication Publication Date Title
CN109241842B (en) Fatigue driving detection method, device, computer equipment and storage medium
CN109344682B (en) Classroom monitoring method, classroom monitoring device, computer equipment and storage medium
CN111667011B (en) Damage detection model training and vehicle damage detection method, device, equipment and medium
CN112232293B (en) Image processing model training method, image processing method and related equipment
CN109472213B (en) Palm print recognition method and device, computer equipment and storage medium
CN113239874B (en) Behavior gesture detection method, device, equipment and medium based on video image
CN111754396B (en) Face image processing method, device, computer equipment and storage medium
CN110490902B (en) Target tracking method and device applied to smart city and computer equipment
CN110827236B (en) Brain tissue layering method, device and computer equipment based on neural network
CN111488855A (en) Fatigue driving detection method, device, computer equipment and storage medium
CN111160275B (en) Pedestrian re-recognition model training method, device, computer equipment and storage medium
CN110163151B (en) Training method and device of face model, computer equipment and storage medium
CN107832721B (en) Method and apparatus for outputting information
CN112818821B (en) Human face acquisition source detection method and device based on visible light and infrared light
WO2021169642A1 (en) Video-based eyeball turning determination method and system
CN113706481A (en) Sperm quality detection method, sperm quality detection device, computer equipment and storage medium
CN113034514A (en) Sky region segmentation method and device, computer equipment and storage medium
Zhao et al. Research on fatigue detection based on visual features
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN116863522A (en) Acne grading method, device, equipment and medium
WO2023279799A1 (en) Object identification method and apparatus, and electronic system
CN116091596A (en) Multi-person 2D human body posture estimation method and device from bottom to top
CN113506274B (en) Detection system for human cognitive condition based on visual saliency difference map
CN115497092A (en) Image processing method, device and equipment
Fa et al. Multi-scale spatial–temporal attention graph convolutional networks for driver fatigue detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant