CN114758403B - Intelligent analysis method and device for fatigue driving - Google Patents

Intelligent analysis method and device for fatigue driving Download PDF

Info

Publication number
CN114758403B
CN114758403B CN202210651936.5A CN202210651936A CN114758403B CN 114758403 B CN114758403 B CN 114758403B CN 202210651936 A CN202210651936 A CN 202210651936A CN 114758403 B CN114758403 B CN 114758403B
Authority
CN
China
Prior art keywords
face
picture
driving
matrix
fatigue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210651936.5A
Other languages
Chinese (zh)
Other versions
CN114758403A (en
Inventor
熊滔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Jingran Intelligent Technology Co ltd
Original Assignee
Wuhan Jingran Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Jingran Intelligent Technology Co ltd filed Critical Wuhan Jingran Intelligent Technology Co ltd
Priority to CN202210651936.5A priority Critical patent/CN114758403B/en
Publication of CN114758403A publication Critical patent/CN114758403A/en
Application granted granted Critical
Publication of CN114758403B publication Critical patent/CN114758403B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0818Inactivity or incapacity of driver
    • B60W2040/0827Inactivity or incapacity of driver due to sleepiness
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/229Attention level, e.g. attentive to driving, reading or sleeping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of intelligent decision making, in particular to an intelligent analysis method and device for fatigue driving, which comprises the following steps: receiving a driving starting instruction to start monitoring equipment pre-installed in a cab; capturing the driving state of a driver in real time by utilizing monitoring equipment to obtain a driving picture; the method comprises the steps of picking an original face picture of a driver from a driving picture and splitting the original face picture into a plurality of face pixel blocks; projecting the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks; performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram; calculating a space co-occurrence matrix of the face quantization histogram and then converting the space co-occurrence matrix into a Markov matrix, and optimizing a face pixel block through the Markov matrix to obtain an optimized face picture; and inputting the optimized face picture into a fatigue driving intelligent diagnosis model which is trained in advance to obtain the driving fatigue grade of the driver. The invention can realize the intelligent analysis of the driving state of the driver and improve the accuracy of the fatigue state analysis of the driver.

Description

Intelligent analysis method and device for fatigue driving
Technical Field
The invention relates to the field of intelligent decision making, in particular to an intelligent analysis method and device for fatigue driving.
Background
The driving fatigue means that a driver generates imbalance between physiological functions and psychological functions after driving for a long time, and the driving skill is objectively reduced, so how to accurately judge the driving fatigue state of the driver is important to ensure the driving safety of the driver.
At present, a driver is usually reminded based on the set driving time in the road driving process, but the driving state of the driver cannot be accurately captured by setting the driving time, so that the driving fatigue information of the driver cannot be accurately analyzed.
Disclosure of Invention
The invention provides an intelligent analysis method and device for fatigue driving, and mainly aims to realize intelligent analysis of driving states of drivers and improve the accuracy of fatigue state analysis of the drivers.
In order to achieve the above object, the present invention provides an intelligent analysis method for fatigue driving, comprising:
receiving a driving starting instruction, and starting monitoring equipment which is pre-installed in a cab according to the driving starting instruction;
capturing the driving state of a driver in real time by using the monitoring equipment to obtain a driving picture;
picking an original face picture of a driver from the driving picture, and splitting the original face picture into a plurality of face pixel blocks;
projecting the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks;
performing histogram mapping on the face vector blocks to obtain a face quantization histogram;
and calculating a spatial co-occurrence matrix of the face quantization histogram, wherein the calculation method comprises the following steps:
Figure 639299DEST_PATH_IMAGE001
Figure 375174DEST_PATH_IMAGE002
wherein C represents the spatial co-occurrence matrix, K represents a matrix dimension of the spatial co-occurrence matrix,
Figure 42916DEST_PATH_IMAGE003
each matrix element representing the spatial co-occurrence matrix,
Figure 649478DEST_PATH_IMAGE004
and
Figure 111683DEST_PATH_IMAGE005
represents the s and m pixels of the face picture,
Figure 302231DEST_PATH_IMAGE006
and
Figure 242505DEST_PATH_IMAGE007
the ith and jth groups of the face quantization histogram are represented, and d represents each matrix element of the spatial co-occurrence matrix
Figure 234732DEST_PATH_IMAGE004
And
Figure 336680DEST_PATH_IMAGE005
a chebyshev distance in the coordinate system;
converting the space co-occurrence matrix into a Markov matrix, and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture;
and inputting the optimized face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
Optionally, the splitting the original face picture into a plurality of face pixel blocks includes:
performing low-pass filtering pretreatment on the original face picture by using a moving average filter to obtain a primary face picture;
based on a pre-constructed sliding window, performing pixel splitting on the primary face picture to obtain a plurality of face pixel blocks, wherein the number of the face pixel blocks is as follows:
Figure 47147DEST_PATH_IMAGE008
wherein the content of the first and second substances,
Figure 555226DEST_PATH_IMAGE009
the number of the face pixel blocks is the number of the face pixel blocks,
Figure 870801DEST_PATH_IMAGE010
and the picture specification of the primary face picture is obtained.
Optionally, the performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram includes:
receiving a pre-constructed vector square block set, wherein the vector square block set comprises 120 vector square blocks, and each vector square block consists of a single-dimensional vector with the column length of 16;
converting each face vector block into a single-dimensional vector with the length of 16 according to an end-to-end connection mode;
sequentially calculating the Manhattan distance between each 1-dimensional vector and each vector square in the vector square set, and selecting the vector square block with the minimum Manhattan distance to determine the vector square block as a face quantization square;
and grouping the face quantization histogram blocks corresponding to each single-dimensional vector to construct and obtain the face quantization histogram.
Optionally, the transforming the spatial co-occurrence matrix into a markov matrix includes:
converting the spatial co-occurrence matrix into a Markov matrix by adopting the following calculation method:
Figure 940388DEST_PATH_IMAGE011
wherein, the first and the second end of the pipe are connected with each other,
Figure 341414DEST_PATH_IMAGE012
a value representing the u row and v column of the Markov matrix,
Figure 154649DEST_PATH_IMAGE003
each matrix element representing the spatial co-occurrence matrix, K represents a matrix dimension of the spatial co-occurrence matrix.
Optionally, the optimizing the face pixel block through the markov matrix to obtain an optimized face picture includes:
calculating to obtain a divergence matrix according to each face pixel block;
calculating an optimized pixel value set of the Markov matrix;
and sequentially adding the optimized pixel value set to the divergence matrix according to the position corresponding relation to obtain the optimized face picture.
Optionally, the obtaining a divergence matrix by calculation according to each face pixel block includes:
and calculating to obtain the divergence matrix according to the following calculation formula:
Figure 855889DEST_PATH_IMAGE013
Figure 75473DEST_PATH_IMAGE014
wherein S represents the divergence matrix, N is the total number of the face pixel blocks,
Figure 494953DEST_PATH_IMAGE015
representing the ith personThe number of the face pixel blocks is set,
Figure 846300DEST_PATH_IMAGE016
the corresponding mean value of all human face pixel blocks.
Optionally, the calculating an optimized set of pixel values of the markov matrix comprises:
calculating to obtain an optimized pixel value set by adopting the following calculation formula
Figure 667625DEST_PATH_IMAGE017
Wherein the content of the first and second substances,
Figure 547856DEST_PATH_IMAGE018
a set of optimized pixel values is represented,
Figure 422009DEST_PATH_IMAGE019
a pixel probability distribution function representing a markov matrix,
Figure 577047DEST_PATH_IMAGE020
a matrix constant representing a markov matrix.
Optionally, the inputting the optimized human face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain a driving fatigue level of the driver includes:
performing feature extraction on the optimized picture by using the convolution layer in the fatigue driving intelligent diagnosis model which is trained in advance to obtain a feature picture;
performing bottom layer feature fusion on the feature picture and the optimized face picture by using a standard layer in the pre-trained intelligent fatigue driving diagnosis model to obtain a fusion picture;
pooling the fused picture by using a pooling layer in the pre-trained fatigue driving intelligent diagnosis model to obtain a pooled picture;
and calculating the driving fatigue category probability of the pooled picture by using a full connection layer in the pre-trained intelligent diagnosis model for fatigue driving, and outputting the driving fatigue grade of the driver by using an output layer in the pre-trained intelligent diagnosis model for fatigue driving according to the driving fatigue category probability.
Optionally, the performing bottom-layer feature fusion on the feature picture and the optimized face picture by using the standard layer in the pre-trained intelligent fatigue driving diagnosis model to obtain a fusion picture includes:
and performing bottom layer feature fusion on the feature picture and the optimized face picture by using the following formula to obtain a fusion picture:
Figure 252879DEST_PATH_IMAGE021
wherein the content of the first and second substances,
Figure 304012DEST_PATH_IMAGE022
representing a fusion picture, H representing the characteristics of the characteristic picture and the optimized face picture, X representing the characteristic picture and the optimized face picture,
Figure 698084DEST_PATH_IMAGE023
represents the fusion characteristic mean function of the characteristic picture and the optimized face picture,
Figure 391233DEST_PATH_IMAGE024
a fusion characteristic standard deviation function representing the characteristic picture and the optimized face picture,
Figure 154528DEST_PATH_IMAGE025
and expressing the normalization functions of the feature picture, the optimized human face picture semantic feature and the voiceprint feature.
In order to solve the above problems, the present invention further provides an apparatus for intelligently analyzing fatigue driving, the apparatus comprising:
the monitoring equipment starting module is used for receiving a driving starting instruction and starting monitoring equipment which is arranged in a cab in advance according to the driving starting instruction;
the driving picture capturing module is used for capturing the driving state of the driver in real time by utilizing the monitoring equipment to obtain a driving picture;
the face picture splitting module is used for picking an original face picture of a driver from the driving picture and splitting the original face picture into a plurality of face pixel blocks;
the human face pixel block projection module is used for projecting the human face pixel blocks into a pre-constructed coordinate system to obtain a plurality of human face vector blocks;
the face vector block mapping module is used for performing histogram mapping on the face vector blocks to obtain a face quantization histogram;
a co-occurrence matrix calculation module, configured to calculate a spatial co-occurrence matrix of the face quantization histogram, where the calculation method is as follows:
Figure 642141DEST_PATH_IMAGE026
Figure 523509DEST_PATH_IMAGE027
wherein C represents the spatial co-occurrence matrix, K represents a matrix dimension of the spatial co-occurrence matrix,
Figure 489191DEST_PATH_IMAGE028
each matrix element representing the spatial co-occurrence matrix,
Figure 139615DEST_PATH_IMAGE029
and
Figure 63709DEST_PATH_IMAGE030
represents the s and m pixels of the face picture,
Figure 134171DEST_PATH_IMAGE031
and
Figure 169123DEST_PATH_IMAGE032
the ith and jth groups of the face quantization histogram are represented, and d represents each matrix element of the spatial co-occurrence matrix
Figure 408475DEST_PATH_IMAGE029
And
Figure 972311DEST_PATH_IMAGE030
a chebyshev distance in the coordinate system;
the face picture optimization module is used for converting the space co-occurrence matrix into a Markov matrix and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture;
and the fatigue grade detection module is used for inputting the optimized face picture into a fatigue driving intelligent diagnosis model which is trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
In order to solve the problems in the background art, the embodiment of the invention receives a driving starting instruction to start monitoring equipment which is installed in a cab in advance; the driving state of the driver is captured in real time by utilizing the monitoring equipment to obtain a driving picture so as to shoot the driving state of the driver in real time and ensure the real-time analysis of the state of the subsequent driver; secondly, the original face picture of the driver is extracted from the driving picture and then split into a plurality of face pixel blocks, the fatigue condition of the driver can be diagnosed by analyzing the face state of the driver, the state analysis accuracy of a subsequent driver is guaranteed, the plurality of face pixel blocks are projected into a pre-constructed coordinate system to obtain a plurality of face vector blocks, histogram mapping is performed on the plurality of face vector blocks to obtain a face quantization histogram, a space co-occurrence matrix of the face quantization histogram is calculated and then converted into a Markov matrix, the face pixel blocks are optimized through the Markov matrix to obtain an optimized face picture, the finally obtained face picture can be guaranteed to be in the optimal state, and the fatigue state analysis accuracy of the subsequent driver is further guaranteed; furthermore, the optimized face picture is input to a fatigue driving intelligent diagnosis model which is trained in advance, so that the driving fatigue degree of the optimized face picture is intelligently detected, and the driving fatigue grade of the driver is obtained. Therefore, the intelligent analysis method and the intelligent analysis device for fatigue driving, provided by the invention, can realize the intelligent analysis of the driving state of the driver and improve the accuracy of the analysis of the fatigue state of the driver.
Drawings
Fig. 1 is a schematic flow chart of an intelligent fatigue driving analysis method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart showing a detailed implementation of one of the steps in FIG. 1;
FIG. 3 is a schematic diagram of an example of the encryption counter of FIG. 1;
FIG. 4 is a functional block diagram of an apparatus for intelligently analyzing fatigue driving according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device for implementing the fatigue driving intelligent analysis method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides an intelligent analysis method for fatigue driving. The executing subject of the fatigue driving intelligent analysis method includes, but is not limited to, at least one of electronic devices such as a server and a terminal, which can be configured to execute the method provided by the embodiments of the present application. In other words, the fatigue driving intelligent analysis method may be performed by software or hardware installed in a terminal device or a server device. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Example 1:
fig. 1 is a schematic flow chart of an intelligent fatigue driving analysis method according to an embodiment of the present invention. In this embodiment, the intelligent analysis method for fatigue driving includes:
and S1, receiving a driving starting instruction, and starting monitoring equipment pre-installed in a cab according to the driving starting instruction.
The embodiment of the invention mainly aims to monitor the fatigue state of the driver in real time so as to safely remind the driver of driving carefully. Therefore, a driving starting instruction is received firstly, wherein the driving starting instruction is generally connected with engines such as automobiles, and the driving starting instruction can be automatically triggered after the engines are started, so that monitoring equipment pre-installed in a cab is started according to the driving starting instruction.
Where the monitoring device is typically mounted in a fixed position within the cockpit of an automobile or the like, it is emphasized that the monitoring device may capture the face of the driver at the fixed position.
And S2, capturing the driving state of the driver in real time by using the monitoring equipment to obtain a driving picture.
It can be understood that after the monitoring device is started, the driving state of the driver can be shot in real time, so that a driving picture is obtained, wherein the driving picture comprises the face state of the driver.
S3, extracting an original face picture of the driver from the driving picture, and splitting the original face picture into a plurality of face pixel blocks.
In the embodiment of the present invention, the fatigue condition of the driver is mainly diagnosed by analyzing the facial state of the driver, and therefore, in detail, the extracting the original face picture of the driver from the driving picture includes:
inputting the driving picture into a face recognition model which is constructed in advance, wherein the face recognition model comprises a YOLO model;
and recognizing the face area of the driver according to the YOLO model to obtain the original face picture.
It should be explained that the YOLO model is an end-to-end target detection algorithm, has the advantage of high detection speed, and is applicable to an application scenario of detecting a human face in real time in the embodiment of the present invention.
Further, the splitting the original face picture into a plurality of face pixel blocks includes:
performing low-pass filtering pretreatment on the original face picture by using a moving average filter to obtain a primary face picture;
based on a pre-constructed sliding window, performing pixel splitting on the primary face picture to obtain a plurality of face pixel blocks, wherein the number of the face pixel blocks is as follows:
Figure 562692DEST_PATH_IMAGE008
wherein the content of the first and second substances,
Figure 401335DEST_PATH_IMAGE033
the number of the face pixel blocks is the number,
Figure 993729DEST_PATH_IMAGE034
and the picture specification of the primary face picture is obtained.
It should be explained that the moving average filter can effectively eliminate high-frequency noise in the original face image, and keep low-frequency pixel points more useful for face recognition.
Further, in the embodiment of the present invention, the specification of each pixel block is 4 × 4, that is, each pixel block has a specification of 4 × 4
Figure 259625DEST_PATH_IMAGE034
The specification of each pixel block of the primary face picture with the picture specification is 4 x 4.
And S4, projecting the face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks.
It should be understood that, in order to improve the recognition accuracy of fatigue driving, each pixel point of an original face picture needs to be optimized, so that each face pixel block needs to be projected into a pre-constructed coordinate system to obtain a plurality of face vector blocks, each vector block represents that the pixel block is mapped to the coordinate system and not only includes the value of the pixel point, but also includes a direction, and the direction is indicated by the connection of each pixel point and a dot.
And S5, performing histogram mapping on the face vector blocks to obtain a face quantization histogram.
In detail, referring to fig. 2, the performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram includes:
s21, receiving a vector square block set constructed in advance, wherein the vector square block set comprises 120 vector square blocks, and each vector square block consists of a single-dimensional vector with the column length of 16;
s22, converting each face vector block into a single-dimensional vector with the length of 16 according to an end-to-end connection mode;
s23, sequentially calculating the Manhattan distance between each 1-dimensional vector and each vector square in the vector square set, and selecting the vector histogram block with the minimum Manhattan distance to determine the vector histogram block as a face quantization square;
and S24, grouping the face quantization histogram blocks corresponding to each single-dimensional vector, and constructing to obtain the face quantization histogram.
It should be explained that the vector straight square set is a vector which is photographed and quantized in advance according to different driving environments of the driver. If the vector straight square A is obtained by vectorization in a dark driving environment of a driver, the vector straight square B is obtained by vectorization in a light-rich driving environment, and the vector straight square C is obtained by vectorization in a late-night driving environment.
Furthermore, each vector histogram block is composed of 16 groups of digital single-dimensional vectors, so that for calculating the manhattan distance between the face vector block and each vector square block, the dimensions of the vector square block and the face vector block need to be the same, and since each face vector block is projected by a pixel block, and the specification of the pixel block is 4 × 4, the specification of the face vector block is also 4 × 4.
Illustratively, the face vector block is
Figure 337302DEST_PATH_IMAGE035
The length of the 16 single-dimensional vector obtained by the end-to-end connection method is as follows:
Figure 714057DEST_PATH_IMAGE036
illustratively, if the face vector block 132 is shared, there are 132 groups of face quantization straight blocks, and if there are the same face quantization straight blocks, such as 20 groups of face quantization straight blocks a and 12 groups of face quantization straight blocks B, the face quantization histogram is constructed according to the number of each group. The abscissa of the face quantization histogram is the identifier of each group of face quantization straight blocks, such as A, B, C, and the ordinate is the number of each group of face quantization straight blocks, such as 20 for the face quantization straight block a and 12 for the face quantization straight block B.
And S6, calculating a spatial co-occurrence matrix of the face quantization histogram.
In detail, the spatial co-occurrence matrix of the face quantization histogram is calculated by the following method:
Figure 662421DEST_PATH_IMAGE026
Figure 99219DEST_PATH_IMAGE037
wherein C represents the spatial co-occurrence matrix, K represents a matrix dimension of the spatial co-occurrence matrix,
Figure 162728DEST_PATH_IMAGE038
each matrix element representing the spatial co-occurrence matrix,
Figure 343173DEST_PATH_IMAGE039
and
Figure 146044DEST_PATH_IMAGE040
represents the s and m pixels of the face picture,
Figure 753743DEST_PATH_IMAGE031
and
Figure 274854DEST_PATH_IMAGE041
the ith and jth groups of the face quantization histogram are represented, and d represents each matrix element of the spatial co-occurrence matrix
Figure 258991DEST_PATH_IMAGE039
And
Figure 414903DEST_PATH_IMAGE040
a chebyshev distance in the coordinate system.
And S7, converting the space co-occurrence matrix into a Markov matrix, and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture.
In detail, the converting the spatial symbiotic matrix into a markov matrix includes:
converting the spatial co-occurrence matrix into a Markov matrix by adopting the following calculation method:
Figure 193503DEST_PATH_IMAGE042
wherein the content of the first and second substances,
Figure 201911DEST_PATH_IMAGE043
a value representing the uth row and the vth column of the Markov matrix,
Figure 724159DEST_PATH_IMAGE044
each matrix element representing the spatial co-occurrence matrix, K represents a matrix dimension of the spatial co-occurrence matrix.
Further, the optimizing the face pixel block through the markov matrix to obtain an optimized face picture includes:
calculating to obtain a divergence matrix according to each face pixel block;
calculating an optimized pixel value set of the Markov matrix;
and sequentially adding the optimized pixel value sets to the divergence matrix according to the position corresponding relation to obtain the optimized face picture.
In detail, the obtaining of the divergence matrix by calculation according to each face pixel block includes:
and calculating to obtain the divergence matrix according to the following calculation formula:
Figure 32781DEST_PATH_IMAGE045
Figure 949659DEST_PATH_IMAGE046
wherein S represents the divergence matrix, N is the total number of the face pixel blocks,
Figure 976521DEST_PATH_IMAGE047
representing the ith personal face pixel block,
Figure 36880DEST_PATH_IMAGE048
the average value of all face pixel blocks is obtained.
Further, the calculating the optimized set of pixel values for the markov matrix comprises:
calculating to obtain an optimized pixel value set by adopting the following calculation formula
Figure 200009DEST_PATH_IMAGE017
Wherein the content of the first and second substances,
Figure 54832DEST_PATH_IMAGE049
a set of optimized pixel values is represented,
Figure 303411DEST_PATH_IMAGE050
a pixel probability distribution function representing a markov matrix,
Figure 167462DEST_PATH_IMAGE051
a matrix constant representing a markov matrix.
And S8, inputting the optimized human face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
The method comprises the steps of inputting an optimized face picture into a fatigue driving intelligent diagnosis model trained in advance to intelligently detect the driving fatigue degree of the optimized face picture, so as to obtain the driving fatigue grade of a driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network and comprises a convolutional layer, a standard layer, a pooling layer and a full connection layer, the convolutional layer is used for extracting a feature picture of the optimized face picture, the standard layer is used for fusing the feature picture with the bottom layer feature of the optimized face picture, the pooling layer is used for reducing the dimension of the picture and reducing the calculation complexity of the picture, and the full connection layer is used for calculating the fatigue driving category probability of the picture, so that the driving fatigue grade of the driver is output.
As an embodiment of the present invention, referring to fig. 3, the inputting the optimized human face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain a driving fatigue level of a driver includes:
s31, performing feature extraction on the optimized picture by using the convolution layer in the pre-trained fatigue driving intelligent diagnosis model to obtain a feature picture;
s32, performing bottom layer feature fusion on the feature picture and the optimized face picture by using the standard layer in the pre-trained intelligent fatigue driving diagnosis model to obtain a fusion picture;
s33, performing pooling treatment on the fusion picture by using a pooling layer in the pre-trained fatigue driving intelligent diagnosis model to obtain a pooled picture;
and S34, calculating the driving fatigue category probability of the pooled picture by using the full connection layer in the pre-trained intelligent diagnosis model for fatigue driving, and outputting the driving fatigue level of the driver by using the output layer in the pre-trained intelligent diagnosis model for fatigue driving according to the driving fatigue category probability.
Optionally, feature extraction of the optimized picture may be implemented by a convolution kernel in the convolution layer, the pooling process of the fused picture may be implemented by a maximum/minimum pooling function in the pooling layer, and the driving fatigue category probability of the pooled picture may be calculated by an activation function in the fully-connected layer, such as a softmax function.
Further, in an optional embodiment of the present invention, the feature picture and the optimized face picture are subjected to bottom layer feature fusion by using the following formula to obtain a fusion picture:
Figure 960929DEST_PATH_IMAGE021
wherein the content of the first and second substances,
Figure 721075DEST_PATH_IMAGE052
representing a fusion picture, H representing the characteristics of the characteristic picture and the optimized face picture, X representing the characteristic picture and the optimized face picture,
Figure 456950DEST_PATH_IMAGE053
represents the fusion characteristic mean function of the characteristic picture and the optimized face picture,
Figure 593533DEST_PATH_IMAGE054
a fusion characteristic standard deviation function representing the characteristic picture and the optimized face picture,
Figure 731253DEST_PATH_IMAGE055
and expressing the normalization functions of the feature picture, the optimized human face picture semantic feature and the voiceprint feature.
In order to solve the problems in the background art, the embodiment of the invention receives a driving starting instruction to start monitoring equipment which is installed in a cab in advance; the driving state of the driver is captured in real time by utilizing the monitoring equipment to obtain a driving picture so as to shoot the driving state of the driver in real time and ensure the real-time analysis of the state of the subsequent driver; secondly, the original face picture of the driver is extracted from the driving picture and then split into a plurality of face pixel blocks, the fatigue condition of the driver can be diagnosed by analyzing the face state of the driver, the state analysis accuracy of a subsequent driver is guaranteed, the plurality of face pixel blocks are projected into a pre-constructed coordinate system to obtain a plurality of face vector blocks, histogram mapping is performed on the plurality of face vector blocks to obtain a face quantization histogram, a space co-occurrence matrix of the face quantization histogram is calculated and then converted into a Markov matrix, the face pixel blocks are optimized through the Markov matrix to obtain an optimized face picture, the finally obtained face picture can be guaranteed to be in the optimal state, and the fatigue state analysis accuracy of the subsequent driver is further guaranteed; further, the optimized face picture is input to a fatigue driving intelligent diagnosis model which is trained in advance, so that the driving fatigue degree of the optimized face picture is intelligently detected, and the driving fatigue grade of the driver is obtained. Therefore, the intelligent analysis method for fatigue driving provided by the invention can realize the intelligent analysis of the driving state of the driver and improve the accuracy of the analysis of the fatigue state of the driver.
Example 2:
fig. 4 is a functional block diagram of an apparatus for intelligently analyzing fatigue driving according to an embodiment of the present invention.
The apparatus 100 for intelligently analyzing fatigue driving according to the present invention may be installed in an electronic device. According to the realized functions, the apparatus 100 for intelligently analyzing fatigue driving may include a monitoring device starting module 101, a driving picture capturing module 102, a face picture splitting module 103, a face pixel block projecting module 104, a face vector block mapping module 105, a co-occurrence matrix calculating module 106, a face picture optimizing module 107, and a fatigue level detecting module 108. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
The monitoring equipment starting module 101 is used for receiving a driving starting instruction and starting monitoring equipment which is installed in a cab in advance according to the driving starting instruction;
the driving picture capturing module 102 is configured to capture a driving state of a driver in real time by using the monitoring device to obtain a driving picture;
the face image splitting module 103 is configured to extract an original face image of a driver from the driving image, and split the original face image into a plurality of face pixel blocks;
the face pixel block projection module 104 is configured to project the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks;
the face vector block mapping module 105 is configured to perform histogram mapping on the face vector blocks to obtain a face quantization histogram;
the co-occurrence matrix calculation module 106 is configured to calculate a spatial co-occurrence matrix of the face quantization histogram, where the calculation method is as follows:
Figure 193459DEST_PATH_IMAGE056
Figure 384006DEST_PATH_IMAGE057
wherein C represents the spatial co-occurrence matrix, K represents a matrix dimension of the spatial co-occurrence matrix,
Figure 589860DEST_PATH_IMAGE058
each matrix element representing the spatial co-occurrence matrix,
Figure 582087DEST_PATH_IMAGE059
and
Figure 949614DEST_PATH_IMAGE060
represents the s-th and m-th pixels of the face picture,
Figure 660081DEST_PATH_IMAGE061
and
Figure 404046DEST_PATH_IMAGE062
the ith and jth groups of the face quantization histogram are represented, and d represents each matrix element of the spatial co-occurrence matrix
Figure 14894DEST_PATH_IMAGE059
And
Figure 287743DEST_PATH_IMAGE060
a chebyshev distance in the coordinate system;
the face picture optimization module 107 is configured to convert the spatial co-occurrence matrix into a markov matrix, and optimize the face pixel block through the markov matrix to obtain an optimized face picture;
the fatigue level detection module 108 is configured to input the optimized face picture to a fatigue driving intelligent diagnosis model trained in advance to obtain a driving fatigue level of the driver, where the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
In detail, the specific implementation manner of using each module in the device 100 for intelligently analyzing fatigue driving in the embodiment of the present invention is the same as that in embodiment 1, and is not repeated here.
Example 3:
fig. 5 is a schematic structural diagram of an electronic device for implementing an intelligent fatigue driving analysis method according to an embodiment of the present invention.
The electronic device 1 may include a processor 10, a memory 11 and a bus 12, and may further include a computer program, such as a fatigue driving intelligent analysis method program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of a fatigue driving intelligent analysis method program, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., a fatigue driving intelligent analysis method program, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus 12 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 12 may be divided into an address bus, a data bus, a control bus, etc. The bus 12 is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 5 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The intelligent fatigue driving analysis method program stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions, and when running in the processor 10, can realize:
receiving a driving starting instruction, and starting monitoring equipment which is pre-installed in a cab according to the driving starting instruction;
capturing the driving state of a driver in real time by using the monitoring equipment to obtain a driving picture;
picking an original face picture of a driver from the driving picture, and splitting the original face picture into a plurality of face pixel blocks;
projecting the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks;
performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram;
and calculating a spatial co-occurrence matrix of the face quantization histogram, wherein the calculation method comprises the following steps:
Figure 485506DEST_PATH_IMAGE063
Figure 33162DEST_PATH_IMAGE064
wherein C represents the spatial co-occurrence matrix, K represents a matrix dimension of the spatial co-occurrence matrix,
Figure 999981DEST_PATH_IMAGE058
each matrix element representing the spatial co-occurrence matrix,
Figure 709311DEST_PATH_IMAGE059
and
Figure 627327DEST_PATH_IMAGE060
represents the s and m pixels of the face picture,
Figure 978673DEST_PATH_IMAGE061
and
Figure 268840DEST_PATH_IMAGE062
presentation instrumentThe ith and jth groups of the face quantization histogram, d represents each matrix element of the spatial co-occurrence matrix
Figure 680230DEST_PATH_IMAGE059
And
Figure 55848DEST_PATH_IMAGE060
a chebyshev distance in the coordinate system;
converting the space co-occurrence matrix into a Markov matrix, and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture;
and inputting the optimized face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
Specifically, the specific implementation method of the processor 10 for the instruction may refer to the description of the relevant steps in the embodiments corresponding to fig. 1 to fig. 5, which is not repeated herein.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
receiving a driving starting instruction, and starting monitoring equipment which is pre-installed in a cab according to the driving starting instruction;
capturing the driving state of a driver in real time by using the monitoring equipment to obtain a driving picture;
picking an original face picture of a driver from the driving picture, and splitting the original face picture into a plurality of face pixel blocks;
projecting the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks;
performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram;
and calculating a spatial co-occurrence matrix of the face quantization histogram, wherein the calculation method comprises the following steps:
Figure 945307DEST_PATH_IMAGE065
Figure 385253DEST_PATH_IMAGE066
wherein C represents the spatial co-occurrence matrix, K represents a matrix dimension of the spatial co-occurrence matrix,
Figure 436386DEST_PATH_IMAGE058
each matrix element representing the spatial co-occurrence matrix,
Figure 830458DEST_PATH_IMAGE059
and
Figure 523607DEST_PATH_IMAGE060
represents the s and m pixels of the face picture,
Figure 522787DEST_PATH_IMAGE061
and
Figure 275980DEST_PATH_IMAGE062
the ith and jth groups of the face quantization histogram are represented, and d represents each matrix element of the spatial co-occurrence matrix
Figure 124725DEST_PATH_IMAGE059
And
Figure 621565DEST_PATH_IMAGE060
a chebyshev distance in the coordinate system;
converting the space co-occurrence matrix into a Markov matrix, and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture;
and inputting the optimized face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. An intelligent analysis method for fatigue driving, the method comprising:
receiving a driving starting instruction, and starting monitoring equipment which is pre-installed in a driving cabin according to the driving starting instruction;
capturing the driving state of a driver in real time by using the monitoring equipment to obtain a driving picture;
picking up an original face picture of a driver from the driving picture, and splitting the original face picture into a plurality of face pixel blocks;
projecting the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks;
performing histogram mapping on the face vector blocks to obtain a face quantization histogram;
and calculating a spatial co-occurrence matrix of the face quantization histogram, wherein the calculation method comprises the following steps:
Figure DEST_PATH_IMAGE001
Figure 473040DEST_PATH_IMAGE002
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE003
representing the spatial co-occurrence matrix in a spatial co-occurrence matrix,
Figure 629215DEST_PATH_IMAGE004
a matrix dimension representing the spatial co-occurrence matrix,
Figure DEST_PATH_IMAGE005
each matrix element representing the spatial co-occurrence matrix,
Figure 239319DEST_PATH_IMAGE006
and
Figure DEST_PATH_IMAGE007
representing the face picture
Figure 457942DEST_PATH_IMAGE008
And
Figure DEST_PATH_IMAGE009
a plurality of pixels, each of which is a pixel,
Figure 897014DEST_PATH_IMAGE010
and
Figure DEST_PATH_IMAGE011
representing the quantization histogram of the face
Figure DEST_PATH_IMAGE013
And a first
Figure 251903DEST_PATH_IMAGE014
The number of the groups is set to be,
Figure 547755DEST_PATH_IMAGE015
each matrix element representing the spatial co-occurrence matrix
Figure 378308DEST_PATH_IMAGE016
And
Figure DEST_PATH_IMAGE017
a chebyshev distance in the coordinate system;
converting the space co-occurrence matrix into a Markov matrix, and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture;
and inputting the optimized face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
2. The intelligent analysis method for fatigue driving according to claim 1, wherein the splitting the original face picture into a plurality of face pixel blocks comprises:
performing low-pass filtering pretreatment on the original face picture by using a moving average filter to obtain a primary face picture;
based on a pre-constructed sliding window, performing pixel splitting on the primary face picture to obtain a plurality of face pixel blocks, wherein the number of the face pixel blocks is as follows:
Figure 902961DEST_PATH_IMAGE018
wherein
Figure DEST_PATH_IMAGE019
The number of the face pixel blocks is the number,
Figure 830466DEST_PATH_IMAGE020
and the picture specification of the primary face picture is obtained.
3. The intelligent analysis method for fatigue driving according to claim 1, wherein the performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram comprises:
receiving a pre-constructed vector square block set, wherein the vector square block set comprises 120 vector square blocks, and each vector square block consists of a single-dimensional vector with the column length of 16;
converting each face vector block into a single-dimensional vector with the length of 16 according to an end-to-end connection mode;
sequentially calculating the Manhattan distance between each 1-dimensional vector and each vector square in the vector square set, and selecting the vector square block with the minimum Manhattan distance to determine the vector square block as a face quantization square;
and grouping the face quantization histogram blocks corresponding to each single-dimensional vector to construct and obtain the face quantization histogram.
4. The intelligent analysis method for fatigue driving of claim 1, wherein said converting the spatial co-occurrence matrix into a markov matrix comprises:
converting the spatial co-occurrence matrix into a Markov matrix by adopting the following calculation method:
Figure DEST_PATH_IMAGE021
wherein the content of the first and second substances,
Figure 782372DEST_PATH_IMAGE022
represents the Markov matrix to
Figure DEST_PATH_IMAGE023
Line for mobile communication terminal
Figure 428117DEST_PATH_IMAGE024
The value of the column is such that,
Figure DEST_PATH_IMAGE025
each matrix element representing the spatial co-occurrence matrix,
Figure 490882DEST_PATH_IMAGE026
a matrix dimension representing the spatial co-occurrence matrix.
5. The intelligent analysis method for fatigue driving according to claim 1, wherein said optimizing the block of pixels of the face by the markov matrix to obtain an optimized picture of the face comprises:
calculating to obtain a divergence matrix according to each face pixel block;
calculating an optimized pixel value set of the Markov matrix;
and sequentially adding the optimized pixel value sets to the divergence matrix according to the position corresponding relation to obtain the optimized face picture.
6. The intelligent analysis method for fatigue driving according to claim 5, wherein said calculating a divergence matrix from each of said blocks of face pixels comprises:
and calculating to obtain the divergence matrix according to the following calculation formula:
Figure DEST_PATH_IMAGE027
Figure 538473DEST_PATH_IMAGE028
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE029
a matrix of the divergence is represented by a matrix of divergences,
Figure 661281DEST_PATH_IMAGE030
is the total number of the face pixel blocks,
Figure DEST_PATH_IMAGE031
represents the first
Figure DEST_PATH_IMAGE033
The block of pixels of the face of the person,
Figure 325480DEST_PATH_IMAGE034
the corresponding mean value of all human face pixel blocks.
7. The intelligent analysis method for fatigue driving of claim 5, wherein said computing the optimized set of pixel values for the Markov matrix comprises:
calculating to obtain an optimized pixel value set by adopting the following calculation formula
Figure DEST_PATH_IMAGE035
Wherein the content of the first and second substances,
Figure 16155DEST_PATH_IMAGE036
a set of optimized pixel values is represented,
Figure DEST_PATH_IMAGE037
a pixel probability distribution function representing a markov matrix,
Figure 918252DEST_PATH_IMAGE038
a matrix constant representing a markov matrix.
8. The intelligent analysis method for fatigue driving according to any one of claims 1 to 7, wherein the inputting the optimized human face picture into a pre-trained intelligent diagnosis model for fatigue driving to obtain the driving fatigue level of the driver comprises:
performing feature extraction on the optimized picture by using the convolution layer in the fatigue driving intelligent diagnosis model which is trained in advance to obtain a feature picture;
performing bottom layer feature fusion on the feature picture and the optimized face picture by using a standard layer in the pre-trained intelligent fatigue driving diagnosis model to obtain a fusion picture;
pooling the fused picture by using a pooling layer in the pre-trained fatigue driving intelligent diagnosis model to obtain a pooled picture;
and calculating the driving fatigue class probability of the pooled picture by utilizing the full connection layer in the pre-trained intelligent diagnosis model for fatigue driving, and outputting the driving fatigue grade of the driver by utilizing the output layer in the pre-trained intelligent diagnosis model for fatigue driving according to the driving fatigue class probability.
9. The intelligent analysis method for fatigue driving according to claim 8, wherein the performing bottom-layer feature fusion on the feature picture and the optimized face picture by using a standard layer in the pre-trained intelligent diagnosis model for fatigue driving to obtain a fused picture comprises:
and performing bottom layer feature fusion on the feature picture and the optimized face picture by using the following formula to obtain a fusion picture:
Figure DEST_PATH_IMAGE039
wherein the content of the first and second substances,
Figure 211961DEST_PATH_IMAGE040
representing a fusion picture, H representing the characteristics of the characteristic picture and the optimized face picture, X representing the characteristic picture and the optimized face picture,
Figure DEST_PATH_IMAGE041
represents the fusion characteristic mean function of the characteristic picture and the optimized face picture,
Figure 832298DEST_PATH_IMAGE042
a fusion characteristic standard deviation function representing the characteristic picture and the optimized face picture,
Figure DEST_PATH_IMAGE043
is used as the normalization function of (1).
10. An apparatus for intelligent analysis of fatigue driving, the apparatus comprising:
the monitoring equipment starting module is used for receiving a driving starting instruction and starting monitoring equipment which is arranged in a cab in advance according to the driving starting instruction;
the driving picture capturing module is used for capturing the driving state of the driver in real time by utilizing the monitoring equipment to obtain a driving picture;
the face picture splitting module is used for picking an original face picture of a driver from the driving picture and splitting the original face picture into a plurality of face pixel blocks;
the human face pixel block projection module is used for projecting the human face pixel blocks into a pre-constructed coordinate system to obtain a plurality of human face vector blocks;
the face vector block mapping module is used for performing histogram mapping on the face vector blocks to obtain a face quantization histogram;
a co-occurrence matrix calculation module, configured to calculate a spatial co-occurrence matrix of the face quantization histogram, where the calculation method is as follows:
Figure 502445DEST_PATH_IMAGE044
Figure DEST_PATH_IMAGE045
wherein the content of the first and second substances,
Figure 259049DEST_PATH_IMAGE046
representing the spatial co-occurrence matrix
Figure DEST_PATH_IMAGE047
A matrix dimension representing the spatial co-occurrence matrix,
Figure 723659DEST_PATH_IMAGE048
each matrix element representing the spatial co-occurrence matrix,
Figure DEST_PATH_IMAGE049
and
Figure 831292DEST_PATH_IMAGE050
represents the face picture
Figure 508393DEST_PATH_IMAGE008
And
Figure 588344DEST_PATH_IMAGE009
a plurality of pixels, each of which is a pixel,
Figure DEST_PATH_IMAGE051
and
Figure 738703DEST_PATH_IMAGE052
representing the quantization histogram of the face
Figure DEST_PATH_IMAGE053
And
Figure DEST_PATH_IMAGE055
group of
Figure 349944DEST_PATH_IMAGE056
Each matrix representing the spatial co-occurrence matrix
Figure DEST_PATH_IMAGE057
And
Figure 361893DEST_PATH_IMAGE058
a chebyshev distance in the coordinate system;
the human face picture optimization module is used for converting the space co-occurrence matrix into a Markov matrix and optimizing the human face pixel block through the Markov matrix to obtain an optimized human face picture;
and the fatigue grade detection module is used for inputting the optimized face picture into a fatigue driving intelligent diagnosis model which is trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
CN202210651936.5A 2022-06-10 2022-06-10 Intelligent analysis method and device for fatigue driving Active CN114758403B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210651936.5A CN114758403B (en) 2022-06-10 2022-06-10 Intelligent analysis method and device for fatigue driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210651936.5A CN114758403B (en) 2022-06-10 2022-06-10 Intelligent analysis method and device for fatigue driving

Publications (2)

Publication Number Publication Date
CN114758403A CN114758403A (en) 2022-07-15
CN114758403B true CN114758403B (en) 2022-09-13

Family

ID=82336965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210651936.5A Active CN114758403B (en) 2022-06-10 2022-06-10 Intelligent analysis method and device for fatigue driving

Country Status (1)

Country Link
CN (1) CN114758403B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542257A (en) * 2011-12-20 2012-07-04 东南大学 Driver fatigue level detection method based on video sensor
CN114241452A (en) * 2021-12-17 2022-03-25 武汉理工大学 Image recognition-based driver multi-index fatigue driving detection method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5680667B2 (en) * 2009-12-02 2015-03-04 タタ コンサルタンシー サービシズ リミテッドTATA Consultancy Services Limited System and method for identifying driver wakefulness
US10867195B2 (en) * 2018-03-12 2020-12-15 Microsoft Technology Licensing, Llc Systems and methods for monitoring driver state

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542257A (en) * 2011-12-20 2012-07-04 东南大学 Driver fatigue level detection method based on video sensor
CN114241452A (en) * 2021-12-17 2022-03-25 武汉理工大学 Image recognition-based driver multi-index fatigue driving detection method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Real Time Intelligent Driver Fatigue Alarm System Based On Video Sequences;P.Ratnaka等;《International Journal of Engineering Research and Applications》;20160430;第53-59页 *
在线字典学习形变模型的疲劳状态识别方法;王辉等;《哈尔滨工程大学学报》;20170405(第06期);第892-897页 *
基于眼部自商图-梯度图共生矩阵的疲劳驾驶检测;潘剑凯 等;《中国图象图形学》;20210131;第154-164页 *

Also Published As

Publication number Publication date
CN114758403A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN112395978B (en) Behavior detection method, behavior detection device and computer readable storage medium
CN112446919B (en) Object pose estimation method and device, electronic equipment and computer storage medium
CN112446025A (en) Federal learning defense method and device, electronic equipment and storage medium
CN112100425B (en) Label labeling method and device based on artificial intelligence, electronic equipment and medium
CN111311010A (en) Vehicle risk prediction method and device, electronic equipment and readable storage medium
CN116168350B (en) Intelligent monitoring method and device for realizing constructor illegal behaviors based on Internet of things
CN111274937A (en) Fall detection method and device, electronic equipment and computer-readable storage medium
CN115457451B (en) Constant temperature and humidity test box monitoring method and device based on Internet of things
CN112528909A (en) Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN111985449A (en) Rescue scene image identification method, device, equipment and computer medium
CN114022841A (en) Personnel monitoring and identifying method and device, electronic equipment and readable storage medium
CN112528903B (en) Face image acquisition method and device, electronic equipment and medium
CN111950707B (en) Behavior prediction method, device, equipment and medium based on behavior co-occurrence network
CN112329666A (en) Face recognition method and device, electronic equipment and storage medium
CN114758403B (en) Intelligent analysis method and device for fatigue driving
CN115690615B (en) Video stream-oriented deep learning target recognition method and system
CN113255456B (en) Inactive living body detection method, inactive living body detection device, electronic equipment and storage medium
CN112507903B (en) False face detection method, false face detection device, electronic equipment and computer readable storage medium
CN114049676A (en) Fatigue state detection method, device, equipment and storage medium
CN114187476A (en) Vehicle insurance information checking method, device, equipment and medium based on image analysis
CN113869218A (en) Face living body detection method and device, electronic equipment and readable storage medium
CN113343882A (en) Crowd counting method and device, electronic equipment and storage medium
CN112541436A (en) Concentration degree analysis method and device, electronic equipment and computer storage medium
CN114677652B (en) Illegal behavior monitoring method and device
CN111652226B (en) Picture-based target identification method and device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant