CN114758403B - Intelligent analysis method and device for fatigue driving - Google Patents
- Publication number: CN114758403B (application CN202210651936.5A)
- Authority
- CN
- China
- Prior art keywords
- face
- picture
- driving
- matrix
- fatigue
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
- B60W2040/0818—Inactivity or incapacity of driver
- B60W2040/0827—Inactivity or incapacity of driver due to sleepiness
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2540/00—Input parameters relating to occupants
- B60W2540/229—Attention level, e.g. attentive to driving, reading or sleeping
Abstract
The invention relates to the field of intelligent decision-making, and in particular to an intelligent analysis method and device for fatigue driving, comprising the following steps: receiving a driving start instruction to start monitoring equipment pre-installed in a cab; capturing the driving state of a driver in real time with the monitoring equipment to obtain a driving picture; extracting an original face picture of the driver from the driving picture and splitting it into a plurality of face pixel blocks; projecting the face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks; performing histogram mapping on the face vector blocks to obtain a face quantization histogram; calculating a spatial co-occurrence matrix of the face quantization histogram and converting it into a Markov matrix, then optimizing the face pixel blocks through the Markov matrix to obtain an optimized face picture; and inputting the optimized face picture into a pre-trained fatigue driving intelligent diagnosis model to obtain the driving fatigue level of the driver. The invention can realize intelligent analysis of the driver's driving state and improve the accuracy of driver fatigue state analysis.
Description
Technical Field
The invention relates to the field of intelligent decision making, in particular to an intelligent analysis method and device for fatigue driving.
Background
Driving fatigue refers to the imbalance between a driver's physiological and psychological functions after prolonged driving, which objectively degrades driving skill. Accurately judging the driving fatigue state of a driver is therefore important for ensuring driving safety.
At present, a driver is usually reminded based on a preset driving duration during road driving, but a preset duration cannot accurately capture the driver's actual driving state, so the driver's fatigue information cannot be accurately analyzed.
Disclosure of Invention
The invention provides an intelligent analysis method and device for fatigue driving, and mainly aims to realize intelligent analysis of driving states of drivers and improve the accuracy of fatigue state analysis of the drivers.
In order to achieve the above object, the present invention provides an intelligent analysis method for fatigue driving, comprising:
receiving a driving starting instruction, and starting monitoring equipment which is pre-installed in a cab according to the driving starting instruction;
capturing the driving state of a driver in real time by using the monitoring equipment to obtain a driving picture;
extracting an original face picture of the driver from the driving picture, and splitting the original face picture into a plurality of face pixel blocks;
projecting the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks;
performing histogram mapping on the face vector blocks to obtain a face quantization histogram;
and calculating a spatial co-occurrence matrix of the face quantization histogram, wherein the calculation method comprises the following steps:
$$c_{i,j} = \#\left\{(p_s, p_m) \mid p_s \in h_i,\; p_m \in h_j,\; \lVert p_s - p_m \rVert_\infty = d\right\},\qquad i,j = 1,\dots,K$$

wherein $C = (c_{i,j})$ represents the spatial co-occurrence matrix, $K$ represents the matrix dimension of the spatial co-occurrence matrix, $c_{i,j}$ represents each matrix element of the spatial co-occurrence matrix, $p_s$ and $p_m$ represent the $s$-th and $m$-th pixels of the face picture, $h_i$ and $h_j$ represent the $i$-th and $j$-th groups of the face quantization histogram, and $d$ represents the Chebyshev distance in the coordinate system between the pixel pairs counted by each matrix element of the spatial co-occurrence matrix;
converting the space co-occurrence matrix into a Markov matrix, and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture;
and inputting the optimized face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
Optionally, the splitting the original face picture into a plurality of face pixel blocks includes:
performing low-pass filtering preprocessing on the original face picture by using a moving average filter to obtain a primary face picture;
based on a pre-constructed sliding window, performing pixel splitting on the primary face picture to obtain a plurality of face pixel blocks, wherein the number of the face pixel blocks is as follows:
$$n = \frac{W \times H}{4 \times 4}$$

wherein $n$ is the number of face pixel blocks, and $W \times H$ is the picture specification of the primary face picture.
Optionally, the performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram includes:
receiving a pre-constructed vector histogram block set, wherein the vector histogram block set comprises 120 vector histogram blocks, and each vector histogram block consists of a single-dimensional vector with a column length of 16;
converting each face vector block into a single-dimensional vector of length 16 by joining its rows end to end;
sequentially calculating the Manhattan distance between each single-dimensional vector and each vector histogram block in the set, and selecting the vector histogram block with the minimum Manhattan distance as the face quantization histogram block of that vector;
and grouping the face quantization histogram blocks corresponding to each single-dimensional vector to construct the face quantization histogram.
Optionally, the converting the spatial co-occurrence matrix into a Markov matrix includes:
converting the spatial co-occurrence matrix into a Markov matrix by adopting the following calculation method:
$$m_{u,v} = \frac{c_{u,v}}{\sum_{k=1}^{K} c_{u,k}},\qquad u,v = 1,\dots,K$$

wherein $m_{u,v}$ represents the value in row $u$ and column $v$ of the Markov matrix, $c_{u,v}$ represents each matrix element of the spatial co-occurrence matrix, and $K$ represents the matrix dimension of the spatial co-occurrence matrix.
Optionally, the optimizing the face pixel block through the Markov matrix to obtain an optimized face picture includes:
calculating to obtain a divergence matrix according to each face pixel block;
calculating an optimized pixel value set of the Markov matrix;
and sequentially adding the optimized pixel value set to the divergence matrix according to the position corresponding relation to obtain the optimized face picture.
Optionally, the obtaining a divergence matrix by calculation according to each face pixel block includes:
and calculating to obtain the divergence matrix according to the following calculation formula:
$$S = \sum_{i=1}^{N} (x_i - \bar{x})(x_i - \bar{x})^{\mathsf T}$$

wherein $S$ represents the divergence matrix, $N$ is the total number of face pixel blocks, $x_i$ represents the $i$-th face pixel block, and $\bar{x}$ is the mean of all face pixel blocks.
Optionally, the calculating an optimized set of pixel values of the Markov matrix comprises:
calculating the optimized pixel value set by the following formula:

$$P = \alpha \, f(M)$$

wherein $P$ represents the optimized pixel value set, $f(\cdot)$ represents the pixel probability distribution function of the Markov matrix $M$, and $\alpha$ represents the matrix constant of the Markov matrix.
Optionally, the inputting the optimized human face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain a driving fatigue level of the driver includes:
performing feature extraction on the optimized picture by using the convolution layer in the fatigue driving intelligent diagnosis model which is trained in advance to obtain a feature picture;
performing bottom layer feature fusion on the feature picture and the optimized face picture by using a standard layer in the pre-trained intelligent fatigue driving diagnosis model to obtain a fusion picture;
pooling the fused picture by using a pooling layer in the pre-trained fatigue driving intelligent diagnosis model to obtain a pooled picture;
and calculating the driving fatigue category probability of the pooled picture by using a full connection layer in the pre-trained intelligent diagnosis model for fatigue driving, and outputting the driving fatigue grade of the driver by using an output layer in the pre-trained intelligent diagnosis model for fatigue driving according to the driving fatigue category probability.
Optionally, the performing bottom-layer feature fusion on the feature picture and the optimized face picture by using the standard layer in the pre-trained intelligent fatigue driving diagnosis model to obtain a fusion picture includes:
and performing bottom-layer feature fusion on the feature picture and the optimized face picture by using the following formula to obtain a fusion picture:

$$Y = H + \gamma\!\left(\frac{X - \mu(X)}{\sigma(X)}\right)$$

wherein $Y$ represents the fusion picture, $H$ represents the features of the feature picture and the optimized face picture, $X$ represents the feature picture and the optimized face picture, $\mu(\cdot)$ represents the fusion feature mean function of the feature picture and the optimized face picture, $\sigma(\cdot)$ represents the fusion feature standard deviation function of the feature picture and the optimized face picture, and $\gamma(\cdot)$ represents the normalization function of the semantic features of the feature picture and the optimized face picture.
In order to solve the above problems, the present invention further provides an apparatus for intelligently analyzing fatigue driving, the apparatus comprising:
the monitoring equipment starting module is used for receiving a driving starting instruction and starting monitoring equipment which is arranged in a cab in advance according to the driving starting instruction;
the driving picture capturing module is used for capturing the driving state of the driver in real time by utilizing the monitoring equipment to obtain a driving picture;
the face picture splitting module is used for extracting an original face picture of the driver from the driving picture and splitting the original face picture into a plurality of face pixel blocks;
the human face pixel block projection module is used for projecting the human face pixel blocks into a pre-constructed coordinate system to obtain a plurality of human face vector blocks;
the face vector block mapping module is used for performing histogram mapping on the face vector blocks to obtain a face quantization histogram;
a co-occurrence matrix calculation module, configured to calculate a spatial co-occurrence matrix of the face quantization histogram, where the calculation method is as follows:
$$c_{i,j} = \#\left\{(p_s, p_m) \mid p_s \in h_i,\; p_m \in h_j,\; \lVert p_s - p_m \rVert_\infty = d\right\},\qquad i,j = 1,\dots,K$$

wherein $C = (c_{i,j})$ represents the spatial co-occurrence matrix, $K$ represents the matrix dimension of the spatial co-occurrence matrix, $c_{i,j}$ represents each matrix element of the spatial co-occurrence matrix, $p_s$ and $p_m$ represent the $s$-th and $m$-th pixels of the face picture, $h_i$ and $h_j$ represent the $i$-th and $j$-th groups of the face quantization histogram, and $d$ represents the Chebyshev distance in the coordinate system between the pixel pairs counted by each matrix element of the spatial co-occurrence matrix;
the face picture optimization module is used for converting the space co-occurrence matrix into a Markov matrix and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture;
and the fatigue grade detection module is used for inputting the optimized face picture into a fatigue driving intelligent diagnosis model which is trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
In order to solve the problems described in the background art, an embodiment of the invention receives a driving start instruction and starts monitoring equipment pre-installed in a cab. The monitoring equipment captures the driving state of the driver in real time to obtain a driving picture, so that the driver's state can be analyzed as it occurs. Next, the original face picture of the driver is extracted from the driving picture and split into a plurality of face pixel blocks; since the driver's fatigue condition can be diagnosed by analyzing the facial state, this safeguards the accuracy of the subsequent state analysis. The face pixel blocks are projected into a pre-constructed coordinate system to obtain a plurality of face vector blocks, histogram mapping is performed on the face vector blocks to obtain a face quantization histogram, a spatial co-occurrence matrix of the face quantization histogram is calculated and converted into a Markov matrix, and the face pixel blocks are optimized through the Markov matrix to obtain an optimized face picture, ensuring that the final face picture is in an optimal state and further improving the accuracy of the subsequent fatigue state analysis. Finally, the optimized face picture is input to a pre-trained fatigue driving intelligent diagnosis model, which intelligently detects its driving fatigue degree and outputs the driving fatigue level of the driver. Therefore, the intelligent analysis method and device for fatigue driving provided by the invention can realize intelligent analysis of the driver's driving state and improve the accuracy of driver fatigue state analysis.
Drawings
Fig. 1 is a schematic flow chart of an intelligent fatigue driving analysis method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart showing a detailed implementation of one of the steps in FIG. 1;
FIG. 3 is a schematic flow chart showing a detailed implementation of another step in FIG. 1;
FIG. 4 is a functional block diagram of an apparatus for intelligently analyzing fatigue driving according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device for implementing the fatigue driving intelligent analysis method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides an intelligent analysis method for fatigue driving. The executing subject of the fatigue driving intelligent analysis method includes, but is not limited to, at least one of electronic devices such as a server and a terminal, which can be configured to execute the method provided by the embodiments of the present application. In other words, the fatigue driving intelligent analysis method may be performed by software or hardware installed in a terminal device or a server device. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Example 1:
fig. 1 is a schematic flow chart of an intelligent fatigue driving analysis method according to an embodiment of the present invention. In this embodiment, the intelligent analysis method for fatigue driving includes:
and S1, receiving a driving starting instruction, and starting monitoring equipment pre-installed in a cab according to the driving starting instruction.
The embodiment of the invention mainly aims to monitor the fatigue state of the driver in real time so as to remind the driver to drive carefully. A driving start instruction is therefore received first; the driving start instruction is generally linked to the vehicle engine and can be triggered automatically after the engine starts, so that the monitoring equipment pre-installed in the cab is started according to the driving start instruction.
The monitoring device is typically mounted in a fixed position within the cab of the vehicle, the position being chosen so that the device can capture the driver's face.
And S2, capturing the driving state of the driver in real time by using the monitoring equipment to obtain a driving picture.
It can be understood that after the monitoring device is started, the driving state of the driver can be shot in real time, so that a driving picture is obtained, wherein the driving picture comprises the face state of the driver.
S3, extracting an original face picture of the driver from the driving picture, and splitting the original face picture into a plurality of face pixel blocks.
In the embodiment of the present invention, the fatigue condition of the driver is mainly diagnosed by analyzing the facial state of the driver, and therefore, in detail, the extracting the original face picture of the driver from the driving picture includes:
inputting the driving picture into a face recognition model which is constructed in advance, wherein the face recognition model comprises a YOLO model;
and recognizing the face area of the driver according to the YOLO model to obtain the original face picture.
It should be explained that the YOLO model is an end-to-end target detection algorithm, has the advantage of high detection speed, and is applicable to an application scenario of detecting a human face in real time in the embodiment of the present invention.
Further, the splitting the original face picture into a plurality of face pixel blocks includes:
performing low-pass filtering preprocessing on the original face picture by using a moving average filter to obtain a primary face picture;
based on a pre-constructed sliding window, performing pixel splitting on the primary face picture to obtain a plurality of face pixel blocks, wherein the number of the face pixel blocks is as follows:
$$n = \frac{W \times H}{4 \times 4}$$

wherein $n$ is the number of face pixel blocks, and $W \times H$ is the picture specification of the primary face picture.
It should be explained that the moving average filter can effectively eliminate high-frequency noise in the original face image, and keep low-frequency pixel points more useful for face recognition.
Further, in the embodiment of the present invention, the specification of each pixel block is 4 × 4; that is, a primary face picture with picture specification $W \times H$ is split into pixel blocks of specification 4 × 4.
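As a sketch of the preprocessing and splitting described above, in pure Python: the 3 × 3 averaging window is an assumption, since the patent fixes only the moving-average filter type and the 4 × 4 block specification.

```python
def moving_average_filter(img, k=3):
    """Low-pass filter: replace each pixel by the mean of its k*k
    neighborhood (edge pixels use only the neighbors that exist).
    img is a list of rows of pixel values."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    r = k // 2
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - r), min(h, y + r + 1))
                    for nx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def split_into_blocks(img, bs=4):
    """Split a H*W picture into non-overlapping bs*bs pixel blocks
    (sliding window with stride bs): H*W / (4*4) blocks for bs=4."""
    h, w = len(img), len(img[0])
    blocks = []
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            blocks.append([row[x:x + bs] for row in img[y:y + bs]])
    return blocks

# a synthetic 8x8 "primary face picture"
img = [[(y * 8 + x) % 256 for x in range(8)] for y in range(8)]
smoothed = moving_average_filter(img)
blocks = split_into_blocks(smoothed)
print(len(blocks))  # 8*8 / (4*4) = 4
```

For an 8 × 8 picture the formula above gives 8 × 8 / 16 = 4 face pixel blocks, which is what the sketch produces.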
And S4, projecting the face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks.
It should be understood that, in order to improve the recognition accuracy of fatigue driving, each pixel point of the original face picture needs to be optimized. Each face pixel block is therefore projected into a pre-constructed coordinate system to obtain a plurality of face vector blocks. Each vector block represents the pixel block mapped into the coordinate system: it contains not only the values of the pixel points but also a direction, indicated by the line connecting each pixel point to the origin.
And S5, performing histogram mapping on the face vector blocks to obtain a face quantization histogram.
In detail, referring to fig. 2, the performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram includes:
S21, receiving a pre-constructed vector histogram block set, wherein the vector histogram block set comprises 120 vector histogram blocks, and each vector histogram block consists of a single-dimensional vector with a column length of 16;
S22, converting each face vector block into a single-dimensional vector of length 16 by joining its rows end to end;
S23, sequentially calculating the Manhattan distance between each single-dimensional vector and each vector histogram block in the set, and selecting the vector histogram block with the minimum Manhattan distance as the face quantization histogram block of that vector;
and S24, grouping the face quantization histogram blocks corresponding to each single-dimensional vector to construct the face quantization histogram.
It should be explained that the vector histogram block set consists of vectors photographed and quantized in advance under different driving environments of the driver. For example, vector histogram block A is obtained by vectorization in a dim driving environment, vector histogram block B in a well-lit driving environment, and vector histogram block C in a late-night driving environment.
Furthermore, each vector histogram block consists of a single-dimensional vector of 16 numbers, so to calculate the Manhattan distance between a face vector block and each vector histogram block, their dimensions must be the same. Since each face vector block is projected from a pixel block, and the specification of each pixel block is 4 × 4, the specification of each face vector block is also 4 × 4.
Illustratively, a face vector block $\begin{pmatrix} v_{11} & \cdots & v_{14} \\ \vdots & & \vdots \\ v_{41} & \cdots & v_{44} \end{pmatrix}$ joined end to end row by row yields the single-dimensional vector of length 16: $(v_{11}, v_{12}, \dots, v_{14}, v_{21}, \dots, v_{44})$.
Illustratively, if there are 132 face vector blocks in total, there are 132 face quantization histogram blocks. Identical face quantization histogram blocks are grouped, for example 20 blocks of face quantization histogram block A and 12 blocks of face quantization histogram block B, and the face quantization histogram is constructed according to the count of each group. The abscissa of the face quantization histogram is the identifier of each group of face quantization histogram blocks, such as A, B, C, and the ordinate is the count of each group, such as 20 for block A and 12 for block B.
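Steps S21 to S24 amount to vector quantization against a fixed codebook followed by a label histogram. A minimal sketch, using a hypothetical 3-entry codebook in place of the patent's 120 vector histogram blocks:

```python
def flatten_block(block):
    """Join the rows of a 4x4 face vector block end to end into a
    single-dimensional vector of length 16."""
    return [v for row in block for v in row]

def manhattan(a, b):
    """Manhattan (L1) distance between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def quantize(blocks, codebook):
    """Assign each flattened block to the codebook entry at minimum
    Manhattan distance; return the histogram {label: count}."""
    hist = {}
    for block in blocks:
        vec = flatten_block(block)
        label = min(codebook, key=lambda k: manhattan(vec, codebook[k]))
        hist[label] = hist.get(label, 0) + 1
    return hist

# hypothetical codebook of three 16-dim entries (the patent assumes 120)
codebook = {"A": [0] * 16, "B": [8] * 16, "C": [16] * 16}
blocks = [[[1] * 4] * 4, [[7] * 4] * 4, [[15] * 4] * 4, [[0] * 4] * 4]
print(quantize(blocks, codebook))  # {'A': 2, 'B': 1, 'C': 1}
```

The dictionary returned by `quantize` is exactly the face quantization histogram described above: identifiers on the abscissa, per-group counts on the ordinate.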
And S6, calculating a spatial co-occurrence matrix of the face quantization histogram.
In detail, the spatial co-occurrence matrix of the face quantization histogram is calculated by the following method:
$$c_{i,j} = \#\left\{(p_s, p_m) \mid p_s \in h_i,\; p_m \in h_j,\; \lVert p_s - p_m \rVert_\infty = d\right\},\qquad i,j = 1,\dots,K$$

wherein $C = (c_{i,j})$ represents the spatial co-occurrence matrix, $K$ represents the matrix dimension of the spatial co-occurrence matrix, $c_{i,j}$ represents each matrix element of the spatial co-occurrence matrix, $p_s$ and $p_m$ represent the $s$-th and $m$-th pixels of the face picture, $h_i$ and $h_j$ represent the $i$-th and $j$-th groups of the face quantization histogram, and $d$ represents the Chebyshev distance in the coordinate system between the pixel pairs counted by each matrix element of the spatial co-occurrence matrix.
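One way to read this computation: for every ordered pair of pixels at Chebyshev distance d, count which pair of histogram groups their quantization labels fall into. A sketch on a tiny label map, where d = 1 and the brute-force pair enumeration are assumptions for illustration:

```python
def cooccurrence(labels, K, d=1):
    """C[i][j] counts ordered pixel pairs (p_s, p_m) whose quantization
    labels are i and j and whose Chebyshev (L-infinity) distance in the
    image grid equals d."""
    h, w = len(labels), len(labels[0])
    C = [[0] * K for _ in range(K)]
    for y in range(h):
        for x in range(w):
            for ny in range(h):
                for nx in range(w):
                    if max(abs(ny - y), abs(nx - x)) == d:
                        C[labels[y][x]][labels[ny][nx]] += 1
    return C

labels = [[0, 1],
          [1, 0]]  # quantization group index per pixel, K = 2
C = cooccurrence(labels, K=2)
print(C)  # [[2, 4], [4, 2]]
```

On this 2 × 2 map every cell has three neighbors at Chebyshev distance 1 (including diagonals), giving 12 ordered pairs in total, split between same-label and cross-label counts.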
And S7, converting the space co-occurrence matrix into a Markov matrix, and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture.
In detail, the converting the spatial co-occurrence matrix into a Markov matrix includes:
converting the spatial co-occurrence matrix into a Markov matrix by adopting the following calculation method:
$$m_{u,v} = \frac{c_{u,v}}{\sum_{k=1}^{K} c_{u,k}},\qquad u,v = 1,\dots,K$$

wherein $m_{u,v}$ represents the value in row $u$ and column $v$ of the Markov matrix, $c_{u,v}$ represents each matrix element of the spatial co-occurrence matrix, and $K$ represents the matrix dimension of the spatial co-occurrence matrix.
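A common reading of this conversion is row normalization of the co-occurrence counts, so that each row of the resulting matrix sums to 1 and behaves like a transition probability distribution; a sketch under that assumption:

```python
def to_markov(C):
    """Divide each element of the co-occurrence matrix by its row sum;
    rows with no counts are left as all zeros."""
    M = []
    for row in C:
        s = sum(row)
        M.append([c / s if s else 0.0 for c in row])
    return M

C = [[2, 4], [4, 2]]  # a small spatial co-occurrence matrix
M = to_markov(C)
print(M)
```

Each row of `M` sums to 1, which is the defining property of a (row-stochastic) Markov matrix.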
Further, the optimizing the face pixel block through the Markov matrix to obtain an optimized face picture includes:
calculating to obtain a divergence matrix according to each face pixel block;
calculating an optimized pixel value set of the Markov matrix;
and sequentially adding the optimized pixel value sets to the divergence matrix according to the position corresponding relation to obtain the optimized face picture.
In detail, the obtaining of the divergence matrix by calculation according to each face pixel block includes:
and calculating to obtain the divergence matrix according to the following calculation formula:
$$S = \sum_{i=1}^{N} (x_i - \bar{x})(x_i - \bar{x})^{\mathsf T}$$

wherein $S$ represents the divergence matrix, $N$ is the total number of face pixel blocks, $x_i$ represents the $i$-th face pixel block, and $\bar{x}$ is the mean of all face pixel blocks.
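Treating each face pixel block as a flattened vector, the divergence matrix reads as the scatter matrix of the blocks about their mean; a sketch on two toy 2-dimensional "blocks" (real blocks would be 16-dimensional after flattening):

```python
def scatter_matrix(blocks):
    """S = sum_i (x_i - mean)(x_i - mean)^T over all flattened
    face pixel blocks, each given as a flat list of numbers."""
    n, dim = len(blocks), len(blocks[0])
    mean = [sum(b[k] for b in blocks) / n for k in range(dim)]
    S = [[0.0] * dim for _ in range(dim)]
    for b in blocks:
        diff = [b[k] - mean[k] for k in range(dim)]
        for r in range(dim):
            for c in range(dim):
                S[r][c] += diff[r] * diff[c]
    return S

blocks = [[0.0, 0.0], [2.0, 2.0]]  # two toy 2-dim pixel blocks
S = scatter_matrix(blocks)
print(S)  # [[2.0, 2.0], [2.0, 2.0]]
```

The mean vector here is (1, 1), the deviations are (−1, −1) and (1, 1), and the two outer products sum to the printed matrix.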
Further, the calculating the optimized set of pixel values of the Markov matrix comprises:
calculating the optimized pixel value set by the following formula:

$$P = \alpha \, f(M)$$

wherein $P$ represents the optimized pixel value set, $f(\cdot)$ represents the pixel probability distribution function of the Markov matrix $M$, and $\alpha$ represents the matrix constant of the Markov matrix.
And S8, inputting the optimized human face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
The optimized face picture is input into a pre-trained fatigue driving intelligent diagnosis model, which intelligently detects the driving fatigue degree of the optimized face picture and thereby obtains the driving fatigue level of the driver. The fatigue driving intelligent diagnosis model is constructed from a convolutional neural network and comprises a convolution layer, a standard layer, a pooling layer and a full connection layer. The convolution layer extracts a feature picture from the optimized face picture; the standard layer fuses the feature picture with the bottom-layer features of the optimized face picture; the pooling layer reduces the dimensionality of the picture and its computational complexity; and the full connection layer calculates the fatigue driving category probabilities of the picture, from which the driving fatigue level of the driver is output.
As an embodiment of the present invention, referring to fig. 3, the inputting the optimized human face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain a driving fatigue level of a driver includes:
s31, performing feature extraction on the optimized picture by using the convolution layer in the pre-trained fatigue driving intelligent diagnosis model to obtain a feature picture;
s32, performing bottom layer feature fusion on the feature picture and the optimized face picture by using the standard layer in the pre-trained intelligent fatigue driving diagnosis model to obtain a fusion picture;
s33, performing pooling treatment on the fusion picture by using a pooling layer in the pre-trained fatigue driving intelligent diagnosis model to obtain a pooled picture;
and S34, calculating the driving fatigue category probability of the pooled picture by using the full connection layer in the pre-trained intelligent diagnosis model for fatigue driving, and outputting the driving fatigue level of the driver by using the output layer in the pre-trained intelligent diagnosis model for fatigue driving according to the driving fatigue category probability.
Optionally, feature extraction of the optimized face picture may be implemented by the convolution kernels in the convolutional layer, the pooling processing of the fusion picture may be implemented by a maximum (or minimum) pooling function in the pooling layer, and the driving fatigue category probability of the pooled picture may be calculated by an activation function in the full connection layer, such as the softmax function.
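For illustration, the final stages of the S31–S34 pipeline (pooling, full connection layer, softmax) might look as in the minimal numpy sketch below; the feature-map size, the weights, and the assumption of four fatigue grades are hypothetical choices, not values taken from the patent:

```python
import numpy as np

def max_pool2d(x, k=2):
    """Max-pool a (H, W) feature map with a k x k window and stride k."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((8, 8))      # fusion picture from the standard layer (assumed size)
pooled = max_pool2d(feature_map)               # S33: pooling reduces the 8x8 map to 4x4
flat = pooled.ravel()                          # flatten for the full connection layer
W = rng.standard_normal((4, flat.size)) * 0.1  # hypothetical FC weights, 4 fatigue grades assumed
b = np.zeros(4)
probs = softmax(W @ flat + b)                  # S34: driving fatigue category probabilities
fatigue_level = int(np.argmax(probs))          # output layer picks the most probable grade
```

The softmax guarantees the four category probabilities sum to one, so the argmax can be read directly as the driver's fatigue grade.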
Further, in an optional embodiment of the present invention, the feature picture and the optimized face picture are subjected to bottom layer feature fusion by using the following formula to obtain a fusion picture:
wherein Y denotes the fusion picture, H denotes the features of the feature picture and the optimized face picture, X denotes the feature picture and the optimized face picture, μ(X) denotes the fusion-feature mean function of the feature picture and the optimized face picture, σ(X) denotes the fusion-feature standard-deviation function of the feature picture and the optimized face picture, and N(·) denotes the normalization function of the semantic features of the feature picture and the optimized face picture.
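Since the fusion formula itself appears only as an image in the original, the sketch below assumes a standard mean/standard-deviation normalization applied to the stacked feature picture and optimized face picture; the function name `fuse` and all array sizes are illustrative assumptions:

```python
import numpy as np

def fuse(feature, base, eps=1e-6):
    """Hypothetical bottom-layer fusion: stack the feature picture with the
    optimized face picture and normalize by their joint mean and standard
    deviation, mirroring a (H - mu) / sigma style normalization."""
    x = np.stack([feature, base])   # X: feature picture and optimized face picture
    mu = x.mean()                   # fusion-feature mean
    sigma = x.std() + eps           # fusion-feature standard deviation (eps avoids division by zero)
    return (x - mu) / sigma         # normalized fusion picture

rng = np.random.default_rng(1)
feature = rng.standard_normal((8, 8))
base = rng.standard_normal((8, 8))
fused = fuse(feature, base)
```

After this normalization the fused tensor has approximately zero mean and unit variance, which is the usual motivation for such a layer before pooling.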
In order to solve the problems in the background art, the embodiment of the invention receives a driving starting instruction to start monitoring equipment installed in advance in the cab. The driving state of the driver is captured in real time by the monitoring equipment to obtain a driving picture, so that the driver's state is photographed in real time and can be analyzed in real time afterwards. Secondly, the original face picture of the driver is extracted from the driving picture and split into a plurality of face pixel blocks; since the fatigue condition of the driver can be diagnosed by analyzing the face state, this guarantees the accuracy of the subsequent state analysis. The plurality of face pixel blocks are projected into a pre-constructed coordinate system to obtain a plurality of face vector blocks; histogram mapping is performed on the face vector blocks to obtain a face quantization histogram; a spatial co-occurrence matrix of the face quantization histogram is calculated and converted into a Markov matrix; and the face pixel blocks are optimized through the Markov matrix to obtain an optimized face picture, ensuring that the finally obtained face picture is in the optimal state and further guaranteeing the accuracy of the subsequent fatigue-state analysis. Further, the optimized face picture is input into a pre-trained fatigue driving intelligent diagnosis model to intelligently detect its driving fatigue degree and obtain the driving fatigue grade of the driver. Therefore, the intelligent analysis method for fatigue driving provided by the invention can intelligently analyze the driving state of the driver and improve the accuracy of the driver fatigue-state analysis.
Example 2:
fig. 4 is a functional block diagram of an apparatus for intelligently analyzing fatigue driving according to an embodiment of the present invention.
The apparatus 100 for intelligently analyzing fatigue driving according to the present invention may be installed in an electronic device. According to the realized functions, the apparatus 100 for intelligently analyzing fatigue driving may include a monitoring device starting module 101, a driving picture capturing module 102, a face picture splitting module 103, a face pixel block projecting module 104, a face vector block mapping module 105, a co-occurrence matrix calculating module 106, a face picture optimizing module 107, and a fatigue level detecting module 108. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
The monitoring equipment starting module 101 is used for receiving a driving starting instruction and starting monitoring equipment which is installed in a cab in advance according to the driving starting instruction;
the driving picture capturing module 102 is configured to capture a driving state of a driver in real time by using the monitoring device to obtain a driving picture;
the face image splitting module 103 is configured to extract an original face image of a driver from the driving image, and split the original face image into a plurality of face pixel blocks;
the face pixel block projection module 104 is configured to project the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks;
the face vector block mapping module 105 is configured to perform histogram mapping on the face vector blocks to obtain a face quantization histogram;
the co-occurrence matrix calculation module 106 is configured to calculate a spatial co-occurrence matrix of the face quantization histogram, where the calculation method is as follows:
wherein C denotes the spatial co-occurrence matrix, K denotes the matrix dimension of the spatial co-occurrence matrix, c_ij denotes each matrix element of the spatial co-occurrence matrix, x_s and x_m denote the s-th and m-th pixels of the face picture, h_i and h_j denote the i-th and j-th groups of the face quantization histogram, and d denotes the Chebyshev distance, in the coordinate system, between the pixels counted by each matrix element c_ij;
the face picture optimization module 107 is configured to convert the spatial co-occurrence matrix into a markov matrix, and optimize the face pixel block through the markov matrix to obtain an optimized face picture;
the fatigue level detection module 108 is configured to input the optimized face picture to a fatigue driving intelligent diagnosis model trained in advance to obtain a driving fatigue level of the driver, where the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
In detail, the specific implementation manner of using each module in the device 100 for intelligently analyzing fatigue driving in the embodiment of the present invention is the same as that in embodiment 1, and is not repeated here.
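As a hedged sketch of what the co-occurrence matrix calculation module 106 might compute, the code below counts pairs of quantization labels that co-occur at Chebyshev distance exactly d; the toy label map, the dimension K, and the choice d = 1 are made-up illustrations, not the patent's parameters:

```python
import numpy as np

def spatial_cooccurrence(labels, K, d=1):
    """Build a K x K co-occurrence matrix: each element counts ordered pixel
    pairs whose quantization labels co-occur at Chebyshev distance exactly d."""
    h, w = labels.shape
    C = np.zeros((K, K), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            for dy in range(-d, d + 1):
                for dx in range(-d, d + 1):
                    if max(abs(dy), abs(dx)) != d:
                        continue  # keep only offsets at Chebyshev distance exactly d
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        C[labels[y, x], labels[ny, nx]] += 1
    return C

labels = np.array([[0, 1],
                   [1, 2]])            # toy face-quantization label map, K = 3
C = spatial_cooccurrence(labels, K=3)  # 3 x 3 spatial co-occurrence matrix
```

Because every ordered pair is counted in both directions, the resulting matrix is symmetric, which matches the usual behavior of gray-level co-occurrence statistics.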
Example 3:
fig. 5 is a schematic structural diagram of an electronic device for implementing an intelligent fatigue driving analysis method according to an embodiment of the present invention.
The electronic device 1 may include a processor 10, a memory 11 and a bus 12, and may further include a computer program, such as a fatigue driving intelligent analysis method program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of a fatigue driving intelligent analysis method program, but also to temporarily store data that has been output or is to be output.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 10 is the control unit of the electronic device: it connects the various components of the whole electronic device by various interfaces and lines, and executes the functions and processes the data of the electronic device 1 by running or executing the programs or modules stored in the memory 11 (e.g., the fatigue driving intelligent analysis method program) and calling the data stored in the memory 11.
The bus 12 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 12 may be divided into an address bus, a data bus, a control bus, etc. The bus 12 is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 5 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The intelligent fatigue driving analysis method program stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions, and when running in the processor 10, can realize:
receiving a driving starting instruction, and starting monitoring equipment which is pre-installed in a cab according to the driving starting instruction;
capturing the driving state of a driver in real time by using the monitoring equipment to obtain a driving picture;
extracting an original face picture of the driver from the driving picture, and splitting the original face picture into a plurality of face pixel blocks;
projecting the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks;
performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram;
and calculating a spatial co-occurrence matrix of the face quantization histogram, wherein the calculation method comprises the following steps:
wherein C denotes the spatial co-occurrence matrix, K denotes the matrix dimension of the spatial co-occurrence matrix, c_ij denotes each matrix element of the spatial co-occurrence matrix, x_s and x_m denote the s-th and m-th pixels of the face picture, h_i and h_j denote the i-th and j-th groups of the face quantization histogram, and d denotes the Chebyshev distance, in the coordinate system, between the pixels counted by each matrix element c_ij;
converting the space co-occurrence matrix into a Markov matrix, and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture;
and inputting the optimized face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
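The patent's exact formula for converting the spatial co-occurrence matrix into a Markov matrix is given as an image and is not reproduced here; a common conversion, shown below purely as an assumption, row-normalizes the co-occurrence matrix so that each row becomes a probability distribution (a row-stochastic transition matrix):

```python
import numpy as np

def to_markov(C):
    """Row-normalize a co-occurrence matrix so each row sums to 1,
    yielding a row-stochastic (Markov) transition matrix."""
    C = C.astype(float)
    row_sums = C.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # leave all-zero rows as zeros instead of dividing by zero
    return C / row_sums

C = np.array([[0, 2, 1],
              [2, 2, 2],
              [1, 2, 0]])  # example co-occurrence counts
M = to_markov(C)           # Markov matrix used to optimize the face pixel blocks
```

Entry M[i, j] can then be read as the empirical probability of moving from quantization group i to group j, which is what makes the matrix usable as a transition (Markov) model.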
Specifically, the specific implementation method of the processor 10 for the instruction may refer to the description of the relevant steps in the embodiments corresponding to fig. 1 to fig. 5, which is not repeated herein.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
receiving a driving starting instruction, and starting monitoring equipment which is pre-installed in a cab according to the driving starting instruction;
capturing the driving state of a driver in real time by using the monitoring equipment to obtain a driving picture;
extracting an original face picture of the driver from the driving picture, and splitting the original face picture into a plurality of face pixel blocks;
projecting the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks;
performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram;
and calculating a spatial co-occurrence matrix of the face quantization histogram, wherein the calculation method comprises the following steps:
wherein C denotes the spatial co-occurrence matrix, K denotes the matrix dimension of the spatial co-occurrence matrix, c_ij denotes each matrix element of the spatial co-occurrence matrix, x_s and x_m denote the s-th and m-th pixels of the face picture, h_i and h_j denote the i-th and j-th groups of the face quantization histogram, and d denotes the Chebyshev distance, in the coordinate system, between the pixels counted by each matrix element c_ij;
converting the space co-occurrence matrix into a Markov matrix, and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture;
and inputting the optimized face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. An intelligent analysis method for fatigue driving, the method comprising:
receiving a driving starting instruction, and starting monitoring equipment which is pre-installed in a driving cabin according to the driving starting instruction;
capturing the driving state of a driver in real time by using the monitoring equipment to obtain a driving picture;
extracting an original face picture of the driver from the driving picture, and splitting the original face picture into a plurality of face pixel blocks;
projecting the plurality of face pixel blocks into a pre-constructed coordinate system to obtain a plurality of face vector blocks;
performing histogram mapping on the face vector blocks to obtain a face quantization histogram;
and calculating a spatial co-occurrence matrix of the face quantization histogram, wherein the calculation method comprises the following steps:
wherein C denotes the spatial co-occurrence matrix, K denotes the matrix dimension of the spatial co-occurrence matrix, c_ij denotes each matrix element of the spatial co-occurrence matrix, x_s and x_m denote the s-th and m-th pixels of the face picture, h_i and h_j denote the i-th and j-th groups of the face quantization histogram, and d denotes the Chebyshev distance, in the coordinate system, between the pixels counted by each matrix element c_ij;
converting the space co-occurrence matrix into a Markov matrix, and optimizing the face pixel block through the Markov matrix to obtain an optimized face picture;
and inputting the optimized face picture into a fatigue driving intelligent diagnosis model trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
2. The intelligent analysis method for fatigue driving according to claim 1, wherein the splitting the original face picture into a plurality of face pixel blocks comprises:
performing low-pass filtering preprocessing on the original face picture by using a moving average filter to obtain a primary face picture;
performing pixel splitting on the primary face picture based on a pre-constructed sliding window to obtain a plurality of face pixel blocks, wherein the number of face pixel blocks is:
3. The intelligent analysis method for fatigue driving according to claim 1, wherein the performing histogram mapping on the plurality of face vector blocks to obtain a face quantization histogram comprises:
receiving a pre-constructed vector block set, wherein the vector block set comprises 120 vector blocks, and each vector block consists of a one-dimensional vector of length 16;
converting each face vector block into a one-dimensional vector of length 16 by end-to-end concatenation;
sequentially calculating the Manhattan distance between each one-dimensional vector and each vector block in the vector block set, and selecting the vector block with the minimum Manhattan distance as the face quantization block;
and grouping the face quantization blocks corresponding to each one-dimensional vector to construct the face quantization histogram.
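A minimal sketch of the quantization step described in claim 3, assuming 4×4 face vector blocks (so that end-to-end concatenation yields length-16 vectors) and a random stand-in for the pre-constructed 120-entry vector block set; both assumptions are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
codebook = rng.integers(0, 256, size=(120, 16))  # stand-in for the 120 pre-constructed vector blocks

def quantize(face_vector_block):
    """Flatten a 4x4 face vector block end-to-end into a length-16 vector and
    return the index of the codebook entry at minimal Manhattan distance."""
    v = np.asarray(face_vector_block).ravel()    # end-to-end concatenation
    dists = np.abs(codebook - v).sum(axis=1)     # Manhattan (L1) distance to each vector block
    return int(np.argmin(dists))                 # face quantization block index

blocks = rng.integers(0, 256, size=(50, 4, 4))   # 50 hypothetical face vector blocks
indices = [quantize(b) for b in blocks]
histogram = np.bincount(indices, minlength=120)  # face quantization histogram (120 groups)
```

Each face vector block contributes exactly one count, so the histogram total equals the number of blocks, and a block identical to a codebook entry quantizes to that entry's index.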
4. The intelligent analysis method for fatigue driving of claim 1, wherein said converting the spatial co-occurrence matrix into a markov matrix comprises:
converting the spatial co-occurrence matrix into a Markov matrix by adopting the following calculation method:
5. The intelligent analysis method for fatigue driving according to claim 1, wherein said optimizing the block of pixels of the face by the markov matrix to obtain an optimized picture of the face comprises:
calculating to obtain a divergence matrix according to each face pixel block;
calculating an optimized pixel value set of the Markov matrix;
and sequentially adding the optimized pixel value sets to the divergence matrix according to the position corresponding relation to obtain the optimized face picture.
6. The intelligent analysis method for fatigue driving according to claim 5, wherein said calculating a divergence matrix from each of said blocks of face pixels comprises:
and calculating to obtain the divergence matrix according to the following calculation formula:
7. The intelligent analysis method for fatigue driving of claim 5, wherein said computing the optimized set of pixel values for the Markov matrix comprises:
calculating the optimized pixel value set by using the following calculation formula:
8. The intelligent analysis method for fatigue driving according to any one of claims 1 to 7, wherein the inputting the optimized human face picture into a pre-trained intelligent diagnosis model for fatigue driving to obtain the driving fatigue level of the driver comprises:
performing feature extraction on the optimized face picture by using the convolutional layer in the pre-trained fatigue driving intelligent diagnosis model to obtain a feature picture;
performing bottom-layer feature fusion on the feature picture and the optimized face picture by using the standard layer in the pre-trained fatigue driving intelligent diagnosis model to obtain a fusion picture;
performing pooling processing on the fusion picture by using the pooling layer in the pre-trained fatigue driving intelligent diagnosis model to obtain a pooled picture;
and calculating the driving fatigue category probability of the pooled picture by using the full connection layer in the pre-trained fatigue driving intelligent diagnosis model, and outputting the driving fatigue grade of the driver through the output layer of the model according to the driving fatigue category probability.
9. The intelligent analysis method for fatigue driving according to claim 8, wherein the performing bottom-layer feature fusion on the feature picture and the optimized face picture by using a standard layer in the pre-trained intelligent diagnosis model for fatigue driving to obtain a fused picture comprises:
and performing bottom layer feature fusion on the feature picture and the optimized face picture by using the following formula to obtain a fusion picture:
wherein Y denotes the fusion picture, H denotes the features of the feature picture and the optimized face picture, X denotes the feature picture and the optimized face picture, μ(X) denotes the fusion-feature mean function of the feature picture and the optimized face picture, σ(X) denotes the fusion-feature standard-deviation function of the feature picture and the optimized face picture, and N(·) denotes the corresponding normalization function.
10. An apparatus for intelligent analysis of fatigue driving, the apparatus comprising:
the monitoring equipment starting module is used for receiving a driving starting instruction and starting monitoring equipment which is arranged in a cab in advance according to the driving starting instruction;
the driving picture capturing module is used for capturing the driving state of the driver in real time by utilizing the monitoring equipment to obtain a driving picture;
the face picture splitting module is used for extracting an original face picture of the driver from the driving picture and splitting the original face picture into a plurality of face pixel blocks;
the human face pixel block projection module is used for projecting the human face pixel blocks into a pre-constructed coordinate system to obtain a plurality of human face vector blocks;
the face vector block mapping module is used for performing histogram mapping on the face vector blocks to obtain a face quantization histogram;
a co-occurrence matrix calculation module, configured to calculate a spatial co-occurrence matrix of the face quantization histogram, where the calculation method is as follows:
wherein C denotes the spatial co-occurrence matrix, K denotes the matrix dimension of the spatial co-occurrence matrix, c_ij denotes each matrix element of the spatial co-occurrence matrix, x_s and x_m denote the s-th and m-th pixels of the face picture, h_i and h_j denote the i-th and j-th groups of the face quantization histogram, and d denotes the Chebyshev distance, in the coordinate system, between the pixels counted by each matrix element c_ij;
the human face picture optimization module is used for converting the space co-occurrence matrix into a Markov matrix and optimizing the human face pixel block through the Markov matrix to obtain an optimized human face picture;
and the fatigue grade detection module is used for inputting the optimized face picture into a fatigue driving intelligent diagnosis model which is trained in advance to obtain the driving fatigue grade of the driver, wherein the fatigue driving intelligent diagnosis model is constructed by a convolutional neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210651936.5A CN114758403B (en) | 2022-06-10 | 2022-06-10 | Intelligent analysis method and device for fatigue driving |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114758403A CN114758403A (en) | 2022-07-15 |
CN114758403B true CN114758403B (en) | 2022-09-13 |
Family
ID=82336965
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210651936.5A Active CN114758403B (en) | 2022-06-10 | 2022-06-10 | Intelligent analysis method and device for fatigue driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114758403B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102542257A (en) * | 2011-12-20 | 2012-07-04 | 东南大学 | Driver fatigue level detection method based on video sensor |
CN114241452A (en) * | 2021-12-17 | 2022-03-25 | 武汉理工大学 | Image recognition-based driver multi-index fatigue driving detection method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5680667B2 (en) * | 2009-12-02 | 2015-03-04 | タタ コンサルタンシー サービシズ リミテッドTATA Consultancy Services Limited | System and method for identifying driver wakefulness |
US10867195B2 (en) * | 2018-03-12 | 2020-12-15 | Microsoft Technology Licensing, Llc | Systems and methods for monitoring driver state |
Non-Patent Citations (3)
Title |
---|
A Real Time Intelligent Driver Fatigue Alarm System Based On Video Sequences; P. Ratnaka et al.; International Journal of Engineering Research and Applications; April 2016; pp. 53-59 *
Fatigue state recognition method based on online dictionary learning deformable models; Wang Hui et al.; Journal of Harbin Engineering University; April 5, 2017 (No. 06); pp. 892-897 *
Fatigue driving detection based on the co-occurrence matrix of eye self-quotient image and gradient image; Pan Jiankai et al.; Journal of Image and Graphics; January 2021; pp. 154-164 *
Also Published As
Publication number | Publication date |
---|---|
CN114758403A (en) | 2022-07-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||