CN109426757B - Driver head posture monitoring method, system, medium and equipment based on deep learning - Google Patents


Info

Publication number
CN109426757B
CN109426757B (application CN201710716168.6A)
Authority
CN
China
Prior art keywords
information
module
driver
attention point
detection
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201710716168.6A
Other languages
Chinese (zh)
Other versions
CN109426757A (en)
Inventor
金会庆
王江波
李伟
程泽良
马晓峰
Current Assignee
Anhui Sanlian Applied Traffic Technology Co ltd
Original Assignee
Anhui Sanlian Applied Traffic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Anhui Sanlian Applied Traffic Technology Co ltd filed Critical Anhui Sanlian Applied Traffic Technology Co ltd
Priority to CN201710716168.6A priority Critical patent/CN109426757B/en
Publication of CN109426757A publication Critical patent/CN109426757A/en
Application granted granted Critical
Publication of CN109426757B publication Critical patent/CN109426757B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

A deep-learning-based method, system, medium and device for monitoring the head posture of a driver comprise the following steps: start the system through an interface operation, initialize the image-acquisition equipment, and preset the information-processing logic; establish a communication connection with the server and receive system version information; check the system version, the storage device and the image-acquisition device, and send a prompt message; the server receives the prompt message and triggers the system to collect video data, extract single-frame pictures from the video and store them as picture samples, extract the features of the current single frame, build a head-posture gaze model from those features, and train the model on the picture samples by deep learning to generate a deep-learning result set; process according to the preset processing logic to obtain the attention-point detection information; and obtain a right-or-wrong judgment of the posture from the attention-point detection information and the deep-learning result set, then generate and store a report from that judgment.

Description

Driver head posture monitoring method, system, medium and equipment based on deep learning
Technical Field
The invention relates to driving-test monitoring systems, and in particular to a deep-learning-based method, system, medium and device for monitoring the head posture of a driver.
Background
As time advances, the number of drivers in China keeps growing. In the traditional approach, driving-school examinations are monitored by simple electronic detection and reminder devices together with the manual cooperation of a coach, so driver-training efficiency is low and the quality of driving-skill learning cannot be guaranteed; as the efficiency and effect of driver education and training fall short of expectations, the shortage of driving-skill training resources becomes ever more acute. In routine motor-vehicle driving examinations, detecting the driver's line of sight is an important functional requirement of the examination process, because most examination errors are closely related to the change of the examinee's gaze; yet in the traditional setting the driving-school coach sits beside the examinee and cannot accurately detect the direction of the examinee's visual attention.
At present, driver detection mainly uses the following methods. The first is sensor-based detection, relying mainly on wearable sensors that measure the acceleration or angular velocity of each part of the driver's body in real time and then infer the driver's behavioural state from the measurements. Its drawbacks are that the wearable sensors must be carried on the body, the equipment is expensive, and the method is very inconvenient to use. The second class of techniques is based mainly on video-image analysis, directly extracting image features and detecting from that data. Its drawbacks are that background modelling is inaccurate and that detecting directly from the raw extracted turning-angle and attention-point features produces large errors, leading to many false and missed detections and low feature robustness.
In summary, the prior art suffers from wearable sensors that must be carried on the body, high equipment cost, inconvenient use, large detection errors with frequent false and missed detections, high hardware cost, weak feature robustness, low information utilization, and low accuracy in judging whether the driver's posture is right or wrong.
Disclosure of Invention
In view of the prior-art problems of high hardware cost, weak feature robustness, low information utilization and low accuracy of the right-or-wrong judgment of the driver's posture, the object of the present invention is to provide a deep-learning-based driver head-posture monitoring method, system, medium and device that solve these technical problems.
To achieve the above and other related objects, the present invention provides a deep-learning-based method for monitoring the head posture of a driver, applied to a deep-learning-based driver head-posture monitoring system, comprising: powering on the hardware through the interface, starting the deep-learning-based driver head-posture monitoring system, configuring the communication parameters, initializing the image-acquisition equipment, and presetting the information-processing logic; establishing a communication connection with the server and receiving system version information; checking the system version, the storage device and the image-acquisition device, and sending a prompt message when the check is complete; the server receiving the prompt message and triggering the system to collect video data, extract single-frame pictures from the video and store them as picture samples, extract the head-turning-angle and attention-point features of the current single frame, build a head-posture gaze model from those features, and train the model on the picture samples by deep learning to generate a deep-learning result set; processing according to the preset processing logic to obtain the driver attention-point detection information; and obtaining a right-or-wrong judgment of the driver's posture from the attention-point detection information and the deep-learning result set, then generating and storing a detection report from that judgment.
In an embodiment of the present invention, starting the deep-learning-based driver head-posture monitoring system through the interface, configuring the communication parameters, initializing the image-acquisition equipment and presetting the information-processing logic comprises: starting the hardware; checking the hardware and judging whether the system is installed on it; if so, initializing the communication parameters, the camera and the detection device; if not, first installing the system on the hardware.
In an embodiment of the present invention, establishing a communication connection with the server and receiving system version information comprises: sending a connection request to the server; judging whether an observation instruction sent by the server has been received; if so, concluding that the connection with the server is established; if not, continuing to send the connection request until the connection with the server is established.
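The connect-and-retry loop above can be sketched as follows. This is a minimal Python sketch in which `send_request` stands in for the actual system-to-server request/response exchange; that callable interface is a hypothetical assumption, not something named in the patent:

```python
import time

def establish_connection(send_request, max_attempts=None, retry_delay_s=0.0):
    """Keep sending the connection request until the server replies with
    its observation instruction. Returns the attempt number on success.

    send_request: hypothetical callable returning True once the server's
    observation instruction has been received.
    """
    attempt = 0
    while max_attempts is None or attempt < max_attempts:
        attempt += 1
        if send_request():
            return attempt  # connection with the server is established
        time.sleep(retry_delay_s)  # wait before re-sending the request
    raise ConnectionError("no observation instruction received from server")
```

With `max_attempts=None` the loop retries indefinitely, matching the "continue sending until connected" behaviour described above.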
In an embodiment of the present invention, checking the system version, the storage device and the image-acquisition device and sending a prompt message when the check is complete comprises: establishing a connection with the maintenance background; obtaining the latest version information sent by the maintenance background and judging from it whether the system is up to date; if so, concluding that the system is the latest version; if not, upgrading the system according to the upgrade information sent by the maintenance background; checking the storage hard disk and the camera and generating detection information; and sending a prompt message according to the detection information.
In an embodiment of the present invention, the server receiving the prompt message, triggering the system to collect video data, extracting single-frame pictures from the video and storing them as picture samples, extracting the head-turning-angle and attention-point features of the current single frame, building the head-posture gaze model from those features, and training the model on the picture samples by deep learning to generate the deep-learning result set comprises: the server receives the prompt message and acquires video data of the driver; the current single-frame picture is extracted from the video data by time; the turning-angle and attention-point feature data in the single frame are extracted; the turning-angle and attention-point feature data are spliced into a turning-angle and attention-point feature vector; the head-posture gaze model is built from that feature vector; global variables of the driver's attention points in the picture are extracted and compared with the picture samples to obtain model-increment information; the head-posture gaze model performs deep learning according to the model-increment information and is updated; the single-frame picture is saved; and the single-frame pictures are aggregated into the picture samples.
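The vector-splicing step above can be illustrated with a minimal sketch. The feature shapes and the use of NumPy concatenation are assumptions for illustration only, since the patent does not specify the feature dimensions:

```python
import numpy as np

def splice_features(turn_angle_feats, attention_feats):
    """Splice the head-turning-angle features and the attention-point
    features of one frame into a single feature vector, as in the
    vector-splicing step. Input shapes are illustrative assumptions."""
    turn = np.asarray(turn_angle_feats, dtype=np.float32).ravel()
    attention = np.asarray(attention_feats, dtype=np.float32).ravel()
    return np.concatenate([turn, attention])  # one fused vector per frame
```

The fused vector is what the model-building step consumes; keeping both feature groups in one vector lets a single model see head angle and attention-point evidence jointly.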
In one embodiment of the present invention, obtaining the driver attention-point detection information according to the preset processing logic comprises: extracting the turning-angle and attention-point features of the current single-frame picture; fusing the turning-angle and attention-point features into global turning-angle and attention-point features; comparing the global features with the turning-angle and attention-point feature vectors contained in the picture samples to obtain similarity data; and sorting all the similarity data to obtain the driver attention-point detection information.
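A minimal sketch of the compare-and-sort step above. Cosine similarity is an assumed metric (the patent does not name one), and the label-to-vector mapping is hypothetical:

```python
import numpy as np

def rank_attention_points(query_vec, sample_vecs):
    """Compare the fused turning-angle / attention-point feature vector of
    the current frame against the picture-sample vectors and sort by
    similarity. Returns labels from most to least similar.

    sample_vecs: dict mapping an attention-point label (hypothetical) to
    its sample feature vector.
    """
    q = np.asarray(query_vec, dtype=np.float64)
    q = q / np.linalg.norm(q)  # normalise so the dot product is cosine
    sims = {}
    for label, vec in sample_vecs.items():
        v = np.asarray(vec, dtype=np.float64)
        sims[label] = float(q @ (v / np.linalg.norm(v)))
    return sorted(sims, key=sims.get, reverse=True)
```

The first element of the returned ranking plays the role of the detection information: the attention point whose sample the current frame most resembles.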
In one embodiment of the present invention, a deep-learning-based system for monitoring the head posture of a driver comprises: a system initial module, a communication module, an automatic detection module, an image information module, a model calculation module and a result storage module. The system initial module is used to start the system through an interface operation, configure the communication parameters, initialize the image-acquisition equipment and preset the information-processing logic. The communication module is used to establish a communication connection with the server and receive system version information, and is connected with the system initial module. The automatic detection module is used to check the system version, the storage device and the image-acquisition device and send a prompt message when the check is complete, and is connected with the communication module. The image information module is used so that the server, on receiving the prompt message, triggers the system to collect video data, extracts single-frame pictures from the video and stores them as picture samples, extracts the head-turning-angle and attention-point features of the current single frame, builds the head-posture gaze model from those features, and trains the model on the picture samples by deep learning to generate the deep-learning result set. The model calculation module is used to obtain the driver attention-point detection information according to the preset processing logic, and is connected with the image information module. The result storage module is used to obtain the right-or-wrong judgment of the driver's posture from the attention-point detection information and the deep-learning result set, and to generate and store a detection report from that judgment; it is connected with the model calculation module.
In one embodiment of the present invention, a system initialization module includes: the device comprises a hardware starting module, an installation detection module, an equipment parameter detection module and an installation module; the hardware starting module is used for starting hardware equipment; the installation detection module is used for detecting the hardware equipment and judging whether the hardware equipment is provided with the system or not, and the installation detection module is connected with the hardware starting module; the equipment parameter detection module is used for initializing communication parameter information, a camera and a detection device when the hardware equipment is provided with a system and is connected with the hardware starting module; and the installation module is used for installing the system on the hardware equipment when the system is not installed on the hardware equipment, and the installation module is connected with the installation detection module.
In one embodiment of the present invention, the communication module comprises: a server request module, an instruction receiving and judging module, a connection judging module and a connection continuation request module. The server request module is used to send the connection request to the server. The instruction receiving and judging module is used to judge whether an observation instruction sent by the server has been received, and is connected with the server request module. The connection judging module is used to conclude, when the observation instruction has been received, that the connection with the server is established, and is connected with the instruction receiving and judging module. The connection continuation request module is used, when the observation instruction has not been received, to keep sending the connection request until the connection with the server is established; it is connected with the instruction receiving and judging module.
In one embodiment of the present invention, the automatic detection module comprises: a maintenance connection module, a system version module, a version judgment module, a new version module, a self-upgrade module, an equipment detection module and a follow-up action trigger module. The maintenance connection module is used to establish a connection with the maintenance background. The system version module is used to obtain the latest version information sent by the maintenance background, and is connected with the maintenance connection module. The version judgment module is used to judge from the latest version information whether the system needs upgrading, and is connected with the system version module. The new version module is used to conclude, when the system version is already the latest, that the system is the latest version, and is connected with the version judgment module. The self-upgrade module is used, when the system version is not the latest, to upgrade the system according to the upgrade information sent by the maintenance background, and is connected with the version judgment module. The equipment detection module is used to check the storage hard disk and the camera and generate detection information. The follow-up action trigger module is used to send a prompt message according to the detection information, and is connected with the version judgment module and with the equipment detection module.
In one embodiment of the present invention, the image information module comprises: a video data receiving module, a single-frame acquisition module, a turning-angle and attention-point feature vector extraction module, a vector splicing module, a model building module, a model increment module, a model training module, a single-frame storage module and a picture sample module. The video data receiving module is used so that the server receives the prompt message and acquires the driver's video data. The single-frame acquisition module is used to extract the current single-frame picture from the video data by time, and is connected with the video data receiving module. The turning-angle and attention-point feature vector extraction module is used to extract the turning-angle and attention-point feature data in the single-frame picture, and is connected with the single-frame acquisition module. The vector splicing module is used to splice the turning-angle and attention-point feature data into a turning-angle and attention-point feature vector, and is connected with the turning-angle and attention-point feature vector extraction module. The model building module is used to build the head-posture gaze model from the turning-angle and attention-point feature vector, and is connected with the vector splicing module. The model increment module is used to extract global variables of the driver's attention points in the picture and compare them to obtain model-increment information, and is connected with the model building module. The model training module is used to perform deep learning on the head-posture gaze model according to the model-increment information and update the model, and is connected with the model increment module. The single-frame storage module is used to save the single-frame picture, and is connected with the model training module. The picture sample module is used to aggregate the single-frame pictures into the picture samples, and is connected with the single-frame storage module.
In one embodiment of the present invention, the model calculation module comprises: a to-be-detected feature extraction module, a feature fusion module, a similarity comparison module and a posture calculation module. The to-be-detected feature extraction module is used to extract the turning-angle and attention-point features of the current single-frame picture. The feature fusion module is used to fuse the turning-angle and attention-point features into global turning-angle and attention-point features, and is connected with the to-be-detected feature extraction module. The similarity comparison module is used to compare the global turning-angle and attention-point features with the turning-angle and attention-point feature vectors contained in the picture samples, obtaining similarity data for eight action features: left B-pillar, left rearview mirror, interior rearview mirror, instrument panel (looking down), right B-pillar, right rearview mirror, looking ahead, and gear lever (head down). The posture calculation module is used to sort all the similarity data to obtain the driver attention-point detection information, and is connected with the similarity comparison module.
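The eight similarity scores can be mapped to an attention-point label by taking the highest-scoring class. A minimal sketch with illustrative English labels; the label ordering is an assumption, not fixed by the patent:

```python
# Eight attention-point action classes described in the patent's
# similarity-comparison step (English labels are illustrative).
ATTENTION_POINTS = [
    "left B-pillar", "left rearview mirror", "interior rearview mirror",
    "instrument panel (looking down)", "right B-pillar",
    "right rearview mirror", "looking ahead", "gear lever (head down)",
]

def best_attention_point(similarities):
    """Pick the attention-point label with the highest similarity score.

    similarities: sequence of eight scores, one per class in the
    ATTENTION_POINTS order above.
    """
    assert len(similarities) == len(ATTENTION_POINTS)
    best = max(range(len(similarities)), key=lambda i: similarities[i])
    return ATTENTION_POINTS[best]
```

The posture calculation module would then judge the posture right or wrong by checking the winning label against what the examination step requires.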
In an embodiment, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the deep-learning-based driver head-posture monitoring method provided by the present invention.
In an embodiment of the present invention, the present invention provides a device for monitoring a head posture of a driver based on deep learning, including: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored by the memory so as to enable the deep learning-based driver head posture monitoring device to execute the deep learning-based driver head posture monitoring method provided by the invention.
As described above, the deep-learning-based driver head-posture monitoring method, system, medium and device provided by the invention have the following beneficial effects. To realize whole-process electronic monitoring and judgment of the Subject Three motor-vehicle driving examination, the driving-examination visual-tracking prototype extracts video data such as the driver's posture through a vehicle-mounted camera, applies computer-vision processing including face detection and optical-flow detection using tools such as a deep-learning neural network, and completes behaviour analysis such as detecting the driver's attention point and whether the body extends out of the vehicle. This improves the objectivity and accuracy of the Subject Three examination and reduces labour cost, solving the prior-art problems of high hardware cost, weak feature robustness, low information utilization and low accuracy of the right-or-wrong judgment of the driver's posture. Every frame is examined at 15 frames per second and an action result is generated per frame; samples are trained for tracking the face and recognizing its action features for action judgment, with head-posture pictures taken from the monitoring video as the sample library, so feature robustness is strong and actual detection accuracy is high.
Drawings
Fig. 1 is a flowchart illustrating an embodiment of a method for monitoring a head pose of a driver based on deep learning according to the present invention.
Fig. 2 is a flowchart illustrating step S1 in fig. 1 in an embodiment.
Fig. 3 is a flowchart illustrating step S2 in fig. 1 in an embodiment.
Fig. 4 is a flowchart illustrating step S3 in fig. 1 in an embodiment.
Fig. 5 is a flowchart illustrating step S4 in fig. 1 in an embodiment.
Fig. 6 is a flowchart illustrating step S5 in fig. 1 in an embodiment.
Fig. 7 is a schematic structural diagram of a deep learning-based system for monitoring the head posture of a driver according to the present invention.
Fig. 8 is a block diagram of the system initial module 11 in fig. 7 according to an embodiment.
Fig. 9 is a block diagram of the communication module 12 of fig. 7 in one embodiment.
Fig. 10 is a block diagram of the automatic detection module 13 in fig. 7 according to an embodiment.
Fig. 11 is a block diagram of the image information module 14 in fig. 7 according to an embodiment.
Fig. 12 is a block diagram of the model calculation module 15 in fig. 7 according to an embodiment.
Description of the element reference numerals
Driver head posture monitoring system based on deep learning
11 system initial module
12 communication module
13 automatic detection module
14 image information module
15 model calculation module
16 result storage module
111 hardware starting module
112 installation detection module
113 equipment parameter detecting module
114 mounting module
121 server request module
122 instruction receiving and judging module
123 connection judging module
124 connection continuation request module
131 maintenance connection module
132 system version module
133 version decision module
134 new version module
135 self-upgrade module
136 equipment detection module
137 follow-up action triggering module
141 video data receiving module
142 single frame acquisition module
143 turning angle and attention point feature vector extraction module
144 vector splicing module
145 model building module
146 model increment module
147 model training module
148 single frame storage module
149 picture sample module
51 to-be-detected feature extraction module
52 feature fusion module
53 similarity comparison module
54 attitude calculation module
Description of step designations
Method steps S1-S6
Method steps S11-S14
Method steps S21-S24
Method steps S31-S37
Method steps S41-S49
Method steps S51-S54
Detailed Description
The following embodiments illustrate the implementation of the present invention; from the disclosure of this specification, those skilled in the art will readily appreciate its other advantages and effects.
Referring to figs. 1 to 12, it should be understood that the structures shown in the drawings accompanying this specification serve only to aid understanding and reading of the disclosure and do not limit the conditions under which the invention can be implemented, so they carry no substantive technical meaning; any modification of structure, change of proportion or adjustment of size that does not affect the efficacy or achievable purpose of the invention still falls within the scope of the disclosed technical content. Likewise, terms such as "upper", "lower", "left", "right", "middle" and "one" used in this specification are for clarity of description only and do not limit the implementable scope of the invention; changes or adjustments of their relative relationships, without substantive change to the technical content, are also considered within the implementable scope of the invention.
Referring to fig. 1, a flowchart of an embodiment of a method for monitoring a head pose of a driver based on deep learning according to the present invention is shown, as shown in fig. 1, the method includes:
step S1, power on the hardware through the interface, start the deep-learning-based driver head-posture monitoring system, configure the communication parameters, initialize the image-acquisition equipment, and preset the information-processing logic. The user starts the system by pressing the system start button on the main interface from a client terminal on which the driver sight-line monitoring system is installed, such as a control panel or a computer; the system automatically performs installation detection and setup and initializes hardware such as the camera, the sensors and the storage disk;
step S2, establish a communication connection with the server and receive the system version information. The system communicates over HTTP, with JSON as the data format and POST as the HTTP request method; the system sends a connection request to the server and receives its response, establishing uplink and downlink communication transmission channels between the system and the server;
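A minimal sketch of assembling such a POST request; the field names and payload shape are illustrative assumptions, as the patent specifies only HTTP, the POST method and a JSON body:

```python
import json

def build_connect_request(system_id, version):
    """Assemble the headers and JSON body of the system-to-server
    connection request. Field names are hypothetical examples."""
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"systemId": system_id, "version": version})
    return headers, body
```

The resulting headers and body would then be sent as a POST with any HTTP client to the server's endpoint to open the uplink channel.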
s3, detecting the version of system version information, a storage device and an image information acquisition device, finishing detection and sending prompt information, automatically judging the version of the system according to the latest online installation version information of the system installation version by the system, automatically installing the system according to comparison information, testing the SD card, the camera and the sensor to form a detection log file and store the detection log file, and triggering the system to process image information according to a monitoring result;
step S4, the server receives the prompt message and triggers the system to collect video data; single-frame pictures are extracted from the video and stored as picture samples; the turning-angle and attention-point features of the current single frame are extracted; the head-posture gaze model is built from those features and trained on the picture samples by deep learning to generate the deep-learning result set. Eye video data of the driver is collected by a camera installed in the cab, single-frame images from the video are stored as image-analysis samples, and the video data is stored on the SD card;
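Extracting single frames from the video "by time" can be sketched as computing which frame indices to decode. The 15 fps rate comes from the description of the beneficial effects, while the one-frame-per-second sampling interval here is an assumption:

```python
def sample_frame_indices(duration_s, fps=15.0, every_s=1.0):
    """Indices of the single frames to extract from the captured video,
    one per sampling interval. Decoding those frames would be done with
    a video library (e.g. OpenCV's VideoCapture), omitted here."""
    step = max(1, int(round(fps * every_s)))   # frames between samples
    total = int(duration_s * fps)              # total frames in the clip
    return list(range(0, total, step))
```

Each returned index names one single-frame picture to save as an image-analysis sample.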
step S5, driver attention point detection information is obtained according to the preset processing logic; the video information collected by the camera is extracted from the storage device and framed, the picture information of the current frame is extracted, a deep neural network model is constructed from the turning angle and attention point characteristic information contained in the picture information, and the deep neural network model is trained with the image analysis samples. Video data such as the driver's posture is captured by the vehicle-mounted camera and processed with computer vision algorithms, including face detection and optical flow detection, using tools such as a deep learning neural network. The video processing module acquires the video data and generates single-frame original-size 720p high-definition pictures for detecting the driver's attention point; each original-size picture is compressed into a JPG picture and stored in a dedicated directory set up for a specific measurement instruction from the superior device. When the superior device finishes a measurement, all pictures in the directory are packed, compressed and transferred to the specified directory of the SD card for later use in detecting the driver's attention point. If a face is detected within the camera's field of view, the deep learning neural network and related logic calibrate the face position and judge its orientation, namely the driver's attention point;
step S6, the driver posture correct/incorrect judgment result is acquired from the driver attention point detection information and the deep learning result set, and a detection report is generated and stored according to that result.
Please refer to fig. 2, which is a flowchart illustrating step S1 in fig. 1 in an embodiment, which specifically includes:
step S11, the hardware equipment is started; the user powers on the system hardware through the main control interface and clicks the driver sight line detection system icon on the main interface of the mobile terminal to start it. The system hardware mainly comprises several cameras installed opposite the driver's seat in the cab, detection equipment installed on the driver's seat, and a client terminal; the driving test system equipment provides no operation interface when deployed. During installation, the system software places a self-start configuration under the Autostart directory of the Ubuntu system, so that when the hardware is powered on and Ubuntu boots, the startup script is executed and the driving test system program starts automatically;
step S12, the hardware equipment is detected and it is judged whether the system is installed on it; the file paths in the system are traversed to search for an installation path of the system, and whether the driver sight line monitoring system is installed on the hardware equipment is determined from the installation path;
step S13, if so, the communication parameter information is initialized: the IP addresses of the camera, the detection device, the driving test system and the superior equipment are configured; the types of communication information, the information IDs, the communication protocols and the transmission data are initialized; and the camera and the monitoring sensor are initialized through the device list of the hardware equipment;
step S14, if not, the system is installed on the hardware equipment; when the driver sight line monitoring system is not installed on the hardware equipment or the mobile terminal, the latest version of the system installation file sent by the remote maintenance center is received and installed.
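The installation check of steps S12 to S14 amounts to a path lookup followed by a conditional install. A minimal sketch follows; the candidate directories are illustrative assumptions (the patent names the executable driverEquipment but specifies no concrete install paths), and install_fn stands in for fetching the package from the remote maintenance center:

```python
from pathlib import Path

# Hypothetical install locations; the patent does not name concrete paths.
CANDIDATE_DIRS = [Path("/opt/driver_monitor"), Path.home() / ".driver_monitor"]

def find_install_path():
    """Step S12: traverse candidate paths and return the directory that
    contains the system executable, or None if it is not installed."""
    for d in CANDIDATE_DIRS:
        if (d / "driverEquipment").exists():  # executable named in the patent
            return d
    return None

def ensure_installed(install_fn):
    """Steps S13/S14: proceed to initialization if installed,
    otherwise install first via install_fn and return the new path."""
    path = find_install_path()
    if path is None:
        path = install_fn()  # e.g. fetch the latest package from the maintenance center
    return path
```

In practice install_fn would download and unpack the latest installation file before returning its path.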
Please refer to fig. 3, which is a flowchart illustrating step S2 in fig. 1 in an embodiment, which specifically includes:
step S21, sending connection request information to a server;
step S22, judging whether an observation instruction sent by a server side is received;
step S23, if yes, the connection with the server is judged to be established;
and step S24, if not, the connection request information is continuously sent until the connection is established with the server.
Please refer to fig. 4, which is a flowchart illustrating step S3 in fig. 1 in an embodiment, which specifically includes:
step S31, a connection is established with the maintenance background and an HTTP service is provided; the JSON response messages for the examination start instruction and the measurement start instruction are parsed and packaged, operation instructions from the superior equipment are processed, JSON strings are parsed to extract the service type, the reporting of algorithm processing results is completed, and operation requests such as upgrade and maintenance from the remote maintenance center are completed;
step S32, obtaining the latest version information sent by the maintenance background, calling the corresponding sub-module, and packaging the processing result into a JSON string reply request end;
step S33, judging whether the system is upgraded according to the latest version information;
step S34, if yes, the system is judged to be the latest version;
step S35, if not, the system is upgraded according to the upgrade information sent by the maintenance background: the latest version system installation package or upgrade package sent by the online maintenance background is received, the installation data of the latest version system is obtained from the installation package or upgrade package, and the latest version is installed according to a path in the file system of the user terminal, deploying setup.py (the software installation auto-configuration script), driverEquipment.desktop (the software self-start configuration file) and Driversafe_start.sh (the software self-start script);
step S36, the storage hard disk and the camera are detected and detection information is generated; the storage hard disk, the camera and the sensor are connected to the expansion interface of the system, and the system can monitor the usage state and capacity of the storage hard disk, such as an SD card or magnetic disk, acquire the driver posture correct/incorrect judgment result, and detect the states of the camera and the sensor;
step S37, prompt information is sent according to the detection information and the system's driver posture correct/incorrect judgment result data is received; if that data shows that the whole system has passed detection, trigger information for starting data acquisition and data processing is sent to the other functional modules of the system, the software executable driverEquipment is started, and the system is entered.
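The version comparison of steps S33 to S35 can be sketched as below. The dotted numeric version format is an assumption; the patent only says the installed version is compared against the latest online version information:

```python
def parse_version(v):
    """Parse a dotted version string like '1.4.2' into a comparable tuple.
    The patent does not specify a version format; this assumes numeric dotted versions."""
    return tuple(int(part) for part in v.split("."))

def needs_upgrade(installed, latest):
    """Step S33: decide whether to upgrade by comparing the installed
    version with the latest version reported by the maintenance background."""
    return parse_version(installed) < parse_version(latest)
```

Comparing tuples rather than raw strings avoids the lexicographic trap where "1.10.0" would sort before "1.2.0".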
Please refer to fig. 5, which is a flowchart illustrating step S4 in fig. 1 in an embodiment, which specifically includes:
step S41, the server receives the prompt information to obtain the video data of the driver, the camera obtains the video image of the driver in the driving process in real time through the photosensitive imaging element, and sends the video data obtained by shooting to the image processing logic through a data bus or a wireless transmission mode;
step S42, extracting the current single-frame picture information according to the video data and time, processing the video information by the driver sight line detection system according to the preset image processing logic to obtain a single-frame original size picture and a compressed format picture, preferably, framing the video data acquired by the camera according to the timestamp, using the generated single-frame picture in an image algorithm library for corresponding analysis, and compressing and storing the picture for report generation;
step S43, extracting the turning angle and the feature data of the focus in the single-frame picture information, and storing the single-frame picture information obtained by processing the video data into a storage device for establishing a sample and extracting the image information in the subsequent operation;
step S44, splicing the turning angle and the attention point feature data to obtain a turning angle and an attention point feature vector, extracting single-frame picture information from the image storage queue and aggregating the single-frame picture information into an image analysis sample, wherein the image analysis sample is used for training a deep neural network model;
step S45, constructing a head posture sight model according to the turning angle and the attention point feature vector;
step S46, global variables of the driver's attention point in the picture are extracted and compared with the picture samples to obtain model increment information;
step S47, the head posture sight model carries out deep learning according to the model increment information and updates the head posture sight model;
step S48, storing single frame picture information;
and step S49, aggregating the single-frame picture information to obtain a picture sample.
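The splicing of step S44 and the aggregation of steps S48/S49 reduce to concatenating the two feature groups and collecting the per-frame vectors into a sample set. A minimal sketch, with illustrative feature dimensions (the patent does not fix them):

```python
def splice_features(turning_angle_feats, attention_point_feats):
    """Step S44 sketch: splice the turning-angle feature data and the
    attention point feature data into one feature vector."""
    return list(turning_angle_feats) + list(attention_point_feats)

def aggregate_samples(frames):
    """Steps S48-S49 sketch: aggregate per-frame feature vectors into an
    image analysis sample set for training the deep neural network model."""
    return [splice_features(f["turning_angle"], f["attention_point"]) for f in frames]
```

For example, three pose angles (yaw, pitch, roll) spliced with a two-value attention point feature would yield a five-dimensional vector per frame.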
Please refer to fig. 6, which is a flowchart illustrating step S5 in fig. 1 in an embodiment, which specifically includes:
step S51, the turning angle and attention point characteristic information in the current single-frame picture information is extracted: a set of local turning angle and attention point feature vectors is extracted from the processed head image data set and then fused to obtain the head posture turning angle and attention point feature vector. The sight line image and posture image of the head to be detected are preprocessed, the local head turning angle and attention point feature vectors are extracted and fused into global turning angle and attention point feature vectors, and each head posture picture in the image analysis samples is preprocessed in the same way to obtain preprocessing information matching the picture to be detected;
step S52, fusing the turning angle and the attention point feature information to obtain a global turning angle and attention point feature information, and obtaining a sample global turning angle and an attention point feature vector contained in a sample according to the to-be-detected picture preprocessing information of the image analysis sample;
step S53, the global turning angle and attention point feature information is compared with the turning angle and attention point feature vectors contained in the picture samples to obtain similarity data for eight action features: the left B-pillar, the left rearview mirror, the inside rearview mirror, looking down at the instrument panel, the right B-pillar, the right rearview mirror, looking ahead, and looking down at the gear lever. The similarity data is used to determine driver operation violations in the following scenarios of the subject-three driving test:
Scene one: before starting, failing to observe the inside and outside rearview mirrors and to turn the head to observe the traffic behind. Before starting, the left and right rearview mirrors must be observed: if the head does not deflect to the left by more than 30 degrees, the left rearview mirror is judged not to have been observed; watching the inside rearview mirror corresponds to the head deflecting rightwards by more than 30 degrees with an upward elevation angle of more than 30 degrees; observing the left rear corresponds to the head deflecting more than 60 degrees to the left. Failure to make these observations is determined as a violation.
Scene two: the line of sight leaves the direction of travel for more than 2 seconds. While the vehicle is moving, if the driver's sight leaves the front and stays deflected to one side for more than two seconds, a violation is determined.
Scene three: lowering the head to look at the gear lever while driving. If, while driving, the head is lowered for more than 2 seconds to look at the gear lever, that is, the head deflects to the right by more than 30 degrees for more than 2 seconds, or the head-down angle exceeds 30 degrees for more than 2 seconds, a violation is determined.
Scene four: while the vehicle is turning, failing to observe the road traffic through the left rearview mirror; after the left turn signal is switched on, if the examinee does not observe the left rearview mirror, i.e. the head does not deflect 30 to 60 degrees to the left, a violation is determined.
Scene five: while the vehicle is turning, failing to observe the road traffic through the right rearview mirror; after the examinee switches on the right turn signal, if the right rearview mirror is not observed, i.e. the head does not deflect 45 to 60 degrees to the right, a violation is determined.
Scene six: before changing lanes, failing to observe the inside and outside rearview mirrors and the road traffic in the direction of the lane change; after a lane-change voice command is received, or within a certain time after the driver switches on the turn signal, if the inside and outside rearview mirrors and the corresponding rear area are not observed (a head deflection of more than 60 degrees), a violation is determined.
Scene seven: before stopping, failing to observe the traffic behind and to the right through the inside and outside rearview mirrors; while the vehicle speed drops to 0 after the right turn signal has been switched on following observation and confirmation of safety, if the driver does not observe the inside rearview mirror, the right rearview mirror and the right rear, a violation is determined.
Scene eight: before opening the door to get off, failing to turn the head to observe the traffic to the left rear and right rear; when the vehicle speed is 0, if the driver does not look back to observe the left rear before opening the door, a violation is determined;
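The scenario checks share one pattern: a head-pose condition must hold (or fail to hold) continuously for a given duration. A minimal sketch for scene two (sight away from the direction of travel for more than 2 seconds) follows; the per-frame yaw angles and frame rate are assumed inputs, since the patent derives the head orientation from the head posture sight model rather than supplying angles directly, and the 30-degree deflection threshold is borrowed from the other scenarios as an illustrative cutoff:

```python
def off_road_violation(yaw_angles_deg, fps, max_seconds=2.0, yaw_limit_deg=30.0):
    """Scene two sketch: a violation occurs when the driver's sight leaves
    the front (|yaw| above yaw_limit_deg) continuously for more than
    max_seconds, measured over per-frame yaw angles at the given fps."""
    max_frames = max_seconds * fps
    run = 0  # length of the current run of off-road frames
    for yaw in yaw_angles_deg:
        run = run + 1 if abs(yaw) > yaw_limit_deg else 0
        if run > max_frames:
            return True
    return False
```

The same run-length test, with different angle conditions and durations, covers the head-down and mirror-observation scenarios.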
step S54, all the similarity data is sorted to obtain the driver attention point detection information, the similarity measure being cosine similarity.
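Steps S53 and S54 amount to scoring the extracted feature vector against a sample vector for each of the eight attention point classes with cosine similarity and sorting the scores. A minimal sketch, where the class sample vectors are illustrative stand-ins for the vectors learned from the picture samples:

```python
import math

ATTENTION_POINTS = ["left B-pillar", "left rearview mirror", "inside rearview mirror",
                    "instrument panel", "right B-pillar", "right rearview mirror",
                    "front view", "gear lever"]

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (step S54's measure)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_attention_points(feature_vec, class_vectors):
    """Steps S53-S54 sketch: score the global feature vector against each
    class's sample vector, sort descending; the top entry is the detected
    attention point."""
    scores = {name: cosine_similarity(feature_vec, vec) for name, vec in class_vectors.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

For instance, with two-dimensional toy sample vectors, a feature vector close to the "front view" sample ranks that class first.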
Referring to fig. 7, a schematic structural diagram of the deep learning-based driver head posture monitoring system of the present invention is shown. As shown in fig. 7, the deep learning-based driver head posture monitoring system 1 comprises: a system initialization module 11, a communication module 12, an automatic detection module 13, an image information module 14, a model calculation module 15 and a result storage module 16. The system initialization module 11 is used for starting the deep learning-based driver head posture monitoring system by powering on the hardware from the interface, configuring the communication parameter information, initializing the image information acquisition equipment, and presetting the information processing logic; the user starts the system by pressing the system start button on the system main interface of a client terminal, such as a control panel or a computer, on which the driver sight line monitoring system is installed, and the system automatically performs installation detection and setup and initializes hardware devices such as the camera, the sensors and the storage disk. The communication module 12 is used for establishing a communication connection with the server and receiving system version information; the system communication adopts the HTTP protocol with JSON as the communication data format, the HTTP request mode is POST, the system sends communication connection requests and responses to the server, and an uplink and downlink communication transmission channel is established between the system and the server; the communication module 12 is connected with the system initialization module 11. The automatic detection module 13 is used for detecting the version of the system version information, the storage device and the image information acquisition device, completing the detection and sending the prompt information; the system automatically compares its own installed version against the latest online installation version information, installs updates according to the comparison result, and tests the SD card, the camera and the sensor to form a detection log file, which is stored; the system is triggered to process image information according to the monitoring result; the automatic detection module 13 is connected with the communication module 12. The image information module 14 is used for the server to receive the prompt information and trigger the system to collect video data accordingly, extracting single-frame picture information from the video data and storing it as picture samples, extracting the turning angle and attention point characteristic information in the current single-frame picture information, constructing a head posture sight model from the turning angle and attention point characteristic information, and performing deep learning on the head posture sight model with the picture samples to generate a deep learning result set; the eye video data of the driver is collected through a camera installed in the cab, the single-frame image information in the video data is stored as image analysis samples, and the video data is stored on the SD card; the image information module 14 is connected with the automatic detection module 13. The model calculation module 15 is used for obtaining the driver attention point detection information according to the preset processing logic: the video information collected by the camera is extracted from the storage device and framed, the picture information of the current frame is extracted, a deep neural network model is constructed from the turning angle and attention point characteristic information contained in the picture information, and the deep neural network model is trained with the image analysis samples. Video data such as the driver's posture is captured by the vehicle-mounted camera and processed with computer vision algorithms, including face detection and optical flow detection, using tools such as a deep learning neural network; the video processing module acquires the video data and generates single-frame original-size 720p high-definition pictures for detecting the driver's attention point, and each original-size picture is compressed into a JPG picture and stored in a dedicated directory set up for a specific measurement instruction from the superior device. When the superior device finishes a measurement, all pictures in the directory are packed, compressed and transferred to the specified directory of the SD card for later use in detecting the driver's attention point. If a face is detected within the camera's field of view, the deep learning neural network and related logic calibrate the face position and judge its orientation, namely the driver's attention point; the model calculation module 15 is connected with the image information module 14. The result storage module 16 is used for obtaining the driver posture correct/incorrect judgment result from the driver attention point detection information and the deep learning result set, and generating and storing a detection report according to that result; the result storage module 16 is connected with the model calculation module 15.
Referring to fig. 8, a detailed module diagram of the system initialization module 11 in fig. 7 in an embodiment is shown, which specifically includes: a hardware starting module 111, an installation detection module 112, an equipment parameter detection module 113 and an installation module 114. The hardware starting module 111 is used for starting the hardware equipment: the user powers on the system hardware through the main control interface and clicks the driver sight line detection system icon on the main interface of the mobile terminal to start it. The system hardware mainly comprises several cameras installed opposite the driver's seat in the cab, detection equipment installed on the driver's seat, and a client terminal; the driving test system equipment provides no operation interface when deployed. During installation, the system software places a self-start configuration under the Autostart directory of the Ubuntu system.
Thus when the hardware is powered on and the Ubuntu system boots, the startup script is executed and the driving test system program starts automatically. The installation detection module 112 is used for detecting the hardware equipment and judging whether the system is installed on it: the file paths in the system are traversed to search for an installation path of the system, and whether the driver sight line monitoring system is installed on the hardware equipment is determined from the installation path; the installation detection module 112 is connected with the hardware starting module 111. The equipment parameter detection module 113 is used for initializing the communication parameter information if the system is installed: the IP addresses of the camera, the detection device, the driving test system and the superior equipment are configured; the types of communication information, the information IDs, the communication protocols and the transmission data are initialized; and the camera and the monitoring sensor are initialized through the device list of the hardware equipment; the equipment parameter detection module 113 is connected with the hardware starting module 111. The installation module 114 is used for receiving and installing the latest version of the system installation file sent by the remote maintenance center when the driver sight line monitoring system is not installed on the hardware equipment or the mobile terminal; the installation module 114 is connected with the installation detection module 112.
Referring to fig. 9, a schematic block diagram of the communication module 12 in fig. 7 in an embodiment is shown, which specifically includes: a server request module 121, an instruction receiving and judging module 122, a connection judging module 123 and a connection continuation request module 124. The server request module 121 is configured to send connection request information to the server. The instruction receiving and judging module 122 is configured to judge whether an observation instruction sent by the server is received; the instruction receiving and judging module 122 is connected with the server request module 121. The connection judging module 123 is configured to judge that a connection has been established with the server when an observation instruction sent by the server is received; the connection judging module 123 is connected with the instruction receiving and judging module 122. The connection continuation request module 124 is configured to keep sending connection request information until a connection is established with the server when no observation instruction is received from the server; the connection continuation request module 124 is connected with the instruction receiving and judging module 122.
Please refer to fig. 10, which is a schematic diagram of an embodiment of the automatic detection module 13 in fig. 7, specifically including: a maintenance connection module 131, a system version module 132, a version judging module 133, a new version module 134, a self-upgrade module 135, a device detection module 136, and a follow-up action trigger module 137. The maintenance connection module 131 is used for establishing a connection with the maintenance background and providing an HTTP service: the JSON response messages for the examination start instruction and the measurement start instruction are parsed and packaged, operation instructions from the superior equipment are processed, JSON strings are parsed to extract the service type, the reporting of algorithm processing results is completed, and operation requests such as upgrade and maintenance from the remote maintenance center are completed. The system version module 132 is used for acquiring the latest version information sent by the maintenance background, calling the corresponding sub-modules, and packaging the processing result into a JSON string to reply to the requesting end; the system version module 132 is connected with the maintenance connection module 131. The version judging module 133 is configured to judge from the latest version information whether the system needs to be upgraded; the version judging module 133 is connected with the system version module 132. The new version module 134 is configured to determine that the system is the latest version when the system version is already the latest; the new version module 134 is connected with the version judging module 133. The self-upgrade module 135 is configured to, when the system is not the latest version, perform a system upgrade according to the upgrade information sent by the maintenance background: the latest version system installation package or upgrade package sent by the online maintenance background is received, the installation data of the latest version system is obtained from the installation package or upgrade package, and the latest version is installed according to a path in the file system of the user terminal, deploying setup.py (the software installation auto-configuration script), driverEquipment.desktop (the software self-start configuration file) and Driversafe_start.sh (the software self-start script); the self-upgrade module 135 is connected with the version judging module 133. The device detection module 136 is used for detecting the storage hard disk and the camera and generating detection information: the storage hard disk, the camera and the sensor are connected to the expansion interface of the system, and the system can monitor information such as the usage state and capacity of the storage hard disk, for example an SD card or magnetic disk, acquire the driver posture correct/incorrect judgment result, and detect the states of the camera and the sensor. The follow-up action trigger module 137 is used for sending prompt information according to the detection information and receiving the system's driver posture correct/incorrect judgment result data; if that data shows that the whole system has passed detection, trigger information for starting data acquisition and data processing is sent to the other functional modules of the system, the software executable driverEquipment is started, and the system is entered; the follow-up action trigger module 137 is connected with the device detection module 136.
Please refer to fig. 11, which is a schematic block diagram of the image information module 14 in fig. 7 in an embodiment, specifically including: a video data receiving module 141, a single frame acquisition module 142, a turning angle and attention point feature vector extraction module 143, a vector splicing module 144, a model building module 145, a model increment module 146, a model training module 147, a single frame storage module 148 and a picture sample module 149. The video data receiving module 141 is used for the server to receive the prompt information and acquire the video data of the driver: the camera acquires video images of the driver during driving in real time through its photosensitive imaging element and sends the captured video data to the image processing logic over a data bus or by wireless transmission. The single frame acquisition module 142 is configured to extract the current single-frame picture information according to the video data and time: the driver sight line detection system processes the video information according to the preset image processing logic to obtain single-frame original-size pictures and compressed-format pictures; preferably, the video data acquired by the camera is framed according to the timestamp, the generated single-frame pictures are used by the image algorithm library for the corresponding analysis, and the pictures are compressed and stored for report generation; the single frame acquisition module 142 is connected with the video data receiving module 141. The turning angle and attention point feature vector extraction module 143 is configured to extract the turning angle and attention point feature data in the single-frame picture information, and to store the single-frame picture information obtained from the video data processing in the storage device for constructing samples and extracting image information in subsequent operations; the turning angle and attention point feature vector extraction module 143 is connected with the single frame acquisition module 142. The vector splicing module 144 is configured to splice the turning angle and attention point feature data into a turning angle and attention point feature vector, and to extract single-frame picture information from the image storage queue and aggregate it into image analysis samples used to train the deep neural network model; the vector splicing module 144 is connected with the turning angle and attention point feature vector extraction module 143. The model building module 145 is used for constructing the head posture sight model from the turning angle and attention point feature vector; the model building module 145 is connected with the vector splicing module 144. The model increment module 146 is used for extracting global variables of the driver's attention point in the picture and comparing them to obtain model increment information; the model increment module 146 is connected with the model building module 145. The model training module 147 is used for the head posture sight model to perform deep learning according to the model increment information and to update the head posture sight model; the model training module 147 is connected with the model increment module 146. The single frame storage module 148 is used for storing the single-frame picture information; the single frame storage module 148 is connected with the model training module 147. The picture sample module 149 is configured to aggregate the single-frame picture information into picture samples; the picture sample module 149 is connected with the single frame storage module 148.
Please refer to fig. 12, which is a block diagram of the model calculation module 15 in fig. 7 according to an embodiment, specifically comprising: a to-be-detected feature extraction module 151, a feature fusion module 152, a similarity comparison module 153 and a posture calculation module 154. The to-be-detected feature extraction module 151 is used for extracting the turning angle and attention point feature information in the current single-frame picture information: it extracts a local turning angle and attention point feature vector set from the processed head image data set, fuses them to obtain the head posture turning angle and attention point feature vector, preprocesses the sight line image and posture image of the head to be detected, extracts the local and global turning angle and attention point feature vectors, fuses the local turning angle and attention point feature vector to obtain the global turning angle and attention point feature vector, and preprocesses each head posture picture in the image analysis sample to obtain the to-be-detected image preprocessing information. The feature fusion module 152 is connected to the to-be-detected feature extraction module 151 and is configured to fuse the turning angle and attention point feature information to obtain the global turning angle and attention point feature information, and to obtain the sample global turning angle and attention point feature vector contained in a sample according to the to-be-detected image preprocessing information of the image analysis sample. The similarity comparison module 153 is configured to compare the global turning angle and attention point feature information with the turning angle and attention point feature vector contained in the picture sample to obtain eight action feature similarity data: left B-pillar, left rearview mirror, interior rearview mirror, downward glance at the instrument panel, right B-pillar, right rearview mirror, looking ahead, and looking down at the gear lever. The posture calculation module 154 is connected with the similarity comparison module 153 and is used for sorting all the similarity data to obtain the driver attention point detection information.
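The comparison-and-ranking step performed by modules 153 and 154 can be sketched as follows. The class names, the one-prototype-per-class layout, and the cosine-similarity measure are illustrative assumptions; the patent specifies eight similarity values and a sorting step but does not fix a particular similarity metric.

```python
import math

# The eight attention-point classes named in the embodiment.
ATTENTION_CLASSES = [
    "left_b_pillar", "left_mirror", "interior_mirror", "instrument_panel",
    "right_b_pillar", "right_mirror", "looking_ahead", "gear_lever",
]

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_attention_points(global_feature, class_prototypes):
    """Compare the fused global turning-angle/attention-point vector against one
    prototype vector per class (module 153), then sort the eight similarity
    values to produce the attention-point detection result (module 154)."""
    sims = {name: cosine_similarity(global_feature, class_prototypes[name])
            for name in ATTENTION_CLASSES}
    return sorted(sims.items(), key=lambda kv: kv[1], reverse=True)
```

The top-ranked entry of the returned list is taken as the driver's current attention point.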
The present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the driver head posture monitoring method provided by the present invention. Those skilled in the art will understand that all or part of the steps for implementing the above method embodiments may be performed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer-readable storage medium; when executed, the program performs the steps comprising the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
The invention further provides a deep-learning-based driver head posture monitoring device, comprising a processor and a memory. The memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the device executes the deep-learning-based driver head posture monitoring method provided by the invention. The memory may include a Random Access Memory (RAM) and may further include a non-volatile memory (e.g., at least one disk memory). The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In summary, the invention provides a method, a system, a medium and a device for monitoring the head posture of a driver based on deep learning, with the following beneficial effects. To realize whole-process electronic monitoring and judgment of the subject-three motor vehicle driving examination, a driving-examination vision tracking prototype extracts video data such as the driver's posture through a vehicle-mounted camera and applies computer vision algorithms, including face detection and optical flow detection, using tools such as deep learning neural networks. It inspects 15 frames per second to generate action data results; samples are used to train face tracking, and facial action features are identified to judge actions, completing detection of the driver's attention points and analysis of whether the body extends out of the vehicle. This improves the objectivity and accuracy of the subject-three examination and reduces labor cost. In the subject-three motor vehicle driving examination, the driving test system uses a camera to collect the driving video of an examinee and detects the examinee's face orientation to confirm the examinee's likely observation target; it detects whether an object extends out of the vehicle in the left front window area to confirm whether a body part of the examinee extends out of the vehicle; and it evaluates camera imaging quality to confirm whether other objects block the camera.
After completing detection of the driver's attention point, whether the body extends out of the window, whether the camera is shielded, and the like, the driving test system reports the relevant state to the superior device (the device that finally completes the compliance judgment of the driving test subject) according to the communication protocol agreed with it, so as to help the superior device complete the driving test judgment. In conclusion, the invention solves the technical problems in the prior art of high hardware cost, weak feature robustness, low information utilization and low accuracy of the driver posture correctness judgment: it takes head posture pictures obtained from the monitoring video as the sample library, requires no hand-designed features, has strong feature robustness and high actual detection accuracy, and thus has high commercial value and practicability.
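The summary states that the pipeline inspects 15 frames per second of video. One plausible way to realize this, sketched below, is an even-stride selection of frame indices within each second of footage; the patent does not specify the sampling scheme, so the stride policy is an assumption.

```python
def frames_to_inspect(camera_fps, inspect_fps=15):
    """Return the per-second frame indices to examine when the pipeline
    inspects `inspect_fps` frames out of every second of `camera_fps` video.
    Simple even-stride selection; the selection scheme is an assumption."""
    if inspect_fps >= camera_fps:
        # Camera is no faster than the inspection rate: examine every frame.
        return list(range(int(camera_fps)))
    stride = camera_fps / inspect_fps
    return [int(i * stride) for i in range(inspect_fps)]
```

For a common 30 fps vehicle-mounted camera this selects every second frame, halving the per-frame processing load while keeping the claimed 15 inspections per second.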

Claims (12)

1. A method for monitoring the head posture of a driver based on deep learning is characterized by comprising the following steps:
performing a hardware power-on operation through an interface, starting the deep learning-based driver head posture monitoring system, configuring communication parameter information, initializing an image information acquisition device, and presetting information processing logic;
establishing communication connection with a server side and receiving system version information;
detecting the system version, the storage device and the image information acquisition device, completing the detection and sending prompt information;
the server receives the prompt information, triggers a system to collect video data according to the prompt information, extracts single-frame picture information from the video data and stores the single-frame picture information as a picture sample, extracts turning angle and attention point characteristic information in the current single-frame picture information, constructs a head posture sight model according to the turning angle and the attention point characteristic information, and performs deep learning on the head posture sight model according to the picture sample to generate a deep learning result set;
processing according to the preset information processing logic to obtain the detection information of the attention point of the driver;
acquiring a driver posture correction and error judgment result according to the driver attention point detection information and the deep learning result set, and generating and storing a detection report according to the driver posture correction and error judgment result;
wherein the generating a deep learning result set comprises:
the server receives the prompt information and acquires the video data of the driver;
extracting the current single-frame picture information according to the video data and the time;
extracting the turning angle and the characteristic data of the attention point in the single-frame picture information;
splicing the turning angle and the attention point feature data to obtain a turning angle and an attention point feature vector;
constructing the head posture sight model according to the turning angle and the attention point feature vector;
extracting global variables of the attention points of the driver in the picture, and comparing the global variables with the picture sample to obtain model increment information;
the head posture sight model carries out deep learning according to the model increment information and updates the head posture sight model;
saving the single-frame picture information;
and aggregating the single-frame picture information to obtain the picture sample.
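The splicing and model-increment steps of claim 1 can be sketched as follows. The three-angle (yaw, pitch, roll) layout and the Euclidean-distance novelty test are illustrative assumptions; the claim only states that the turning angle and attention-point feature data are spliced into one vector and that global variables are compared with the picture sample to obtain increment information.

```python
import math

def splice_features(turning_angle, attention_features):
    """Splice the head turning angle (assumed yaw/pitch/roll triple) and the
    attention-point feature data extracted from one frame into a single
    feature vector, as in the claimed splicing step."""
    return list(turning_angle) + list(attention_features)

def model_increment(new_vector, picture_sample, threshold=0.1):
    """Treat the new frame as model-increment information when its feature
    vector is farther than `threshold` (Euclidean distance) from every stored
    sample vector; the distance test is an assumed concretization of the
    comparison step in the claim."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return all(dist(new_vector, s) > threshold for s in picture_sample)
```

Frames flagged as increments would then be fed to the deep-learning update and aggregated into the picture sample.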
2. The method according to claim 1, wherein starting the deep learning-based driver head posture monitoring system through interface operation, configuring the communication parameter information, initializing the image information acquisition device, and presetting the information processing logic comprises the following steps:
starting hardware equipment;
detecting the hardware equipment, and judging whether the system is installed on the hardware equipment or not;
if so, initializing the communication parameter information, the camera and the detection device;
and if not, installing the system on the hardware equipment.
3. The method of claim 1, wherein establishing a communication connection with a server and receiving system version information comprises:
sending connection request information to the server;
judging whether an observation instruction sent by a server side is received;
if so, judging that the connection is established with the server;
and if not, continuously sending the connection request information until the connection with the server side is established.
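The retry loop of claim 3 can be sketched as below. `send_request` stands in for the actual network exchange and returns True once the server's observation instruction arrives; the retry delay and optional attempt cap are practical additions, not claimed steps (claim 3 retries indefinitely).

```python
import time

def connect_to_server(send_request, max_attempts=None, retry_delay=1.0):
    """Keep sending the connection request until an observation instruction is
    received from the server, as in claim 3. `send_request` is a caller-
    supplied callable (an assumption standing in for the real protocol)."""
    attempts = 0
    while True:
        attempts += 1
        if send_request():
            return True   # observation instruction received: connection established
        if max_attempts is not None and attempts >= max_attempts:
            return False  # safety cap; the claim itself retries without limit
        time.sleep(retry_delay)
```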
4. The method of claim 1, wherein detecting the system version, the storage device, and the image information acquisition device, completing the detection, and sending a prompt message comprises:
establishing connection with a maintenance background;
obtaining the latest version information sent by a maintenance background;
judging whether the system is upgraded or not according to the latest version information;
if yes, judging that the system is the latest version;
if not, upgrading the system according to the upgrading information sent by the maintenance background;
detecting a storage hard disk and a camera and generating detection information;
and sending out prompt information according to the detection information.
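The upgrade decision in claim 4 can be sketched as a comparison of the current version against the latest-version information from the maintenance background. Dotted-numeric version strings are an assumption; the patent does not specify a version format.

```python
def needs_upgrade(current_version, latest_version):
    """Decide whether a self-upgrade is required by comparing dotted version
    strings numerically (so '1.9.2' < '1.10.0'); dotted-numeric versioning is
    an assumed format, not stated in the patent."""
    def parts(v):
        return [int(p) for p in v.split(".")]
    return parts(current_version) < parts(latest_version)
```

If the comparison reports an older version, the system applies the upgrade information from the maintenance background before proceeding to the hard-disk and camera checks.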
5. The method according to claim 1, wherein processing according to the preset information processing logic to obtain the driver attention point detection information comprises:
extracting the turning angle and the characteristic information of the attention point in the current single-frame picture information;
fusing the turning angle and the attention point characteristic information to obtain a global turning angle and attention point characteristic information;
comparing the global turning angle and attention point feature information with the turning angle and attention point feature vector contained in the picture sample to obtain eight action feature similarity data: left B-pillar, left rearview mirror, interior rearview mirror, downward glance at the instrument panel, right B-pillar, right rearview mirror, looking ahead, and looking down at the gear lever;
and sequencing all the similarity data to obtain the driver attention point detection information.
6. A driver head pose monitoring system based on deep learning, comprising: the system comprises a system initial module, a communication module, an automatic detection module, an image information module, a model calculation module and a result storage module;
the system initialization module is used for starting the system through interface operation, configuring communication parameter information, initializing image information acquisition equipment and presetting information processing logic;
the communication module is used for establishing communication connection with the server side and receiving system version information;
the automatic detection module is used for detecting the system version, the storage device and the image information acquisition device, completing the detection and sending prompt information;
the image information module is used for receiving the prompt information by the server side, triggering a system to acquire video data according to the prompt information, extracting single-frame picture information from the video data and storing the single-frame picture information as a picture sample, extracting head turning angle and attention point characteristic information in the current single-frame picture information, constructing a head posture sight model according to the head turning angle and the attention point characteristic information, performing deep learning on the head posture sight model according to the picture sample, and generating a deep learning result set;
the model calculation module is used for processing according to the preset information processing logic to obtain the detection information of the attention point of the driver;
the result storage module is used for acquiring a driver posture positive and negative judgment result according to the driver attention point detection information and the deep learning result set, and generating and storing a detection report according to the driver posture positive and negative judgment result;
the image information module includes: the device comprises a video data receiving module, a single-frame obtaining module, a turning angle and attention point feature vector extracting module, a vector splicing module, a model constructing module, a model increment module, a model training module, a single-frame storing module and a picture sample module;
the video data receiving module is used for receiving the prompt information by the server side and acquiring the video data of the driver;
the single-frame acquisition module is used for extracting the current single-frame picture information according to the video data and the time;
the turning angle and attention point feature vector extraction module is used for extracting turning angle and attention point feature data in the single-frame picture information;
the vector splicing module is used for splicing the turning angle and the attention point feature data to obtain a turning angle and an attention point feature vector;
the model building module is used for building the head posture sight model according to the turning angle and the attention point feature vector;
the model increment module is used for extracting global variables of the attention points of the driver in the picture and comparing the global variables with the picture sample to obtain the model increment information;
the model training module is used for the head posture sight model to carry out deep learning according to the model increment information and update the head posture sight model;
the single-frame storage module is used for storing the single-frame picture information;
the picture sample module is used for gathering the single-frame picture information to obtain the picture sample.
7. The system of claim 6, wherein the system initialization module comprises: the device comprises a hardware starting module, an installation detection module, an equipment parameter detection module and an installation module;
the hardware starting module is used for starting hardware equipment;
the installation detection module is used for detecting the hardware equipment and judging whether the system is installed on the hardware equipment or not;
the equipment parameter detection module is used for initializing the communication parameter information, the camera and the detection device when the hardware equipment is provided with the system;
the installation module is used for installing the system on the hardware equipment when the system is not installed on the hardware equipment.
8. The system of claim 6, wherein the communication module comprises: the system comprises a server request module, an instruction receiving and judging module, a connection judging module and a connection continuous request module;
the server side request module is used for sending connection request information to the server side;
the instruction receiving and judging module is used for judging whether an observation instruction sent by the server side is received or not;
the connection judging module is used for judging that the connection is established with the server when an observation instruction sent by the server is received;
and the connection continuous request module is used for continuously sending the connection request information until the connection with the server side is established when the observation instruction sent by the server side is not received.
9. The system of claim 6, wherein the automatic detection module comprises: the system comprises a maintenance connection module, a system version module, a version judgment module, a new version module, a self-upgrade module, an equipment detection module and a follow-up action trigger module;
the maintenance connection module is used for establishing connection with the maintenance background;
the system version module is used for acquiring the latest version information sent by the maintenance background;
the version judging module is used for judging whether the system is upgraded or not according to the latest version information;
the new version module is used for judging that the system is the latest version when the system version is the latest version;
the self-upgrading module is used for upgrading the system according to the upgrading information sent by the maintenance background when the system version is not the latest version;
the equipment detection module is used for detecting the storage hard disk and the camera and generating detection information;
and the follow-up action triggering module is used for sending out prompt information according to the detection information.
10. The system of claim 6, wherein the model computation module comprises: the system comprises a to-be-detected feature extraction module, a feature fusion module, a similarity comparison module and a posture calculation module;
the to-be-detected feature extraction module is used for extracting the turning angle and the feature information of the attention point in the current single-frame picture information;
the feature fusion module is used for fusing the turning angle and the attention point feature information to obtain a global turning angle and attention point feature information;
the similarity comparison module is used for comparing the global turning angle and attention point feature information with the turning angle and attention point feature vector contained in the picture sample to obtain eight action feature similarity data: left B-pillar, left rearview mirror, interior rearview mirror, downward glance at the instrument panel, right B-pillar, right rearview mirror, looking ahead, and looking down at the gear lever;
and the posture calculation module is used for sorting all the similarity data to obtain the driver attention point detection information.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the deep learning-based driver head pose monitoring method of any one of claims 1 to 5.
12. A driver head attitude monitoring device based on deep learning, comprising: a processor and a memory;
the memory is used for storing a computer program, and the processor is used for executing the computer program stored by the memory to cause the deep learning based driver head posture monitoring device to execute the deep learning based driver head posture monitoring method according to any one of claims 1 to 5.
CN201710716168.6A 2017-08-18 2017-08-18 Driver head posture monitoring method, system, medium and equipment based on deep learning Active CN109426757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710716168.6A CN109426757B (en) 2017-08-18 2017-08-18 Driver head posture monitoring method, system, medium and equipment based on deep learning


Publications (2)

Publication Number Publication Date
CN109426757A CN109426757A (en) 2019-03-05
CN109426757B true CN109426757B (en) 2021-02-12

Family

ID=65497692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710716168.6A Active CN109426757B (en) 2017-08-18 2017-08-18 Driver head posture monitoring method, system, medium and equipment based on deep learning

Country Status (1)

Country Link
CN (1) CN109426757B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329566A (en) * 2020-10-26 2021-02-05 易显智能科技有限责任公司 Visual perception system for accurately perceiving head movements of motor vehicle driver
CN114268621B (en) * 2021-12-21 2024-04-19 东方数科(北京)信息技术有限公司 Digital instrument meter reading method and device based on deep learning

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997015033A2 (en) * 1995-10-21 1997-04-24 Paetz Joachim Method and device for preventing drivers, etc. from falling asleep and for monitoring people's reactions
CN1988703A (en) * 2006-12-01 2007-06-27 深圳市飞天网景通讯有限公司 Method for realizing information interactive operation based on shootable mobile terminal
CN101470951B (en) * 2008-01-08 2010-09-08 徐建荣 Vehicle security drive monitoring system
CN101635783A (en) * 2008-07-21 2010-01-27 青岛海信电子产业控股股份有限公司 Upgrading method of TV software
CN102510480B (en) * 2011-11-04 2014-02-05 大连海事大学 Automatic calibrating and tracking system of driver sight line
CN103871200B (en) * 2012-12-14 2016-06-08 深圳市赛格导航科技股份有限公司 Safety prompting system and method for car steering
US9405982B2 (en) * 2013-01-18 2016-08-02 GM Global Technology Operations LLC Driver gaze detection system
CN105700447B (en) * 2014-11-28 2019-07-30 奇点新源国际技术开发(北京)有限公司 A kind of monitoring method and monitor supervision platform of electric car
CN104574817A (en) * 2014-12-25 2015-04-29 清华大学苏州汽车研究院(吴江) Machine vision-based fatigue driving pre-warning system suitable for smart phone
CN106332151A (en) * 2015-06-29 2017-01-11 中兴通讯股份有限公司 External field maintenance method, device and system for radio remote unit, and terminal device
US9725036B1 (en) * 2016-06-28 2017-08-08 Toyota Motor Engineering & Manufacturing North America, Inc. Wake-up alerts for sleeping vehicle occupants
CN106201627B (en) * 2016-07-21 2019-05-24 南京大全自动化科技有限公司 A kind of new energy photovoltaic case becomes TT&C system and its teleprogram method for edition management


Similar Documents

Publication Publication Date Title
CN109409172B (en) Driver sight line detection method, system, medium, and apparatus
CN108229333B (en) Method for identifying events in motion video
US10803324B1 (en) Adaptive, self-evolving learning and testing platform for self-driving and real-time map construction
CN111604888B (en) Inspection robot control method, inspection system, storage medium and electronic device
CN111372037A (en) Target snapshot system and method
CN109616106A (en) Vehicle-mounted control screen voice recognition process testing method, electronic equipment and system
CN109426757B (en) Driver head posture monitoring method, system, medium and equipment based on deep learning
KR101697060B1 (en) Method of sening event and apparatus performing the same
CN110737798A (en) Indoor inspection method and related product
CN109409173B (en) Driver state monitoring method, system, medium and equipment based on deep learning
CN110225236B (en) Method and device for configuring parameters for video monitoring system and video monitoring system
KR20210075533A (en) Vision-based Rainfall Information System and Methodology Using Deep Learning
CN109460077B (en) Automatic tracking method, automatic tracking equipment and automatic tracking system
CN113591885A (en) Target detection model training method, device and computer storage medium
US20190143926A1 (en) Vehicle management system, inspection information transmission system, information management system, vehicle management program, inspection information transmission program, and information management program
CN111626078A (en) Method and device for identifying lane line
CN108363985B (en) Target object perception system testing method and device and computer readable storage medium
JPWO2020003764A1 (en) Image processors, mobile devices, and methods, and programs
KR102519715B1 (en) Road information providing system and method
KR102210571B1 (en) Bridge and tunnel safety diagnosis remote monitoring alarm method using GPS coordinates and mobile communication system
CN112601021B (en) Method and system for processing monitoring video of network camera
KR102121423B1 (en) Server and method for recognizing road line using camera image
CN113840137A (en) Verification method and system for mobile detection sensitivity of network camera
US11157750B2 (en) Captured image check system and captured image check method
CN112365742A (en) LDW function test method, device, test equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant