CN113239754A - Dangerous driving behavior detection and positioning method and system applied to Internet of vehicles - Google Patents


Info

Publication number
CN113239754A
Authority
CN
China
Prior art keywords
module
alarm
behavior
driver
dangerous driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110463260.2A
Other languages
Chinese (zh)
Inventor
房桦
冯斌
任蒙恩
周中健
孙晓松
杨统禹
王琳琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taishan University
Original Assignee
Taishan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taishan University filed Critical Taishan University
Priority to CN202110463260.2A priority Critical patent/CN113239754A/en
Publication of CN113239754A publication Critical patent/CN113239754A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a dangerous driving behavior detection and positioning system applied to the Internet of vehicles, comprising a client and a server side in communication connection with the client. The client comprises a first control unit connected with a video acquisition module, an image preprocessing module, a dangerous driving behavior detection and analysis module, an audible and visual alarm module, and a vehicle positioning module. The server side comprises a second control unit connected with an alarm information summarizing and classifying module, a driving management module, a positioning recording module, a remote monitoring module, and an automatic report generation module. Dangerous driving behaviors are monitored in real time and matched against preset alarm types, and an alarm is given when the matching succeeds, so that dangerous driving behaviors of drivers are prevented and reduced. The system supports customized voice prompts and has the advantages of strong terminal information interactivity and timely positioning sharing. The invention also discloses a dangerous driving behavior detection and positioning method applied to the Internet of vehicles.

Description

Dangerous driving behavior detection and positioning method and system applied to Internet of vehicles
Technical Field
The invention relates to the technical field of artificial intelligent video monitoring and detection and identification, in particular to a dangerous driving behavior detection and positioning method and system applied to the Internet of vehicles.
Background
With the development of China's economy, private vehicles have become nearly universal, approaching one vehicle per family, and the number of motor vehicles on urban roads and expressways is increasing sharply. Worldwide, traffic accidents and casualties caused by dangerous driving behaviors number in the tens of thousands, making driving safety and traffic safety issues that cannot be ignored.
Accidents caused by dangerous driving behaviors have become one of the main hidden dangers of current traffic safety. When a driver drives irregularly, his or her perception, judgment, and vehicle-control abilities degrade to varying degrees. Examples include playing with a mobile phone, smoking, drinking beverages, or eating while driving; accelerating to beat a changing signal light; following too closely; failing to check the mirrors; throwing garbage out of the window; chatting and becoming distracted; driving with one hand or even taking both hands off the wheel; and hanging numerous ornaments or placing toys in the cabin. Among the many causes, accidents due to the driver's own dangerous driving are the most widespread, including but not limited to distracted driving, fatigued driving, drunk driving, failure to yield as prescribed, aggressive overtaking, and street racing.
In real life, taking effective measures to prevent and reduce dangerous driving behaviors of drivers is of great significance for protecting people's lives and property and reducing the occurrence of traffic accidents.
In order to reduce the occurrence of such traffic problems and ease the burden on traffic police, intelligent detection systems for dangerous driving behaviors of motor vehicles have emerged.
At present, organizations at home and abroad that have developed related dangerous driving behavior detection systems include:
(1) Attention Assist, launched by Mercedes-Benz, is representative of the driver fatigue monitoring systems of German cars. It is an indirect monitoring approach: the driver's state is inferred from driving behavior and vehicle state parameters such as speed, engine speed, yaw rate, lateral acceleration, steering-wheel angular velocity and angular acceleration, together with post-processed parameters of each signal; these factors are comprehensively analyzed to produce a monitoring result of the driver's state. However, the system can only monitor driving behavior indirectly, lacks a corresponding information interaction function, and its processing lags;
(2) Driver Monitor, supplied by Denso and equipped on Toyota's Lexus and commercial vehicles, belongs to the direct monitoring methods. It uses a camera to acquire the driver's facial state and eye-movement signals and combines them with head position and movement information obtained by an infrared sensor to identify the driver's state. However, this monitoring method lacks a voice service function and has poor information interactivity;
(3) Volvo's Driver Alert Control (DAC) monitors the driver's distraction in addition to fatigue. The DAC hardware comprises a camera, various vehicle state sensors, a vehicle trajectory sensor, and a controller. The controller comprehensively analyzes the driver's head position and angle, eye movement, the vehicle's position relative to the lane, steering-wheel operation data, and so on to judge the current driving state, and compares it with the driver's normal driving state stored in a built-in recorder to determine whether the driver is fatigued or distracted. However, the system lacks terminal information interaction and is inconvenient for traffic police investigation and monitoring.
(4) On July 26, 2017, Didi formally released a "safe driving" system called "Didi Escort", which is currently enabled only on the driver app of Didi's designated-driver (chauffeur) service. A driver who exhibits dangerous driving behaviors during a trip receives a "safe driving reminder" after the trip ends, with the aim of changing the driver's habits. However, the system's timeliness is poor, and it serves only the company's internal business.
In summary, dangerous driving detection systems currently on the market have three main problems:
1. Missing voice service function: existing dangerous driving behavior detection systems offer only cold machine alarm sounds, and a driver may feel even more irritated after hearing such emotionless voice prompts;
2. Missing terminal information interaction: most systems on the market only raise alarms and cannot play a real preventive role. Much like the seat-belt warning chime in its early days, people simply chose not to buckle up as usual; only after related policies imposed penalties for not wearing a seat belt did drivers buckle up voluntarily. Judging fatigued and distracted driving is very difficult: after a traffic police officer pulls a driver over, the driver may become alert from the shock, making it hard to determine whether he or she was actually in a fatigued driving state;
3. Untimely positioning sharing: after a traffic accident, it takes time for a bystander or a person involved to call for help and reach a hospital, yet rescue often hinges on a few minutes or even seconds. Products currently on the market cannot support timely positioning and emergency calls.
Disclosure of Invention
The invention aims to provide a dangerous driving behavior detection and positioning method and system applied to the Internet of vehicles, which realize real-time monitoring of drivers' dangerous driving behaviors, provide a voice service function, and feature strong terminal information interactivity and timely positioning sharing.
In order to achieve the purpose, the invention adopts the following technical scheme:
a dangerous driving behavior detection and positioning method applied to Internet of vehicles comprises the following steps:
a1, acquiring real-time monitoring videos and images of a driver;
a2, recognizing an input image and carrying out image preprocessing to eliminate irrelevant information and extract and restore useful real information;
a3, performing feature extraction, segmentation and matching processing on the input image, and analyzing dangerous driving behaviors;
a4, extracting relevant dangerous driving behaviors, matching the dangerous driving behaviors with preset alarm types, and giving an alarm after the matching is successful, wherein the alarm information comprises information of the dangerous driving behaviors, vehicle positioning information, driver ID and the like.
Further setting the following steps: the step a2 specifically includes the following steps:
b1, performing gray processing on the input color picture;
b2, extracting a histogram of the grayed picture;
b3, histogram equalization;
b4, histogram equalization transformation;
b5, calculating a histogram;
b6, performing median filtering processing;
b7, geometric enhancement treatment;
b8, color enhancement processing;
b9, fuzzy processing;
b10, random erasure enhancement processing;
b11, threshold segmentation;
b12, gray scale transformation.
Further setting the following steps: wherein the dangerous driving behavior comprises at least one of the following behaviors: fatigue driving behavior, calling behavior, smoking behavior, distracted driving behavior, off-duty behavior, and behavior of long-time off-road line of sight.
Further setting the following steps: the method for detecting and judging the behavior that the sight line is separated from the road surface for a long time comprises the following steps:
c1, performing horizontal gray scale integral projection on the binarized face image to obtain upper and lower eyelid coordinates, and performing face positioning;
c2, firstly determining the position of the lips in the face image by using a color analysis method;
c3, performing edge detection and positioning of eyes according to the human face skin color area;
c4, determining the position of the pupil according to the fact that the pupil image is darker than the surrounding pixels;
C5, determining the gaze direction according to the relative position of the pupil and the eye corner; if the line of sight of the human eye deviates from straight ahead, judging that the sight has left the road surface for a long time.
Further setting the following steps: the fatigue driving behavior comprises eye closure and yawning, and the method for judging fatigue driving by eye closure comprises the following steps:
d1, calculating the distance between the upper eyelid and the lower eyelid of the driver to identify the open-closed state of the eyes;
d2, calculating the eye PERCLOS value at each moment within 1 minute, wherein eye PERCLOS = (number of closed-eye frames / total number of frames in the specified period) × 100%;
d3, comparing the eye PERCLOS value with the preset eye fatigue threshold; if the PERCLOS value is greater than or equal to the threshold, judging that the driver is in a fatigued driving state and giving a fatigue driving alarm;
the method for judging fatigue driving by yawning comprises the following steps:
e1, calculating the opening distance of the mouth of the driver to identify the open-close state of the mouth;
e2, calculating the mouth PEROPEN value at each moment within 1 minute, wherein mouth PEROPEN = (number of open-mouth frames / total number of frames in the specified period) × 100%;
e3, comparing the mouth PEROPEN value with the preset mouth fatigue threshold; if the PEROPEN value is greater than or equal to the threshold, judging that the driver is in a fatigued driving state and giving a fatigue driving alarm.
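The PERCLOS/PEROPEN check of steps D1-D3 and E1-E3 can be sketched as follows. The per-frame open/closed flags are assumed to come from the eye and mouth measurements of steps D1/E1, and the 40%/60% thresholds are illustrative assumptions — the patent leaves the thresholds as presets:

```python
def percentage(flags):
    """Percentage of frames in the window for which the flag is set
    (PERCLOS for closed-eye flags, PEROPEN for open-mouth flags)."""
    if not flags:
        return 0.0
    return 100.0 * sum(flags) / len(flags)

def fatigue_alarm(eye_closed_flags, mouth_open_flags,
                  eye_threshold=40.0, mouth_threshold=60.0):
    """Give a fatigue driving alarm if either the eye PERCLOS or the
    mouth PEROPEN value reaches its preset threshold (steps D3 and E3)."""
    return (percentage(eye_closed_flags) >= eye_threshold or
            percentage(mouth_open_flags) >= mouth_threshold)
```

For example, eyes closed in 5 of the last 10 frames gives PERCLOS = 50%, which exceeds the assumed 40% threshold and triggers the alarm.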
Further setting the following steps: the detection method for the telephone calling behavior, the smoking behavior and the distracted driving behavior comprises the following steps:
f1, detecting accessories in the image;
f2, judging whether the alarm limit is reached; if the alarm limit is reached, giving a bad-behavior alarm.
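The alarm-limit test of step F2 can be sketched as a windowed count over per-frame detection flags from step F1. The window size and limit are illustrative assumptions, not values fixed by the patent:

```python
def bad_behavior_alarm(detected_flags, window=60, limit=20):
    """Step F2 sketch: give a bad-behavior alarm once the accessory
    (phone, cigarette, food, ...) has been detected in at least `limit`
    of the most recent `window` frames."""
    recent = detected_flags[-window:]
    return sum(recent) >= limit
```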
The invention also discloses a dangerous driving behavior detection and positioning system applied to the Internet of vehicles, which comprises a client (driver real-time monitoring equipment) and a server (vehicle and driver management platform) in communication connection with the client, wherein the client comprises a first control unit which is connected with:
the video acquisition module: the video acquisition module is over against a driver so as to acquire real-time video and image data of the driver;
an image preprocessing module: recognizing an input image, and preprocessing the input image before feature extraction, segmentation and matching;
dangerous driving behavior detection analysis module: performing feature extraction, segmentation and matching processing on an input image, performing face and eye positioning, analyzing dangerous driving behaviors, and giving an alarm after matching with a preset alarm type is successful;
the vehicle positioning module is used for positioning the driving process of the vehicle in real time;
the audible and visual alarm module: a voice packet recorded by a family member can be loaded for reminding and alarming;
the server end comprises a second control unit, and the second control unit is connected with:
alarm information gathers classification module: summarizing, screening and discriminating alarm information to perform classified storage, namely alarm type data storage and audio-visual data storage;
a driving management module: carrying out driver ID management and driving work management to realize driving work assignment and driving route planning;
a positioning recording module: recording and matching the track, positioning the vehicle in real time and assisting rescue work management;
the remote monitoring module: real-time picture monitoring, history record playback and remote voice communication;
and an automatic generation report module.
Further setting the following steps: the dangerous driving behavior comprises at least one of the following behaviors: fatigue driving behavior, calling behavior, smoking behavior, distracted driving behavior, off-duty behavior and behavior of long-time visual line separation from the road surface;
the preset alarm types comprise driver off-seat alarm, sight line deviation alarm, driver ID alarm, fatigue driving alarm and bad behavior alarm.
Further setting the following steps: the dangerous driving behavior detection and analysis module further comprises:
a face positioning module: carrying out horizontal gray scale integral projection on the binarized face image to obtain upper and lower eyelid coordinates for face recognition and identity matching with a driver ID;
the human eye positioning module: the eyes can be positioned and the sight line direction can be determined;
a fatigue detection module: calculating the proportion of the duration of eye closure or the proportion of the duration of mouth opening to judge the fatigue driving behavior;
accessory detection module: can detect objects such as cigarettes, mobile phones, snacks, and seat belts;
and a voice detection module: driver chatting and quarreling can be detected and the audio sent to the server side.
Further setting the following steps: the video acquisition module adopts a raspberry group camera, and can perform shielding and deviation self-checking.
In conclusion, the beneficial technical effects of the invention are as follows:
(1) Dangerous driving behaviors of the driver can be monitored in real time and matched against preset alarm types; an alarm is given when the matching succeeds, and information such as dangerous driving behavior information, vehicle positioning information, and driver ID is sent to the server side through the Internet of vehicles, thereby preventing and reducing dangerous driving behaviors of drivers.
(2) The driver's family members can record voice prompts through the audible and visual alarm module for real-time storage, replacing machine alarm sounds with user-defined reminder phrases and avoiding the irritation caused by mechanical electronic sounds.
(3) The client and the server side communicate data and store detection results for a period of time, providing strong terminal information interaction: family members and even traffic police can check or call up the driver's driving state in real time. When traffic police on patrol pull a driver over, they can judge whether dangerous driving behaviors such as fatigued driving occurred from the most recent detection results stored on the server side and impose reasonable penalties based on the monitoring information. Meanwhile, family members can check the driving state displayed on the server side and remind the driver, playing a preventive role and reducing traffic accidents caused by dangerous driving behaviors.
(4) The client can share its current position with the server side in real time. When the client is abnormally impacted and damaged to a certain extent or destroyed, it can automatically place an emergency call and automatically send a short message to contact family members, so that the accident is discovered and treated early, seizing the critical moments for rescue.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic diagram of a work flow of a dangerous driving behavior detection and positioning method in embodiment 1;
FIG. 2 is a schematic view of the workflow of image preprocessing in example 1;
FIG. 3 is a schematic view of a flow of detection and determination of a long-time off-road behavior of a line of sight in embodiment 1;
fig. 4 is a schematic flowchart of the eye closure determination fatigue driving in embodiment 1;
FIG. 5 is a schematic flowchart of the fatigue driving judgment by yawning in embodiment 1;
fig. 6 is a schematic view of the detection and determination flow of the telephone call behavior, the smoking behavior, and the distracted driving behavior in embodiment 1;
fig. 7 is a block diagram of the hardware configuration of the dangerous driving behavior detection positioning system in embodiment 2;
fig. 8 is a flow chart of dangerous driving behavior detection data in the working process of the system of embodiment 2.
Reference numerals: 100. a client; 110. a first control unit; 120. a video acquisition module; 130. an image preprocessing module; 140. a dangerous driving behavior detection and analysis module; 141. a face positioning module; 142. a human eye positioning module; 143. a fatigue detection module; 144. an accessory detection module; 145. a voice detection module; 150. a sound and light alarm module; 160. a vehicle positioning module; 200. a server side; 210. a second control unit; 220. the alarm information summarizing and classifying module; 230. a driving management module; 240. a positioning recording module; 250. a remote monitoring module; 260. and a report module is automatically generated.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that the terms "mounted," "connected," and "connected" are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected unless otherwise explicitly stated or limited; can be mechanically or electrically connected; either directly or indirectly through intervening media, or both elements may be interconnected. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The technical terms referred to in the present document are first briefly described below:
PERCLOS: PERCLOS (percentage of eye closure time) is the percentage of a given time period during which the eyes are closed.
PEROPEN: PEROPEN (percentage of mouth opening time) is the percentage of a given time period during which the mouth is open.
Example 1
Referring to fig. 1, the dangerous driving behavior detection and positioning method applied to the Internet of vehicles disclosed by the invention comprises the following steps:
a1, acquiring real-time monitoring videos and images of a driver;
a2, recognizing an input image and carrying out image preprocessing to eliminate irrelevant information and extract and restore useful real information;
a3, performing feature extraction, segmentation and matching processing on the input image, and analyzing dangerous driving behaviors;
a4, extracting relevant dangerous driving behaviors, matching the dangerous driving behaviors with preset alarm types, and giving an alarm after successful matching, wherein the alarm information comprises information of the dangerous driving behaviors, vehicle positioning information, a driver ID and the like.
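The matching of step A4 can be sketched as assembling an alarm record only for behaviors that appear among the preset alarm types. The type identifiers and record fields below are hypothetical; the patent names the alarm categories but does not fix their encodings:

```python
# Hypothetical identifiers for the preset alarm types listed in the claims
# (off-seat, gaze off road, driver ID, fatigue, bad behavior).
PRESET_ALARM_TYPES = {"off_seat", "gaze_off_road", "id_mismatch",
                      "fatigue", "bad_behavior"}

def make_alarm(behavior, driver_id, position):
    """Step A4 sketch: match a detected behavior against the preset alarm
    types and, on success, assemble the alarm information (behavior,
    vehicle positioning, driver ID) to be sent over the Internet of
    vehicles; return None if no preset type matches."""
    if behavior not in PRESET_ALARM_TYPES:
        return None
    return {"type": behavior, "driver_id": driver_id, "position": position}
```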
Referring to fig. 2, step a2 specifically includes the following steps:
B1, performing graying processing on the input color picture: in the RGB color space model, if the red, green, and blue components are equal, the color is a gray level and the shared component value is the gray value, so each pixel of the gray-scale image can be stored in one byte (range 0-255) in the computer. After graying, the obtained gray-scale image is binarized to increase the speed of subsequent processing.
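The graying and binarization of step B1 can be sketched in plain Python. The BT.601 luma weights and the fixed binarization threshold are illustrative assumptions (the patent does not specify a weighting, and step B11 later replaces the fixed threshold with Otsu's method):

```python
def to_gray(pixel):
    """Convert one RGB pixel to a 0-255 gray value using ITU-R BT.601
    luma weights (an assumption; an equal-weight average of the three
    channels would also fit the equal-component remark above)."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def binarize(gray_image, threshold=128):
    """Binarize a grayed image with a fixed, illustrative threshold."""
    return [[255 if p >= threshold else 0 for p in row] for row in gray_image]
```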
B2, extracting the histogram of the grayed picture: the histogram is a frequency function of gray levels, recording, for each gray level from low to high, the frequency or number of pixels at that level. With the gray level on the abscissa and the pixel frequency or pixel count on the ordinate, this distribution graph is the histogram, which describes the image's gray-level distribution.
B3, histogram equalization: the brightness values of a gray-scale image range from 0 to 255, but the histogram often shows the actual brightness concentrated in the middle of that range; histogram equalization is a method for widening it. The principle is to map one distribution (the input brightness distribution) to another (a wider, ideally uniform brightness distribution), that is, to spread the y-axis values of the original distribution as evenly as possible in the new distribution; in other words, the mapping function should approximate the cumulative distribution. Concretely, scanning brightness values from low to high, if the number of pixels at a low value (say 1) falls below a threshold, those pixels can simply be set to 0; likewise, scanning from high to low, if the number of pixels at a high value (say 254) falls below the threshold, those pixels can be set to 255. The region between these two thresholds, where the pixel counts are just above them, is taken as the effective area and stretched to the full 0-255 range, achieving the equalization effect and expanding the dynamic range of the image.
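The cumulative-distribution mapping described above can be sketched as a minimal pure-Python equalization for a 0-255 gray image (a standard formulation, offered as an illustration rather than the patent's exact procedure):

```python
def equalize(gray, levels=256):
    """Map gray levels through the normalized cumulative histogram,
    stretching the occupied brightness range toward 0..levels-1."""
    flat = [p for row in gray for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative histogram (the mapping function approximates this CDF).
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in gray]
```

On an image whose pixels occupy only levels 50 and 51, the mapping spreads them to 0 and 255, expanding the dynamic range as the text describes.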
B4, histogram equalization transformation.
B5, histogram calculation.
B6, median filtering: the basic principle of median filtering is to replace the value of a point in a digital image or digital sequence with the median of the values in a neighborhood of that point, so that the pixel takes a value close to its true neighbors and isolated noise points are eliminated. Gray-scale images inevitably contain noise points, which strongly affect image quality. As a means of smoothing and denoising images, median filtering can remove isolated noise points while preserving the edge characteristics of the image, and is well suited to environments with good lighting conditions.
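A minimal 3×3 median filter matching the description can be sketched as follows; clamping coordinates at the borders is an illustrative choice (replication padding), not specified by the patent:

```python
from statistics import median

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighborhood;
    border pixels are handled by clamping coordinates (edge replication)."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = int(median(window))
    return out
```

An isolated bright noise pixel in a flat region is removed, since the median of its neighborhood ignores the single outlier.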
B7, geometric enhancement processing: the generalization ability of the model can be enhanced by geometric transformations of the image such as translation, rotation and shearing.
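A minimal sketch of the geometric augmentations in B7, restricted to integer translations and quarter-turn rotations so it stays dependency-free; arbitrary-angle rotation and shearing would normally use an affine warp such as `cv2.warpAffine`:

```python
import numpy as np

def random_geometric_augment(img, rng):
    """Apply a random translation and a random 90-degree rotation."""
    dx, dy = rng.integers(-5, 6, size=2)
    shifted = np.roll(img, shift=(int(dy), int(dx)), axis=(0, 1))
    k = int(rng.integers(0, 4))  # number of quarter turns
    return np.rot90(shifted, k)
```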
B8, color enhancement processing: mainly brightness transformation, for example enhancement in the HSV (hue, saturation, value) color space.
B9, blur processing: Gaussian filtering, box filtering, median filtering and the like can enhance the generalization ability of the model on blurred images;
B10, random erasing enhancement: a region is selected at random and covered with random values to simulate an occlusion scene, thereby improving the generalization ability of the model.
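Random erasing (B10) in a few lines; the maximum patch fraction is an assumed parameter:

```python
import numpy as np

def random_erase(img, rng, max_frac=0.3):
    """Cover a random rectangle with random values to simulate occlusion."""
    out = img.copy()
    h, w = img.shape[:2]
    eh = int(rng.integers(1, max(int(h * max_frac), 2)))
    ew = int(rng.integers(1, max(int(w * max_frac), 2)))
    y = int(rng.integers(0, h - eh + 1))
    x = int(rng.integers(0, w - ew + 1))
    out[y:y + eh, x:x + ew] = rng.integers(0, 256, size=(eh, ew), dtype=img.dtype)
    return out
```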
B11, threshold segmentation: the threshold segmentation is performed using the OTSU algorithm.
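B11's OTSU threshold selection, written out from scratch to show the between-class-variance criterion (in practice `cv2.threshold` with `THRESH_OTSU` performs this search):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the OTSU threshold: the level maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))  # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))
```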
B12, gray-scale transformation: the gray-scale transformation can be performed in the following ways: 1. image negation; 2. contrast stretching; 3. dynamic range compression; 4. gray-level slicing.
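The four gray-scale transformations listed in B12 as one-liners; the band limits used in contrast stretching and gray-level slicing are illustrative:

```python
import numpy as np

def negate(gray):
    """1. Image negation."""
    return 255 - gray

def contrast_stretch(gray, lo=50, hi=200):
    """2. Linearly stretch [lo, hi] to the full 0-255 range."""
    g = np.clip(gray, lo, hi).astype(np.float32)
    return ((g - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def compress_dynamic_range(gray):
    """3. Log transform: compresses large brightness ranges."""
    c = 255.0 / np.log1p(255.0)
    return (c * np.log1p(gray.astype(np.float32))).astype(np.uint8)

def gray_level_slice(gray, lo=100, hi=150):
    """4. Gray-level slicing: highlight the band [lo, hi]."""
    return np.where((gray >= lo) & (gray <= hi), 255, gray).astype(np.uint8)
```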
The dangerous driving behavior comprises at least one of the following behaviors: fatigue driving behavior, phone-call behavior, smoking behavior, distracted driving behavior, off-duty behavior, and keeping the line of sight off the road surface for a long time.
Referring to fig. 3, the method for detecting and determining the behavior of the sight line leaving the road surface for a long time includes the following steps:
c1, performing horizontal gray scale integral projection on the binarized face image to obtain upper and lower eyelid coordinates, and performing face positioning;
c2, firstly determining the position of the lips in the face image by using a color analysis method;
c3, performing edge detection and positioning of eyes according to the human face skin color area;
c4, determining the position of the pupil according to the fact that the pupil image is darker than the surrounding pixels;
and C5, determining the sight line direction according to the relative position relation of the pupil and the eye corner, and if the sight line of the human eye deviates from the right front, determining that the sight line deviates from the road surface for a long time.
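Steps C4–C5 reduce to comparing the pupil position against the eye corners. A minimal sketch, assuming the x-coordinates are already extracted by steps C1–C4 and using an assumed tolerance of 0.25 (not a value from the patent):

```python
def gaze_deviates(pupil_x, inner_corner_x, outer_corner_x, tol=0.25):
    """Decide whether the gaze deviates from straight ahead (step C5).

    Uses the pupil's normalized position between the two eye corners;
    near 0.5 the eye looks roughly forward.
    """
    span = outer_corner_x - inner_corner_x
    if span == 0:
        return True  # degenerate measurement: treat as deviation
    ratio = (pupil_x - inner_corner_x) / span
    return abs(ratio - 0.5) > tol
```

A per-frame decision like this would then be accumulated over time before declaring that the line of sight has left the road surface "for a long time".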
Referring to fig. 4 and 5, wherein the fatigue driving behavior includes closed-eye and yawning, the method for determining fatigue driving by closed-eye includes the following steps:
D1, calculating the distance between the driver's upper and lower eyelids to recognize the eye open/closed state: the eye is recognized as closed when the area covered by the eyelids occupies 80% or more of the whole eye area, and as open otherwise.
D2, calculating the eye PERCLOS value at each moment within 1 minute, wherein the eye PERCLOS value = (number of closed-eye frames / total number of frames in the specified period) × 100%.
D3, comparing the eye PERCLOS value with the eye fatigue setting threshold value, if the eye PERCLOS value is larger than or equal to the eye fatigue setting threshold value, judging that the driver is in a fatigue driving state, and giving a fatigue driving alarm.
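The PERCLOS computation and threshold comparison of steps D2–D3 amount to a few lines; `closed_flags` is a hypothetical per-frame sequence of 0/1 eye-closure labels over the 1-minute window:

```python
def perclos(closed_flags):
    """PERCLOS over a window: closed-eye frames / total frames x 100% (D2)."""
    return 100.0 * sum(closed_flags) / len(closed_flags)

def fatigue_alarm(closed_flags, threshold=80.0):
    """Step D3: alarm when PERCLOS reaches the eye-fatigue threshold
    (80% per the experiments in this embodiment)."""
    return perclos(closed_flags) >= threshold
```

The mouth PEROPEN decision of steps E2–E3 is identical in form, with open-mouth flags and an 82% threshold.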
The method for judging fatigue driving by yawning comprises the following steps:
e1, calculating the opening distance of the mouth of the driver to identify the open-close state of the mouth;
e2, calculating the mouth PEROPEN value at each moment within 1 minute, wherein the mouth PEROPEN value = (number of open-mouth frames / total number of frames in the specified period) × 100%;
e3, comparing the oral PEROPEN value with the oral fatigue set threshold, if the oral PEROPEN value is larger than or equal to the oral fatigue set threshold, judging that the driver is in a fatigue driving state, and giving a fatigue driving alarm.
Referring to fig. 6, the method for detecting the call-making behavior, the smoking behavior and the distracted driving behavior comprises the following steps:
f1, detecting accessories in the image, wherein the accessories comprise articles such as cigarettes, mobile phones, safety belts and snacks, and carrying out voice monitoring;
f2, judging whether the alarm limit is reached, if the alarm limit is reached (if the phenomena of calling, smoking, not fastening the safety belt and the like occur), alarming the bad behavior.
Example 2
Referring to fig. 7 and 8, a dangerous driving behavior detection and positioning system applied to the Internet of Vehicles, which employs the method described in embodiment 1, includes a client 100 (driver real-time monitoring device) and a server 200 (vehicle and driver management platform) connected to it, where the client 100 and the server 200 communicate over the Internet of Vehicles. The client 100 is installed and fixed beside the inside rear-view mirror so as not to obstruct the driver's view. It should be noted that the server 200 may be one or more terminals, including mobile phones, computers and the like, so that each server 200 can receive or view the corresponding information sent by the client 100. The client 100 comprises a first control unit 110, which is connected to a video acquisition module 120, an image preprocessing module 130, a dangerous driving behavior detection and analysis module 140, a vehicle positioning module 150 and an audible and visual alarm module 160.
The video acquisition module 120 adopts a Raspberry Pi camera installed about 0.5 m from the driver and facing the driver (matching the relative positions of the driver's seat and the camera in an actual driving environment). The video acquisition module 120 can perform occlusion and deviation self-checking: if the camera is occluded during driving and the occluded area exceeds 36%, a prompt tone is issued to remind the driver to remove the obstruction; at the same time, by checking whether the face appears at the correct position in the image, positioning self-checking is realized, so that the driver can be effectively supervised.
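A rough sketch of the occlusion self-check; the 36% area threshold comes from the text above, while deciding "occluded" by counting nearly-black pixels (and the `dark_level` cutoff) is an assumed heuristic:

```python
import numpy as np

def camera_occluded(gray, dark_level=20, area_threshold=0.36):
    """If more than 36% of the frame is nearly black, assume the lens is
    blocked and a prompt tone should be played."""
    frac = float(np.mean(gray < dark_level))
    return frac > area_threshold
```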
The image preprocessing module 130 may recognize an input image, and preprocess the input image before feature extraction, segmentation, and matching, thereby eliminating irrelevant information from the image, restoring useful real information, enhancing detectability of relevant information, and simplifying data to the maximum extent, thereby improving reliability of feature extraction, image segmentation, matching, and recognition.
The dangerous driving behavior detection and analysis module 140 may perform feature extraction, segmentation, and matching processing on the input image, perform face and eye positioning, analyze dangerous driving behavior, and alarm when matching with a preset alarm type is successful. Wherein the dangerous driving behavior comprises at least one of the following behaviors: fatigue driving behavior, calling behavior, smoking behavior, distracted driving behavior, off duty behavior, and long-time off-road behavior. The preset alarm types comprise a driver off-seat alarm, a sight line deviation alarm, a driver ID alarm, a fatigue driving alarm and a bad behavior alarm.
The dangerous driving behavior detection and analysis module 140 further comprises a human face positioning module 141, an eye positioning module 142, a fatigue detection module 143, an accessory detection module 144, and a voice detection module 145.
The face positioning module 141 performs horizontal gray-scale integral projection on the binarized face image to obtain the upper and lower eyelid coordinates, for face recognition and identity matching with the driver ID. If no face is detected at the driving position, off-duty behavior is determined and a driver off-seat alarm is given; if the recognized identity does not match the driver ID, a driver ID alarm is given.
The eye positioning module 142 may position both eyes and determine the direction of the line of sight, and if the line of sight of the eyes deviates from the front for a long time, it is determined that the line of sight deviates from the road surface for a long time, and an alarm is given on the deviation of the line of sight.
The fatigue detection module 143 identifies the eye-closed state by calculating the eyelid distance and the mouth-open state by calculating the mouth opening distance, and judges fatigue driving from the proportion of time the eyes are closed or the mouth is open. If the eye fatigue threshold or the mouth fatigue threshold is reached, fatigue driving behavior is determined and a fatigue driving alarm is given.
The system takes consecutive frames as input and improves the accuracy of human behavior detection through spatio-temporal stream fusion; continuous-frame input combined with spatio-temporal stream fusion extracts temporal information better and thus represents video features better. Experiments show that with consecutive frames (6 frames) as input and spatio-temporal stream fusion, the system achieves higher accuracy while meeting the online real-time requirement, reducing misidentified incidental events.
For example, when judging fatigue driving by eye closure, if the eyes are closed in two consecutive frames, the interval between the two frames is counted as continuous eye-closure time; the continuous eye-closure time is accumulated over successive frames until its percentage of the elapsed time exceeds 80%, at which point fatigue driving is determined; otherwise the accumulated time is reset to zero and the process iterates.
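The consecutive-frame accumulation just described can be sketched as follows, assuming per-frame timestamps and 0/1 closed-eye flags are available; the 60 s window and 80% fraction follow the text, the rest is an illustrative simplification:

```python
def fatigue_by_continuous_closure(frame_times, closed_flags, window=60.0, frac=0.8):
    """Accumulate continuous closed-eye time across frames (resetting when an
    open eye is seen) and flag fatigue once it reaches 80% of the window."""
    closed_time = 0.0
    for i in range(1, len(closed_flags)):
        if closed_flags[i - 1] and closed_flags[i]:
            # Both frames closed: count the interval as continuous closure.
            closed_time += frame_times[i] - frame_times[i - 1]
        elif not closed_flags[i]:
            closed_time = 0.0  # eye opened: reset the accumulator
        if closed_time >= frac * window:
            return True
    return False
```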
The accessory detection module 144 can detect accessories such as cigarettes, mobile phones, snacks and safety belts; if calling, smoking, eating snacks or driving without a fastened seat belt occurs, it respectively determines phone-call behavior, smoking behavior or distracted driving behavior and gives a bad-behavior alarm.
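The accessory-to-alarm mapping of module 144 is essentially a lookup; the label strings below are hypothetical detector outputs for illustration, not names from the patent:

```python
# Hypothetical mapping from detected accessory labels to bad-behavior alarms.
ACCESSORY_ALARMS = {
    "cigarette": "smoking behavior",
    "phone": "phone-call behavior",
    "snack": "distracted driving behavior",
    "no_seatbelt": "seatbelt not fastened",
}

def bad_behavior_alarms(detections):
    """Map the accessory detector's labels to the alarms to raise."""
    return [ACCESSORY_ALARMS[d] for d in detections if d in ACCESSORY_ALARMS]
```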
The voice detection module 145 may monitor the chat, quarrel, etc. behavior of the driver, and send the monitored behavior to the server 200, and if the server 200 monitors the corresponding behavior, perform an alarm of a bad behavior.
The vehicle positioning module 150 positions the vehicle in real time while driving and shares the current position with the server 200 in real time; when the client 100 is partially or completely damaged by an abnormal collision, it automatically sends an emergency call alarm and a short message to contact family members, so that problems are discovered and handled early and the critical window for rescue is not missed.
When the dangerous driving behavior detection and analysis module 140 detects a matched dangerous driving type, the first control unit 110 controls the audible and visual alarm module 160 to give an alarm. The audible and visual alarm module 160 supports recording a family member's voice packet for the alarm reminder: voices recorded by family members are stored to replace the machine alarm sound, and the reminder phrases can be customized, avoiding the irritation a mechanical electronic voice may cause the driver.
The server 200 includes a second control unit 210, and the second control unit 210 is connected to an alarm information summarizing and classifying module 220, a driving management module 230, a positioning recording module 240, a remote monitoring module 250, and an automatic report generating module 260.
The alarm information summarizing and classifying module 220 can summarize, screen and discriminate the alarm information for classified storage, namely, alarm type data storage and audio-visual data storage.
The driving management module 230 stores a plurality of driver ID information and types of driving tasks, and can perform driver ID management and driving work management, so as to implement driving work assignment, driving route planning, and monitoring duration.
The positioning recording module 240 can receive the information of the vehicle positioning module 150, so as to realize recording and matching of the driving track and real-time positioning of the vehicle, and assist in rescue work management.
The video acquisition module 120 acquires the picture in real time and uploads the picture to the server 200 in real time, so that the real-time monitoring of the behavior of the driver in the vehicle is remotely realized, and the remote monitoring module 250 can be used for real-time picture monitoring, history record playback and remote voice call. If the driver behavior is not standard, the driver can be reminded appropriately through the remote voice call. If punishment needs to be made on the driver, the system can play back the video of the situation in the driver's car for later viewing.
The automatic generation report module 260 can realize the automatic generation of the driving behavior report of each driver, and facilitate the necessary reminding and warning processing for drivers who violate too many rules.
The working principle and the beneficial effects of the embodiment are as follows:
the system can realize real-time monitoring of dangerous driving behaviors of the driver, is matched with preset alarm types, gives an alarm after successful matching, and sends information such as dangerous driving behavior information, vehicle positioning information and driver ID to the server 200 through the Internet of vehicles, so that the dangerous driving behaviors of the driver are prevented and reduced.
To illustrate the accuracy and performance of the present system, three experimental results are described below:
(I) Fatigue driving accuracy verification
In order to accurately analyze the degree of change of the muscles of the eyes, mouth and face and thus accurately judge driving behavior, thresholds for the eyes and mouth were determined. The test environment was a laboratory simulating a real driving environment, with the subject positioned as in actual driving; the head movement range of a typical driver is within about 35° vertically and about 34° horizontally, and restrictions on the subject's limb movement were set before the test. The pictures collected by the camera are in JPG format at 24 frames per second with a size of 640 × 480; a single test lasts 1 minute, collecting 24 × 60 × 1 frames of images. Table 1 below shows part of the eye fatigue test results; after a large amount of test data, the eye fatigue threshold was determined to be 80%. As shown in Table 2 below, after a large amount of test data the mouth fatigue threshold is 82%.
Namely: the system records the start time T1, the end time T2 (where T2 − T1 = 1 minute), the number of closed-eye frames and the number of open-mouth frames; fatigue driving is determined when, within the minute, the total eye-closure time t1 exceeds 48 seconds or the total mouth-open time t2 exceeds 49 seconds.
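The per-minute decision rule just stated can be written directly; the 48 s and roughly 49 s cutoffs correspond to 80% and 82% of a 60-second window:

```python
def fatigue_over_minute(t_closed, t_open_mouth,
                        eye_frac=0.80, mouth_frac=0.82, window=60.0):
    """Fatigue if total eye-closure time exceeds 80% of the minute (48 s)
    or total mouth-open time exceeds 82% (about 49 s)."""
    return t_closed > eye_frac * window or t_open_mouth > mouth_frac * window
```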
TABLE 1 fatigue eye test results
TABLE 2 fatigue oral area test results
(II) System accuracy verification
By testing a portion of the data retained in advance, the results shown in Table 3 below were obtained: of 3428 targets to be detected, 3017 were detected. The average accuracy of the system in detecting a single object is about 92.73%, and the overlap between the detected position and the true position of an object is about 85.39%.
Categories      Detection accuracy
Head            95.26%
Face            94.43%
Closed eye      91.09%
Hand            89.51%
Yawning         94.23%
Smoking         94.72%
Phone call      92.49%
Safety belt     90.50%
TABLE 3 accuracy verification results
(III) System Performance verification
The detection time of the system for a single picture is about 100-130 ms, so real-time target detection during driving is entirely feasible. System resource occupancy is low, avoiding stuttering during driving; network delay is also low and does not affect data transmission.
Those skilled in the art will appreciate that all or part of the flow of the methods of the above embodiments may be implemented by a computer program instructing the associated hardware, and the program may be stored in a computer-readable storage medium. The computer-readable storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. The dangerous driving behavior detection and positioning method applied to the Internet of vehicles is characterized by comprising the following steps of:
a1, acquiring real-time monitoring videos and images of a driver;
a2, recognizing an input image and carrying out image preprocessing to eliminate irrelevant information and extract and restore useful real information;
a3, performing feature extraction, segmentation and matching processing on the input image, and analyzing dangerous driving behaviors;
a4, extracting relevant dangerous driving behaviors, matching the dangerous driving behaviors with preset alarm types, and giving an alarm after the matching is successful, wherein the alarm information comprises information of the dangerous driving behaviors, vehicle positioning information, driver ID and the like.
2. The dangerous driving behavior detection and positioning method applied to the Internet of vehicles as claimed in claim 1, wherein the step A2 specifically comprises the following steps:
b1, performing gray processing on the input color picture;
b2, extracting a histogram of the grayed picture;
b3, histogram equalization;
b4, histogram equalization transformation;
b5, calculating a histogram;
b6, performing median filtering processing;
b7, geometric enhancement treatment;
b8, color enhancement processing;
b9, fuzzy processing;
b10, random erasure enhancement processing;
b11, threshold segmentation;
b12, gray scale transformation.
3. The dangerous driving behavior detection and positioning method applied to the Internet of vehicles according to claim 1, wherein the dangerous driving behavior comprises at least one of the following behaviors: fatigue driving behavior, calling behavior, smoking behavior, distracted driving behavior, off-duty behavior, and behavior of long-time off-road line of sight.
4. The dangerous driving behavior detection and positioning method applied to the Internet of vehicles according to claim 3, wherein the detection and judgment method of the behavior that the sight line is separated from the road surface for a long time comprises the following steps:
c1, performing horizontal gray scale integral projection on the binarized face image to obtain upper and lower eyelid coordinates, and performing face positioning;
c2, firstly determining the position of the lips in the face image by using a color analysis method;
c3, performing edge detection and positioning of eyes according to the human face skin color area;
c4, determining the position of the pupil according to the fact that the pupil image is darker than the surrounding pixels;
and C5, determining the sight line direction according to the relative position relation of the pupil and the eye corner, and if the sight line of the human eyes deviates from the right front, judging that the sight line deviates from the road surface for a long time.
5. The dangerous driving behavior detection and positioning method applied to the Internet of vehicles according to claim 3, wherein the fatigue driving behaviors comprise eye closure and yawning, and the eye closure fatigue driving judgment method comprises the following steps:
d1, calculating the distance between the upper eyelid and the lower eyelid of the driver to identify the open-closed state of the eyes;
d2, calculating the eye PERCLOS value at each moment within 1 minute, wherein the eye PERCLOS value = (number of closed-eye frames / total number of frames in the specified period) × 100%;
d3, comparing the PERCLOS value of the eyes with the eye fatigue setting threshold value, if the PERCLOS value of the eyes is more than or equal to the eye fatigue setting threshold value, judging that the driver is in a fatigue driving state, and giving a fatigue driving alarm;
the method for judging fatigue driving by yawning comprises the following steps:
e1, calculating the opening distance of the mouth of the driver to identify the open-close state of the mouth;
e2, calculating the mouth PEROPEN value at each moment within 1 minute, wherein the mouth PEROPEN value = (number of open-mouth frames / total number of frames in the specified period) × 100%;
e3, comparing the oral PEROPEN value with the oral fatigue set threshold, if the oral PEROPEN value is larger than or equal to the oral fatigue set threshold, judging that the driver is in a fatigue driving state, and giving a fatigue driving alarm.
6. The dangerous driving behavior detection and positioning method applied to the Internet of vehicles as claimed in claim 3, wherein the detection method of the call-making behavior, the smoking behavior and the distracted driving behavior comprises the following steps:
f1, detecting accessories in the image;
f2, judging whether the alarm limit is reached, and if the alarm limit is reached, alarming the bad behavior.
7. A dangerous driving behavior detection and positioning system applied to the internet of vehicles, adopting the method of any one of the preceding claims 1 to 6, characterized by comprising a client (100) (driver real-time monitoring device) and a server (200) (vehicle and driver management platform) in communication connection therewith, wherein the client (100) comprises a first control unit (110), and the first control unit (110) is connected with:
video capture module (120): the video acquisition module (120) is over against the driver so as to acquire real-time video and image data of the driver;
image pre-processing module (130): recognizing an input image, and preprocessing the input image before feature extraction, segmentation and matching;
dangerous driving behavior detection and analysis module (140): performing feature extraction, segmentation and matching processing on an input image, performing face and eye positioning, analyzing dangerous driving behaviors, and giving an alarm after matching with a preset alarm type is successful;
the vehicle positioning module (150) is used for positioning the running process of the vehicle in real time;
and an audible and visual alarm module (160): a family member's voice packet can be recorded for reminder alarms;
the server (200) comprises a second control unit (210), and the second control unit (210) is connected with:
alarm information summarizing and classifying module (220): summarizing, screening and discriminating alarm information to perform classified storage, namely alarm type data storage and audio-visual data storage;
driving management module (230): carrying out driver ID management and driving work management to realize driving work assignment and driving route planning;
positioning recording module (240): recording and matching the track, positioning the vehicle in real time and assisting rescue work management;
remote monitoring module (250): real-time picture monitoring, history record playback and remote voice communication;
and an automatic generation report module (260).
8. The dangerous driving behavior detection and positioning system applied to the Internet of vehicles as claimed in claim 7, wherein the dangerous driving behavior comprises at least one of the following behaviors: fatigue driving behavior, calling behavior, smoking behavior, distracted driving behavior, off-duty behavior and behavior of long-time visual line separation from the road surface;
the preset alarm types comprise driver off-seat alarm, sight line deviation alarm, driver ID alarm, fatigue driving alarm and bad behavior alarm.
9. The dangerous driving behavior detection and positioning system applied to the Internet of vehicles as claimed in claim 7, wherein the dangerous driving behavior detection and analysis module (140) further comprises:
face location module (141): carrying out horizontal gray scale integral projection on the binarized face image to obtain upper and lower eyelid coordinates for face recognition and identity matching with a driver ID;
human eye localization module (142): the eyes can be positioned and the sight line direction can be determined;
fatigue detection module (143): calculating the proportion of the duration of eye closure or the proportion of the duration of mouth opening to judge the fatigue driving behavior;
accessory detection module (144): accessories such as cigarettes, mobile phones, snacks and safety belts can be detected;
and a speech detection module (145): the driver's chat and quarrel behaviors can be detected and sent to the server (200).
10. The dangerous driving behavior detection and positioning system applied to the Internet of Vehicles as claimed in claim 7, wherein the video acquisition module adopts a Raspberry Pi camera, so that occlusion and deviation self-checking can be performed.
CN202110463260.2A 2021-04-23 2021-04-23 Dangerous driving behavior detection and positioning method and system applied to Internet of vehicles Pending CN113239754A (en)

Publications (1)

Publication Number Publication Date
CN113239754A true CN113239754A (en) 2021-08-10


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930278A (en) * 2012-10-16 2013-02-13 天津大学 Human eye sight estimation method and device
WO2015027598A1 (en) * 2013-08-30 2015-03-05 北京智谷睿拓技术服务有限公司 Reminding method and reminding device
US20160179193A1 (en) * 2013-08-30 2016-06-23 Beijing Zhigu Rui Tuo Tech Co., Ltd. Content projection system and content projection method
CN106934365A (en) * 2017-03-09 2017-07-07 广东顺德中山大学卡内基梅隆大学国际联合研究院 A kind of reliable glaucoma patient self-detection method
CN107330378A (en) * 2017-06-09 2017-11-07 湖北天业云商网络科技有限公司 A kind of driving behavior detecting system based on embedded image processing
CN109774722A (en) * 2017-11-15 2019-05-21 欧姆龙株式会社 Information processing unit, methods and procedures, driver's monitoring system and preservation media
CN111179551A (en) * 2019-12-17 2020-05-19 西安工程大学 Real-time monitoring method for dangerous chemical transport driver
CN112257696A (en) * 2020-12-23 2021-01-22 北京万里红科技股份有限公司 Sight estimation method and computing equipment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537166A (en) * 2021-09-15 2021-10-22 北京科技大学 Alarm method, alarm device and storage medium
CN114312306A (en) * 2022-01-04 2022-04-12 一汽解放汽车有限公司 Driving glasses control method, driving glasses, computer device and storage medium
CN114312306B (en) * 2022-01-04 2024-03-19 一汽解放汽车有限公司 Control method of driving glasses, computer device and storage medium
CN114241424A (en) * 2022-02-17 2022-03-25 江苏智慧汽车研究院有限公司 Unmanned vehicle driving route planning system and method for surveying and mapping
CN114241424B (en) * 2022-02-17 2022-05-31 江苏智慧汽车研究院有限公司 Unmanned vehicle driving route planning system and method for surveying and mapping inspection
CN115798247A (en) * 2022-10-10 2023-03-14 深圳市昊岳科技有限公司 Smart bus cloud platform based on big data
CN115798247B (en) * 2022-10-10 2023-09-22 深圳市昊岳科技有限公司 Intelligent public transportation cloud platform based on big data
CN116311181A (en) * 2023-03-21 2023-06-23 重庆利龙中宝智能技术有限公司 Method and system for rapidly detecting abnormal driving
CN116311181B (en) * 2023-03-21 2023-09-12 重庆利龙中宝智能技术有限公司 Method and system for rapidly detecting abnormal driving

Similar Documents

Publication Publication Date Title
CN113239754A (en) Dangerous driving behavior detection and positioning method and system applied to Internet of vehicles
CN108791299B (en) Driving fatigue detection and early warning system and method based on vision
CN111079476B (en) Driving state analysis method and device, driver monitoring system and vehicle
CN108960065B (en) Driving behavior detection method based on vision
WO2020078464A1 (en) Driving state detection method and apparatus, driver monitoring system, and vehicle
US8164463B2 (en) Driver management apparatus and travel management system
US9460601B2 (en) Driver distraction and drowsiness warning and sleepiness reduction for accident avoidance
CN106709420B (en) Method for monitoring driving behavior of commercial vehicle driver
CN110588512A (en) Dangerous driving identification and early warning device, method and system
CN105788176A (en) Fatigue driving monitoring and prompting method and system
CN110532976A (en) Fatigue driving detection method and system based on machine learning and multi-feature fusion
CN105469035A (en) Driver's bad driving behavior detection system based on binocular video analysis
CN101593425A (en) Fatigue driving monitoring method and system based on machine vision
CN112633057B (en) Intelligent monitoring method for abnormal behavior in bus
CN111661059B (en) Method and system for monitoring distracted driving and electronic equipment
CN109389794A (en) Intelligent video monitoring method and system
CN108609018B (en) Early-warning terminal, early-warning system and analysis method for analyzing dangerous driving behavior
CN111355902A (en) Method for acquiring images in vehicle by using camera and vehicle-mounted monitoring camera
CN112699802A (en) Driver micro-expression detection device and method
CN113838265A (en) Fatigue driving early warning method and device and electronic equipment
CN106874831A (en) Driving behavior detection method and system
CN116416281A (en) Grain depot AI video supervision and analysis method and system
CN111540208A (en) Method for preventing driving without license and fatigue driving based on block chain technology
CN113312958B (en) Method and device for adjusting dispatch priority based on driver state
CN115205724A (en) Alarming method and device based on abnormal behavior and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210810