CN111460938A - Vehicle driving behavior real-time monitoring method and device - Google Patents

Publication number: CN111460938A (application) / CN111460938B (granted)
Application number: CN202010201673.9A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 王小刚
Assignee (current and original): Nanjing Leading Technology Co Ltd
Legal status: Active; application granted (legal status and dates are Google's assumption, not a legal conclusion)
Prior art keywords: target, probability, detection, image, area

Classifications

    • G06V 20/59 — Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06F 18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V 40/161 — Human faces: detection; localisation; normalisation
    • G06V 40/23 — Recognition of whole body movements, e.g. for sport training

Abstract

The invention provides a method and a device for monitoring the driving behavior of a vehicle in real time. The method comprises the following steps: acquiring a monitoring image, and determining a first image area and a second image area according to area division configuration information; performing global face detection on the first image area, and determining the target number of the first image area; dividing the second image area into a plurality of target detection areas, detecting each target detection area, and determining its local face probability, human body posture probability and moving target probability; judging, from these probabilities, whether a target is present in each target detection area, so as to determine the target number of the second image area; and monitoring the vehicle driving behavior according to the sum of the target numbers of the first image area and the second image area. The method makes it possible to monitor in real time whether the driver is overloading the vehicle, faking orders with an empty vehicle, or putting the ride-hailing vehicle to private use, thereby further standardizing the operation order of the ride-hailing industry and improving the passenger experience.

Description

Vehicle driving behavior real-time monitoring method and device
Technical Field
The invention relates to the field of vehicle running detection, in particular to a method and a device for monitoring vehicle running behaviors in real time.
Background
Current ride-hailing driving monitoring technology falls into two categories. One focuses on image-recognition-based systems, such as fatigue-driving warning systems that monitor the driver. The other acquires vehicle running data through on-board sensors or the vehicle CAN bus, or acquires external environment data, and determines the vehicle driving route and the like by combining the two. The vehicle driving data acquired by both kinds of system can be used to further analyze the driving route and, in combination with the vehicle's trip orders, monitor the driving behavior of the vehicle.
However, the current vehicle monitoring field lacks a method for monitoring the number of people in a vehicle in real time, in particular for detecting the number of passengers. Without monitoring the actual number of passengers, even if the specific route of the vehicle is known, it cannot be determined whether the driver is overloading the vehicle, faking orders with an empty vehicle, putting the ride-hailing vehicle to private use, and so on. The interior of a ride-hailing vehicle does not lend itself to multi-dimensional, blind-spot-free real-time people counting: adding one or more cameras inevitably increases cost, and increasing the number of in-car cameras directly raises privacy concerns and complaints from ride-hailing passengers and degrades the user experience. As a result, the number of passengers in the vehicle cannot be well detected, and violations by ride-hailing drivers cannot be well monitored.
Disclosure of Invention
The invention provides a method and a device for monitoring vehicle driving behavior in real time, to solve the problem that, because the number of people in a ride-hailing vehicle cannot be well monitored in real time, illegal behavior by the ride-hailing driver cannot be monitored.
A first aspect of the invention provides a vehicle driving behavior real-time monitoring method, which comprises the following steps:
acquiring a monitoring image shot by a built-in camera at a vehicle-mounted end, and determining a first image area without a shielding object and a second image area with the shielding object in the monitoring image according to area division configuration information;
carrying out global face detection on the first image area, and determining the target number of the first image area;
dividing a second image area into a plurality of target detection areas, carrying out local face detection on the target detection areas and determining local face probability, carrying out human body posture detection on the target detection areas and determining human body posture probability, and carrying out moving target detection on the target detection areas and determining moving target probability;
judging whether a target exists in the target detection area according to the local face probability, the human body posture probability and the moving target probability of each target detection area so as to determine the target number of the second image area;
and monitoring the vehicle driving behavior according to the sum of the target number of the first image area and the target number of the second image area.
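The five steps above can be sketched as one counting pipeline. The callables below (`count_faces_global`, `detect_region_probs`, `fuse_probs`) are placeholders for the detectors described in the following paragraphs, not part of the patent:

```python
def monitor_frame(first_region, detection_regions, count_faces_global,
                  detect_region_probs, fuse_probs, prob_threshold=0.5):
    """Count the targets in one monitoring image and return the total.

    `first_region` is the unoccluded first image area; `detection_regions`
    are the target detection areas carved out of the second image area.
    """
    # Step 2: global face detection on the unoccluded first image area.
    n_first = count_faces_global(first_region)
    # Steps 3-4: per sub-region, obtain the three probabilities, fuse them,
    # and count the region as occupied when the fused probability is high.
    n_second = 0
    for region in detection_regions:
        p_face, p_pose, p_motion = detect_region_probs(region)
        if fuse_probs(p_face, p_pose, p_motion) > prob_threshold:
            n_second += 1
    # Step 5: behavior monitoring works off the sum of both area counts.
    return n_first + n_second
```

The detectors can be swapped freely; only the fused probability and the threshold decide occupancy.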
Optionally, performing local face detection on the target detection area and determining a local face probability comprises:
inputting the target detection area into a local face detection model to obtain a local face probability, wherein the local face detection model is obtained by network model training that takes local face images labeled as to whether a local face is present as input and the output local face probability as the training target, and the network model comprises a multi-stage classifier.
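A multi-stage (cascade) classifier of the kind named here evaluates cheap stages first and rejects a window early when any stage fails. The stage weights, thresholds and the score-to-probability squash below are illustrative assumptions, not values from the patent:

```python
def cascade_probability(features, stages):
    """Minimal cascade classifier: each stage is (weights, threshold).

    A window is rejected (probability 0.0) as soon as any stage's weighted
    feature sum falls below that stage's threshold; a window surviving all
    stages gets the final stage score squashed into (0, 1) as a crude
    local-face probability.
    """
    score = 0.0
    for weights, threshold in stages:
        score = sum(w * f for w, f in zip(weights, features))
        if score < threshold:
            return 0.0          # early rejection: clearly not a face
    return score / (1.0 + score) if score > 0 else 0.0

# Toy two-stage cascade over two features.
stages = [([1.0, 0.5], 0.3), ([0.8, 1.2], 0.9)]
```

A real cascade (e.g. Viola-Jones) uses many boosted stages over Haar-like features; the control flow is the same.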
Optionally, performing human body posture detection on the target detection area and determining a human body posture probability comprises:
inputting the target detection area into a human body posture detection model to obtain a human body posture probability, wherein the human body posture detection model is obtained by network model training that takes human body posture images labeled as to whether a human body posture is present as input and the output human body posture probability as the training target, and the network model comprises an SVM classifier.
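An SVM classifier outputs a signed distance to its decision surface; turning that into the posture probability the patent asks for is commonly done with a Platt-style sigmoid. A minimal sketch with a linear SVM, using illustrative (untrained) weights:

```python
import math

def svm_pose_probability(features, weights, bias):
    """Linear SVM decision value mapped to (0, 1) with a sigmoid.

    `features` would be e.g. HOG-like descriptors of the target detection
    area; `weights`/`bias` stand in for a trained SVM (assumed values).
    """
    decision = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-decision))
```

Points on the decision surface (decision value 0) map to probability 0.5; confident detections approach 1.0.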
Optionally, performing moving target detection on the target detection area and determining a moving target probability comprises:
determining a foreground region of the target detection area by using a frame difference method, and inputting the foreground region into a moving target detection model to determine a moving target probability, wherein the moving target detection model is obtained by network model training that takes moving target images labeled as to whether a moving target is present as input and the output moving target probability as the training target.
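The frame difference method named here thresholds the per-pixel absolute difference between consecutive frames to obtain a foreground mask. A pure-Python sketch on grayscale frames stored as nested lists (the threshold of 25 is an assumed value):

```python
def frame_difference_mask(prev_frame, curr_frame, threshold=25):
    """Foreground mask: 1 where |curr - prev| exceeds the threshold, else 0."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_frame, curr_frame)]

def foreground_ratio(mask):
    """Fraction of moving pixels: a crude stand-in for motion evidence
    before the mask is handed to the moving target detection model."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total
```

In practice the mask would be cleaned with morphological operations before being fed to the trained model.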
Optionally, determining the target number of the second image area according to the local face probability, the human body posture probability and the moving target probability of each target detection area comprises:
inputting the local face probability, the human body posture probability and the moving target probability of a target detection area into a target detection model to obtain a target-present probability, and determining that a target exists in the target detection area when the target-present probability is greater than a probability threshold, wherein the target detection model is obtained by network model training that takes the three probabilities of images labeled as to whether a target is present as input and the output target-present probability as the training target, and the network model comprises a regressor;
and determining the target number of the second image area from the number of target detection areas in which a target is determined to exist.
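The regressor-based fusion described here can be sketched as a logistic regressor over the three probabilities. The coefficients, bias, and 0.5 threshold below are illustrative assumptions, not trained values from the patent:

```python
import math

def fused_target_probability(p_face, p_pose, p_motion,
                             coeffs=(2.0, 1.5, 1.0), bias=-2.0):
    """Combine the three per-area probabilities into one 'target present'
    probability with a logistic regressor (illustrative coefficients)."""
    z = coeffs[0] * p_face + coeffs[1] * p_pose + coeffs[2] * p_motion + bias
    return 1.0 / (1.0 + math.exp(-z))

def count_occupied(area_probs, prob_threshold=0.5):
    """Target number of the second image area: count of target detection
    areas whose fused probability clears the threshold."""
    return sum(1 for p in area_probs
               if fused_target_probability(*p) > prob_threshold)
```

Weighting face evidence highest reflects that a partial face is the strongest single cue; the actual weights would come from training.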
Optionally, the method further comprises:
detecting left-behind articles in the monitoring image, and issuing a left-behind article alarm when it is determined that no trip order is currently active and a left-behind article is present in the monitoring image.
Optionally, monitoring the vehicle driving behavior according to the sum of the target numbers of the first image area and the second image area comprises:
when the sum of the target numbers is greater than the maximum number of passengers the vehicle may carry, issuing a vehicle overload alarm; or
when the sum of the target numbers is greater than 0 and it is determined that no trip order is currently active, issuing a vehicle illegal-use alarm; or
when the sum of the target numbers is 0 and a trip order is currently active, issuing a vehicle fake-order alarm.
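The three alarm rules can be expressed as a single decision function; the alarm names are paraphrases of the patent's alarms:

```python
def driving_behavior_alarm(passenger_count, max_capacity, has_trip_order):
    """Map the summed passenger count and trip-order state to an alarm."""
    if passenger_count > max_capacity:
        return "overload"       # more passengers than the vehicle may carry
    if passenger_count > 0 and not has_trip_order:
        return "illegal_use"    # passengers aboard with no active trip order
    if passenger_count == 0 and has_trip_order:
        return "fake_order"     # an order is running but the car is empty
    return None                 # normal operation
```

Note the count excludes the driver, so `max_capacity` is the passenger capacity, not the total seat count.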
A second aspect of the invention provides a real-time monitoring device for vehicle driving behavior, the device comprising a memory for storing instructions, and
a processor for reading the instructions in the memory and executing a process comprising:
acquiring a monitoring image shot by a built-in camera at a vehicle-mounted end, and determining a first image area without a shielding object and a second image area with the shielding object in the monitoring image according to area division configuration information;
carrying out global face detection on the first image area, and determining the target number of the first image area;
dividing a second image area into a plurality of target detection areas, carrying out local face detection on the target detection areas and determining local face probability, carrying out human body posture detection on the target detection areas and determining human body posture probability, and carrying out moving target detection on the target detection areas and determining moving target probability;
judging whether a target exists in the target detection area according to the local face probability, the human body posture probability and the moving target probability of each target detection area so as to determine the target number of the second image area;
and monitoring the vehicle driving behavior according to the sum of the target number of the first image area and the target number of the second image area.
Optionally, the processor performs local face detection on the target detection area and determines a local face probability by:
inputting the target detection area into a local face detection model to obtain a local face probability, wherein the local face detection model is obtained by network model training that takes local face images labeled as to whether a local face is present as input and the output local face probability as the training target, and the network model comprises a multi-stage classifier.
Optionally, the processor performs human body posture detection on the target detection area and determines a human body posture probability by:
inputting the target detection area into a human body posture detection model to obtain a human body posture probability, wherein the human body posture detection model is obtained by network model training that takes human body posture images labeled as to whether a human body posture is present as input and the output human body posture probability as the training target, and the network model comprises an SVM classifier.
Optionally, the processor performs moving target detection on the target detection area and determines a moving target probability by:
determining a foreground region of the target detection area by using a frame difference method, and inputting the foreground region into a moving target detection model to determine a moving target probability, wherein the moving target detection model is obtained by network model training that takes moving target images labeled as to whether a moving target is present as input and the output moving target probability as the training target.
Optionally, the processor determines the target number of the second image area according to the local face probability, the human body posture probability and the moving target probability of each target detection area by:
inputting the local face probability, the human body posture probability and the moving target probability of a target detection area into a target detection model to obtain a target-present probability, and determining that a target exists in the target detection area when the target-present probability is greater than a probability threshold, wherein the target detection model is obtained by network model training that takes the three probabilities of images labeled as to whether a target is present as input and the output target-present probability as the training target, and the network model comprises a regressor;
and determining the target number of the second image area from the number of target detection areas in which a target is determined to exist.
Optionally, the processor is further configured to:
detect left-behind articles in the monitoring image, and issue a left-behind article alarm when it is determined that no trip order is currently active and a left-behind article is present in the monitoring image.
Optionally, the processor monitors the vehicle driving behavior according to the sum of the target numbers of the first image area and the second image area by:
when the sum of the target numbers is greater than the maximum number of passengers the vehicle may carry, issuing a vehicle overload alarm; or
when the sum of the target numbers is greater than 0 and it is determined that no trip order is currently active, issuing a vehicle illegal-use alarm; or
when the sum of the target numbers is 0 and a trip order is currently active, issuing a vehicle fake-order alarm.
A third aspect of the invention provides a vehicle driving behavior real-time monitoring device, which comprises the following modules:
the monitoring image acquisition module is used for acquiring a monitoring image shot by a built-in camera at the vehicle-mounted end and determining a first image area without a shielding object and a second image area with the shielding object in the monitoring image according to area division configuration information;
the first image area target number determining module is used for carrying out global face detection on the first image area and determining the target number of the first image area;
the second image area detection module is used for dividing a second image area into a plurality of target detection areas, carrying out local face detection on the target detection areas and determining local face probability, carrying out human body posture detection on the target detection areas and determining human body posture probability, and carrying out moving target detection on the target detection areas and determining moving target probability;
the second image area target number determining module is used for judging whether a target exists in the target detection area according to the local face probability, the human body posture probability and the moving target probability of each target detection area so as to determine the target number of the second image area;
and the vehicle behavior determining module is used for monitoring the vehicle driving behavior according to the sum of the target number of the first image area and the target number of the second image area.
Optionally, the second image area detection module performs local face detection on the target detection area and determines a local face probability by:
inputting the target detection area into a local face detection model to obtain a local face probability, wherein the local face detection model is obtained by network model training that takes local face images labeled as to whether a local face is present as input and the output local face probability as the training target, and the network model comprises a multi-stage classifier.
Optionally, the second image area detection module performs human body posture detection on the target detection area and determines a human body posture probability by:
inputting the target detection area into a human body posture detection model to obtain a human body posture probability, wherein the human body posture detection model is obtained by network model training that takes human body posture images labeled as to whether a human body posture is present as input and the output human body posture probability as the training target, and the network model comprises an SVM classifier.
Optionally, the second image area detection module performs moving target detection on the target detection area and determines a moving target probability by:
determining a foreground region of the target detection area by using a frame difference method, and inputting the foreground region into a moving target detection model to determine a moving target probability, wherein the moving target detection model is obtained by network model training that takes moving target images labeled as to whether a moving target is present as input and the output moving target probability as the training target.
Optionally, the second image area target number determining module determines the target number of the second image area according to the local face probability, the human body posture probability and the moving target probability of each target detection area by:
inputting the local face probability, the human body posture probability and the moving target probability of a target detection area into a target detection model to obtain a target-present probability, and determining that a target exists in the target detection area when the target-present probability is greater than a probability threshold, wherein the target detection model is obtained by network model training that takes the three probabilities of images labeled as to whether a target is present as input and the output target-present probability as the training target, and the network model comprises a regressor;
and determining the target number of the second image area from the number of target detection areas in which a target is determined to exist.
Optionally, the device further comprises a left-behind article detection module configured to:
detect left-behind articles in the monitoring image, and issue a left-behind article alarm when it is determined that no trip order is currently active and a left-behind article is present in the monitoring image.
Optionally, the vehicle behavior determining module monitors the vehicle driving behavior according to the sum of the target numbers of the first image area and the second image area by:
when the sum of the target numbers is greater than the maximum number of passengers the vehicle may carry, issuing a vehicle overload alarm; or
when the sum of the target numbers is greater than 0 and it is determined that no trip order is currently active, issuing a vehicle illegal-use alarm; or
when the sum of the target numbers is 0 and a trip order is currently active, issuing a vehicle fake-order alarm.
A fourth aspect of the present invention provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the vehicle driving behavior real-time monitoring method of any one of the methods provided by the first aspect of the invention.
The vehicle driving behavior real-time monitoring method provided by the invention offers a way to detect the number of targets in a vehicle under conditions that make targets hard to detect, such as occlusion, lighting and posture. By monitoring the target number, it can detect in real time whether the driver is overloading the vehicle, faking orders with an empty vehicle, or putting the ride-hailing vehicle to private use, further standardizing the operation order of the ride-hailing industry and improving the passenger experience.
Drawings
FIG. 1 is a schematic diagram of a real-time monitoring system for vehicle driving behavior;
FIG. 2 is a flow chart of a method for monitoring vehicle driving behavior in real time;
FIG. 3A is a schematic view of the position of a camera inside a vehicle;
FIG. 3B is a schematic view of a front view image taken by a built-in camera at the vehicle-mounted end;
FIG. 3C is a schematic diagram of a division of the second region;
FIG. 4 is a complete flow chart of a method for real-time monitoring of vehicle driving behavior;
FIG. 5 is a structural diagram of a real-time monitoring device for vehicle driving behavior;
fig. 6 is a block diagram of a real-time monitoring device for vehicle driving behavior.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For convenience of understanding, terms referred to in the embodiments of the present invention are explained below:
(1) DVR (Digital Video Recorder), the intelligent dual-recording acquisition terminal: compared with a traditional analog video recorder, it records to a hard disk and is therefore often called a hard disk video recorder. It is a computer system for image storage and processing, with functions for long-duration video recording, audio recording, and remote monitoring and control of image/voice;
(2) IOV (Internet of Vehicles), the vehicle background terminal: a type of vehicle networking equipment. Through wireless communication technology, on-board equipment makes effective use of the dynamic information of all vehicles on an information network platform and provides different functional services during vehicle operation. By communicating with other vehicles and network systems, it can help maintain safe following distances, reduce the probability of collision accidents, provide real-time navigation to vehicle owners, and improve the efficiency of traffic operation;
(3) T-BOX (Telematics BOX): serves as a wireless gateway, providing a remote communication interface for the whole vehicle through 4G/5G remote wireless communication, GPS satellite positioning, acceleration sensing, CAN communication and other functions. It offers services including vehicle data acquisition, track recording, fault monitoring, remote query and control (locking/unlocking, air conditioning control, window control, engine torque limitation, engine start/stop), driving behavior analysis, and hotspot sharing.
the application provides a vehicle behavior real-time supervision system that traveles utilizes distinctive mobile unit configuration and specific target detection algorithm real-time supervision net car appointment the interior passenger's of car number, for the driver swipes, the private use of bus, passenger lose article, condition such as vehicle overload provides real-time detection, further standardization net car appointment operation order, improves user experience.
As shown in fig. 1, the system specifically includes the following devices: the DVR intelligent dual-recording acquisition terminal 101, which includes the vehicle-mounted built-in camera for capturing the driving and riding conditions inside the vehicle; the AI-BOX artificial intelligence perception terminal 102, which processes the images and videos sent by the camera, judges the real-time conditions inside the vehicle, and obtains the vehicle's trip order status; and the IOV vehicle background terminal 103, which receives the vehicle driving behavior results obtained from the image processing. As an optional implementation, the system further includes a T-BOX intelligent remote control terminal 104. The terminals 101, 102 and 103 can be integrated into the T-BOX intelligent remote control terminal 104, or can be distributed at different positions in the vehicle and connected over a network;
after the vehicle is started, the vehicle-mounted end built-in camera in the DVR shoots the information in the vehicle, the shot area mainly comprises a main driving area, a secondary driving area and a gap area between seats, and the target number in the vehicle is determined according to the target number in each area.
The embodiment of the application provides a real-time monitoring method for vehicle driving behavior, applied to the AI-BOX artificial intelligence perception terminal 102. As shown in FIG. 2, the method comprises the following steps:
step S201, acquiring a monitoring image shot by a built-in camera at a vehicle-mounted end, and determining a first image area without a shielding object and a second image area with the shielding object in the monitoring image according to area division configuration information;
the monitoring image shot by the built-in camera of the vehicle-mounted end is divided into a plurality of areas according to the area division configuration information, for example, the built-in camera of the vehicle-mounted end in the net appointment vehicle is usually installed at the position of the center of a windshield of the vehicle, and the effect of shooting passengers in the vehicle at the position is better, as an optional implementation mode, the images of different areas can be shot by a plurality of cameras so as to achieve a better monitoring effect, but because the number of the cameras in the vehicle is increased, the privacy concerns and complaints of the net appointment vehicle passengers are directly caused, so the detailed explanation of the scheme of increasing the number of the cameras is not carried out in the embodiment;
as shown in fig. 3A, which is a side view of a vehicle, a vehicle-mounted end built-in camera 301 is installed at the center of a windshield of the vehicle to monitor a shadow area in the image, a front view image in the vehicle is taken by the vehicle-mounted end built-in camera 301, as shown in fig. 3B, according to a preset area division, a monitoring image taken by the vehicle-mounted end built-in camera is divided into a first image area 302 without a barrier, since the number of targets to be monitored in the present application is the number of passengers, a taken area to which a driver belongs can be filtered, and a second image area 303 with a barrier outside the first image area, a specific area division mode can be used for dividing different areas according to the arrangement position of the camera, the focal length of the camera, the arrangement in the vehicle, and the like; in general, the first image area refers to an area which can be completely shot by a built-in camera at the vehicle-mounted end, and the second image area is a vehicle rear-row area which cannot be completely shot due to the fact that a vehicle seat is shielded.
Step S202, carrying out global face detection on the first image area, and determining the target number of the first image area;
the method for detecting the global face comprises the steps of carrying out global face detection on a first image area, wherein the method for detecting the global face can adopt a deep neural network to carry out face detection, judging the number of targets in the first image area, and carrying out global face detection on the first image area in such a way that a specific non-shielded area in a vehicle is detected, for example, the face detection is respectively carried out on a copilot area in the vehicle and a non-shielded area in the back row of the vehicle, and the number of the targets in the first image area is determined according to whether the targets exist in a plurality of areas in the first image area;
since the number of targets finally judged does not include the driver of the vehicle, either the driver's position is not detected in the first image region, or the driver's face features are stored in the face detection algorithm in advance so that, during global face detection, the driver's face information is not mistakenly counted as a target in the first image region.
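As a rough sketch of this counting step (the detection format, confidence threshold and the driver-filtering rule below are assumptions for illustration, not the patent's exact algorithm), the first region's target count might be computed from a deep face detector's output boxes like this:

```python
import numpy as np

def count_faces(detections, conf_threshold=0.6):
    """Count detections above a confidence threshold.

    `detections` is an (N, 5) array of [x1, y1, x2, y2, score] rows,
    in the format a typical deep face detector produces.
    """
    if len(detections) == 0:
        return 0
    scores = np.asarray(detections)[:, 4]
    return int(np.sum(scores >= conf_threshold))

def targets_in_first_region(detections, driver_box,
                            conf_threshold=0.6, iou_threshold=0.5):
    """Count faces in the unobstructed region, excluding the driver.

    `driver_box` (x1, y1, x2, y2) is assumed known from the camera
    configuration; any detection overlapping it is filtered out.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / float(area_a + area_b - inter) if inter else 0.0

    count = 0
    for det in detections:
        if det[4] < conf_threshold:
            continue  # weak detection, skip
        if iou(det[:4], driver_box) > iou_threshold:
            continue  # treat as the driver, not a passenger target
        count += 1
    return count
```

The same filtering idea extends to the pre-stored-face variant: replace the box-overlap test with a face-embedding match against the driver's stored features.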
Step S203, dividing a second image area into a plurality of target detection areas, carrying out local face detection on the target detection areas and determining local face probability, carrying out human body posture detection on the target detection areas and determining human body posture probability, and carrying out moving target detection on the target detection areas and determining moving target probability;
the second image area is divided into a plurality of target detection areas. As shown in fig. 3C, the second image area is divided into a first target detection area 304 and a second target detection area 305; the dividing manner is not limited to the method provided in this embodiment. The second image area may be divided according to the default seating positions of passengers behind the obstruction: for example, if the vehicle accommodates five people at most, the rear-row area is divided into 3 equal parts, and each part blocked by a seat is defined as a target detection area. Local face detection is performed on each target detection area and a local face probability is extracted, by which it can be determined whether a target exists in the area. Human body posture detection is performed on the target detection area and a human body posture probability is extracted, by which it can likewise be determined whether a target exists in the area. Moving target detection is performed on the target detection area and a moving target probability is determined, by which it can be determined whether a moving target exists in the area; determining the moving target further enriches the feature description of the whole target and enhances the accuracy of target detection;
step S204, judging whether the target detection area has a target or not according to the local face probability, the human body posture probability and the moving target probability of each target detection area so as to determine the target number of the second image area;
whether a target exists in each target detection area is judged according to the local face probability, the human body posture probability and the moving target probability of that area; the target detection results of all the target detection areas are then summarized to determine the target number of the second image area.
And step S205, monitoring the vehicle running behavior according to the sum of the target number of the first image area and the target number of the second image area.
Adding the target number of the first image area and the target number of the second image area to obtain the sum of the target number in the vehicle;
when the sum of the target number is larger than the maximum number of accommodated passengers, sending a vehicle overload alarm to a vehicle background terminal; or
When the sum of the target quantity is larger than 0 and no travel order is determined at present, sending an illegal vehicle use alarm to a vehicle background terminal; or
And when the sum of the target quantity is 0 and it is determined that a travel order currently exists, sending a vehicle list brushing alarm to the vehicle background terminal.
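A minimal sketch of the three alarm branches above (the function name, alarm labels and branch ordering are illustrative, not from the patent):

```python
def driving_behavior_alarm(target_count, max_passengers, has_trip_order):
    """Map the in-vehicle target count and order state to an alarm type.

    Returns 'overload', 'illegal_use', 'fake_order', or None, mirroring
    the three alarm conditions sent to the vehicle background terminal.
    """
    if target_count > max_passengers:
        return "overload"      # more people than the vehicle may carry
    if target_count > 0 and not has_trip_order:
        return "illegal_use"   # passengers aboard with no active order
    if target_count == 0 and has_trip_order:
        return "fake_order"    # active order but an empty vehicle
    return None                # nothing abnormal
```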
Since it is easy for detection errors to occur in determining the number of targets in the vehicle through independent single-frame images, the sum of the number of targets in the vehicle is generally determined using the average of the number of targets in multiple-frame images to reduce the detection errors.
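The multi-frame averaging suggested here could be a simple sliding window over per-frame counts (the window length is an assumption):

```python
from collections import deque

class SmoothedTargetCount:
    """Average per-frame target counts over a sliding window to damp
    single-frame detection errors."""

    def __init__(self, window=15):
        self.counts = deque(maxlen=window)  # keeps only the last `window` counts

    def update(self, frame_count):
        self.counts.append(frame_count)
        # Round the running mean to the nearest whole person.
        return round(sum(self.counts) / len(self.counts))
```

A single spurious detection in one frame then shifts the reported count only fractionally, instead of triggering a false alarm.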
The trip order can be judged by connecting the TBOX in the vehicle with the ride-hailing platform APP (Application) of the ride-hailing driver and acquiring the driver's current order processing status in real time.
Specifically, the various driving behaviors of the vehicle are sent to the IOV (Internet of Vehicles) background terminal, and the IOV background terminal sends the specific violation time periods of the driving behaviors, the order information and the driver information to the ride-hailing monitoring platform through the TBOX wireless communication system, so that real-time monitoring of the driving behavior of the vehicle is achieved.
The vehicle driving behavior real-time monitoring method provided by the invention detects the number of targets in a vehicle under conditions that make targets hard to detect, such as occlusion, illumination and posture. By monitoring the number of targets, it can determine in real time whether the vehicle is overloaded, whether an empty vehicle is brushing fake orders, and whether a ride-hailing vehicle is being used privately, further standardizing the operating order of the ride-hailing industry and improving the passenger experience.
As an optional implementation manner, after the monitoring image shot by the built-in camera at the vehicle-mounted end is obtained, lost-article detection is performed on the monitoring image; when it is determined that no trip order currently exists and a lost article is present in the monitoring image, an alarm indicating the existence of the lost article is sent;
specifically, a dedicated lost-article detection model is used to detect articles left in the vehicle. The lost-article detection model is obtained by training with in-vehicle images labelled for lost articles as input and whether a lost article exists in the vehicle as the output target. From the lost-article detection result output by the model and the current trip situation of the vehicle, it is determined whether the passenger needs to be reminded of a lost article through the driver or the ride-hailing APP.
As an optional implementation, the performing local face detection on the target detection region and extracting a local face probability includes:
inputting a target detection area into a local face detection model to obtain a local face probability, wherein the local face detection model is obtained by network model training, taking local face images labelled for whether a local face exists as input and the output local face probability as the target; the network model comprises a multi-stage classifier;
the local face detection model mainly collects local face images in the ride-hailing scene in which part of the face is occluded, labels whether a local face exists in each image, inputs the labelled local face images into the multi-stage classifier, and trains the network model with the output labelled local face probability as the target;
a target detection area is input into the local face detection model, and the multiple classifiers output several score values for the area; each score value is the probability that a local face exists in the target detection area under that classifier's detection. After the score values of the classifiers are obtained, they are smoothed over the area and normalized to obtain the local face probability determined by the local face detection model. Specifically, the formulas by which the local face detection model determines the local face probability are:
Score(r, c) = (1/N) · Σ_{i=1}^{N} Sign(F_i(r, c))

Sign(F_i(r, c)) = F_i(r, c), if F_i(r, c) > TH_i; otherwise 0

P_local = Norm( Σ_{r=1}^{Rows} Σ_{c=1}^{Cols} Score(r, c) )

where Score(r, c) is the score value obtained at a position (r, c), which may be any pixel point in the target detection area; N is the number of stages of the multi-stage classifier; Sign(F_i) is the probability value of a local face existing at the position; TH_i is the threshold of the i-th stage classifier; Rows and Cols are the height and width of the target detection region; and Norm is a normalization function. First, the probability value of a local face at position (r, c) is computed by the i-th stage classifier; when that value is greater than TH_i, it is brought into Score(r, c). The score value of position (r, c) is then calculated by averaging over the number of classifiers. Finally, the score values of all position points in the target detection area are normalized and smoothed to obtain the probability that a local face exists in the area; thus, inputting the target detection area into the local face detection model yields the normalized local face probability of the target detection area.
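The Score/Norm fusion described above can be sketched as follows, assuming each cascade stage yields a per-pixel probability map for the region (the array shapes and the final averaging detail are assumptions):

```python
import numpy as np

def local_face_probability(stage_probs, stage_thresholds):
    """Fuse per-pixel probability maps from an N-stage cascade into one
    region-level local-face probability.

    stage_probs      -- list of N arrays, each (Rows, Cols): the per-stage
                        probability F_i(r, c) of a partial face at (r, c)
    stage_thresholds -- list of N per-stage thresholds TH_i
    """
    rows, cols = stage_probs[0].shape
    n_stages = len(stage_probs)
    score = np.zeros((rows, cols))
    for f_i, th_i in zip(stage_probs, stage_thresholds):
        # A stage contributes its probability only where it clears TH_i.
        score += np.where(f_i > th_i, f_i, 0.0)
    score /= n_stages  # average over the N classifier stages
    # Normalize the per-position scores into one probability for the region.
    return float(score.sum() / (rows * cols))
```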
As an optional implementation, performing human posture detection on the target detection region and determining a human posture probability includes:
the human body posture detection model mainly determines whether a target exists from the overall posture classification of the human body. Firstly, images with human postures and images without human postures in the ride-hailing scene are collected; the network model is obtained by training with posture images from the ride-hailing scene labelled for whether a human posture exists as input and the output human posture probability as the target. The network model includes an SVM classifier. The formulas by which the human body posture detection model determines the posture probability are as follows:
G_x(r, c) = I_Gray(r, c+1) − I_Gray(r, c−1)

G_y(r, c) = I_Gray(r+1, c) − I_Gray(r−1, c)

S_grad(r, c) = 1, if sqrt(G_x(r, c)² + G_y(r, c)²) > TH2; otherwise 0

I_Angle(r, c) = Quant(arctan(G_y(r, c) / G_x(r, c)))
firstly, a strong gradient feature map is extracted at each position (r, c) of the target detection region. In the formulas, I_Gray(r, c) is the grayscale image, and its gradient feature includes both the horizontal gradient G_x and the vertical gradient G_y. Because the edge information of the human body is distinct, the threshold TH2 filters out positions whose gradient information is weak, leaving the strong gradient feature of each position point; if a position has the strong gradient feature, the probability of a human posture there is high.
I_Angle(r, c) is the quantized angular feature at a position (r, c); Quant is a quantization function that divides the angle into 8 direction angles by quantized segmentation,

S_angle(r, c) = 1, if Count(I_Angle, r, c, radius r) > TH3; otherwise 0

that is, on the basis of I_Angle(r, c), a circular range of radius r around the position is scanned, and the Count function counts the number of times each of the 8 direction angles appears. Because human posture directions in a direction map generally have consistency, features without consistency are generally background features; TH3 screens out background and irrelevant information with weak direction information, leaving the strong angle feature of the position point. If a position has the strong angle feature, the probability of a human posture there is high. For example, in a region of the image with radius r = 3 centered at (r, c), the number of features appearing in each direction is counted; if some direction's count > TH3 (TH3 is typically set to 5), the position (r, c) has good directional consistency and a strong angle characteristic.
The human posture probability finally obtained is

P_pose = Norm( Σ_{r=1}^{Rows} Σ_{c=1}^{Cols} ( w0 · S_grad(r, c) + w1 · S_angle(r, c) ) )

with w0 and w1 as weights, typically initialized to w0 = 0.3 and w1 = 0.7, and S_grad and S_angle the strong gradient and strong angle indicators of each position point. The strong gradient feature and strong angle feature of each position point of the target detection area are combined to determine the human posture probability of that point, and the per-point probabilities are normalized to obtain the human posture probability of the whole target detection area.
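The gradient and angle-consistency computation described above might be sketched as follows (the gradient operator, the 45-degree bin width, the square neighbourhood used in place of a circle, and the default thresholds are all assumptions):

```python
import numpy as np

def body_pose_probability(gray, th2=30.0, th3=5, radius=3, w0=0.3, w1=0.7):
    """Region-level pose score from strong-gradient and angle-consistency cues.

    For each pixel: a strong-gradient flag (magnitude > TH2), and a
    strong-angle flag (its quantized 8-bin direction recurs more than TH3
    times in a neighbourhood of the given radius). The two flags are
    combined with weights w0/w1 and averaged into a region probability.
    """
    gray = gray.astype(np.float64)
    gx = np.gradient(gray, axis=1)        # horizontal gradient G_x
    gy = np.gradient(gray, axis=0)        # vertical gradient G_y
    mag = np.hypot(gx, gy)
    strong_grad = mag > th2

    # Quantize the gradient direction into 8 bins of 45 degrees each.
    angle = np.degrees(np.arctan2(gy, gx)) % 360.0
    bins = (angle // 45).astype(int)

    rows, cols = gray.shape
    strong_angle = np.zeros_like(strong_grad)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - radius), min(rows, r + radius + 1)
            c0, c1 = max(0, c - radius), min(cols, c + radius + 1)
            # Count how often this pixel's direction bin recurs nearby.
            same = np.sum(bins[r0:r1, c0:c1] == bins[r, c])
            strong_angle[r, c] = same > th3

    per_pixel = w0 * strong_grad + w1 * strong_angle
    return float(per_pixel.mean())     # normalized region probability
```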
As an optional implementation, the performing moving object detection on the object detection area and determining a moving object probability includes:
determining a foreground region of a target detection region by using a frame difference method, and inputting the foreground region into a moving target detection model to obtain a moving target probability, wherein the moving target detection model is obtained by network model training, taking moving target images labelled for whether a moving target exists as input and the output moving target probability as the target.
The frame difference method obtains the contour of a moving target by performing a difference operation on two consecutive frames of a video image sequence. Two adjacent frames are obtained; when abnormal target motion occurs in the target detection area, the two frames show an obvious difference, and the differing positions form the foreground region. The absolute value of the pixel-value difference at corresponding positions of the two images is computed over the foreground region to judge the moving target probability. The moving target probability mainly serves as auxiliary information for judging whether a target exists: its output further confirms the accuracy of the target-existence decision made from the local face probability and the human body posture probability.
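A minimal frame-difference sketch (the difference threshold is an assumption); the ratio of changed pixels stands in here for the moving target score that the trained model would produce:

```python
import numpy as np

def moving_target_foreground(prev_frame, curr_frame, diff_threshold=25):
    """Frame-difference foreground extraction on two grayscale frames.

    Returns the boolean foreground mask (pixels whose absolute difference
    exceeds the threshold) and the fraction of changed pixels, a simple
    proxy for the moving-target probability.
    """
    # Widen to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > diff_threshold
    return mask, float(mask.mean())
```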
As an optional implementation manner, the determining the number of targets in the second image region according to the local face probability, the human body posture probability and the moving target probability of each target detection region includes:
inputting the local face probability, the human body posture probability and the moving target probability of a target detection area into a target detection model to obtain the existing target probability, and determining that the target exists in the target detection area when the existing target probability is greater than a probability threshold, wherein the target detection model is obtained by taking the local face probability, the human body posture probability and the moving target probability of an image for marking whether the target exists as input and taking the output existing target probability as a target to carry out network model training, and the network model comprises a regressor; and determining the target number of the second image area according to the target number of each target detection area.
The target detection model is obtained by network model training, taking the local face probability, human body posture probability and moving target probability of images labelled for whether a target exists as input and the output target-existence probability as the target. The local face probability, human body posture probability and moving target probability corresponding to a labelled target detection area are input into the regressor to obtain the target-existence probability; when this probability is greater than a probability threshold, a target is determined to exist in that target detection area, and the number of targets in the second image area is determined from the number of targets in each target detection area.
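Since the trained regressor itself is not specified here, a simple stand-in fusion can illustrate the decision rule (the weights, bias and logistic squash are illustrative assumptions, not the patent's trained model):

```python
import math

def fused_target_probability(p_face, p_pose, p_motion,
                             weights=(0.5, 0.3, 0.2), bias=0.0):
    """Weighted combination of the three cues squashed to (0, 1)."""
    z = (weights[0] * p_face + weights[1] * p_pose
         + weights[2] * p_motion + bias)
    # Logistic squash centered at 0.5 so a mid-range score maps to 0.5.
    return 1.0 / (1.0 + math.exp(-4.0 * (z - 0.5)))

def count_targets(region_probs, prob_threshold=0.5):
    """Count occupied detection areas in the second image region.

    `region_probs` is a list of (p_face, p_pose, p_motion) triples,
    one per target detection area.
    """
    return sum(1 for p in region_probs
               if fused_target_probability(*p) > prob_threshold)
```

In the patent's scheme the weights would instead be learned by training the regressor on labelled in-vehicle images.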
As an optional implementation manner, since a detection target in the vehicle may appear in multiple target detection areas of the second image area at the same time, or in both the first image area and the second image area, a target screening algorithm may be used to detect whether the same target appears in at least two areas and, if so, remove the repeatedly counted target. Alternatively, local face detection, human body posture detection and moving target detection may be performed directly on the whole monitoring image to obtain the number of targets in the entire image, which prevents errors caused by one target spanning multiple areas.
As an optional implementation manner, in the embodiment of the present invention, in addition to detecting the number of targets in the vehicle, face detection may be performed on the driver in the first image area before the vehicle is started, ensuring that only the vehicle's designated driver can drive the vehicle and take orders; face detection may also be performed on the driver in the first image area while the vehicle is being driven. Specifically, while the vehicle is running, an image containing the driver is acquired in real time and analyzed; when it is determined that the driver exhibits dangerous driving behavior, alarm information is sent to the background equipment. Dangerous driving behaviors include one or more of the following: smoking, making phone calls, fatigue driving, drunk driving, and the like. It may also be determined from the first image area whether other passengers exhibit behavior that interferes with the driver's driving (e.g., quarrelling with the driver or obstructing the driver's view); if so, alarm information is sent to the background equipment.
The embodiment of the invention provides a complete flow chart of a method for monitoring the driving behavior of a vehicle in real time, which comprises the following steps as shown in fig. 4:
step S401, acquiring a monitoring image shot by a built-in camera at a vehicle-mounted end, and determining a first image area without a shielding object and a second image area with the shielding object in the monitoring image according to area division configuration information;
step S402, carrying out global face detection on the first image area, and determining the target number of the first image area;
step S403, dividing a second image area into a plurality of target detection areas, performing local face detection on the target detection areas and determining local face probability, performing human posture detection on the target detection areas and determining human posture probability, and performing moving target detection on the target detection areas and determining moving target probability;
step S404, determining the sum of the target quantity, executing step S405 when the sum of the target quantity is larger than the maximum number of accommodated passengers, executing step S406 when the sum of the target quantity is larger than 0 and it is determined that no travel order currently exists, and executing step S407 when the sum of the target quantity is 0 and it is determined that a travel order currently exists;
step S405, sending a vehicle overload alarm to the background equipment;
step S406, sending an illegal vehicle use alarm to the background equipment;
and step S407, sending a vehicle list brushing alarm to the background equipment.
As shown in fig. 5, the device 500 may vary considerably in configuration or performance, and may include one or more processors (CPU) 501, a memory 502, and one or more storage media 503 (e.g., one or more mass storage devices) for storing applications 504 or data 506. The memory 502 and the storage medium 503 may be transient or persistent storage. The program stored in the storage medium 503 may include one or more modules (not shown), and each module may include a series of instruction operations for the information processing apparatus. Further, the processor 501 may be configured to communicate with the storage medium 503 to execute the series of instruction operations in the storage medium 503 on the device 500.
The device 500 may also include one or more power supplies 509, one or more wired or wireless network interfaces 507, one or more input-output interfaces 508, and/or one or more operating systems 505, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc.
The device is used for realizing the following method:
acquiring a monitoring image shot by a built-in camera at a vehicle-mounted end, and determining a first image area without a shielding object and a second image area with the shielding object in the monitoring image according to area division configuration information;
carrying out global face detection on the first image area, and determining the target number of the first image area;
dividing a second image area into a plurality of target detection areas, carrying out local face detection on the target detection areas and determining local face probability, carrying out human body posture detection on the target detection areas and determining human body posture probability, and carrying out moving target detection on the target detection areas and determining moving target probability;
judging whether a target exists in the target detection area according to the local face probability, the human body posture probability and the moving target probability of each target detection area so as to determine the target number of the second image area;
and monitoring the vehicle driving behavior according to the sum of the target number of the first image area and the target number of the second image area.
Optionally, the performing, by the processor, local face detection on the target detection region and determining a local face probability includes:
inputting the target detection area into a local face detection model to obtain a local face probability, wherein the local face detection model is obtained by taking a local face image marked whether a local face exists as input and taking the output local face probability as a target to carry out network model training, and the network model comprises a multi-stage classifier.
Optionally, the processor performs human posture detection on the target detection region and determines a human posture probability, including:
inputting the target detection area into a human body posture detection model to obtain a human body posture probability, wherein the human body posture detection model is obtained by taking a human body posture image marked whether a human body posture exists as input and taking the output human body posture probability as a target to carry out network model training, and the network model comprises an SVM classifier.
Optionally, the processor performs moving object detection on the object detection area and determines a moving object probability, including:
determining a foreground region of a target detection region by using a frame difference method, inputting the foreground region into a moving target detection model to determine a moving target probability, wherein the moving target detection model is obtained by taking a moving target image marked whether a moving target exists as input and taking an output moving target probability as a target to carry out network model training.
Optionally, the determining, by the processor, the number of targets in the second image region according to the local face probability, the human posture probability, and the moving target probability of each target detection region includes:
inputting the local face probability, the human body posture probability and the moving target probability of a target detection area into a target detection model to obtain the existing target probability, and determining that the target exists in the target detection area when the existing target probability is greater than a probability threshold, wherein the target detection model is obtained by taking the local face probability, the human body posture probability and the moving target probability of an image for marking whether the target exists as input and taking the output existing target probability as a target to carry out network model training, and the network model comprises a regressor;
and determining the target number of the second image area according to the target number of each target detection area.
Optionally, the processor is further configured to:
and detecting the lost articles in the monitoring image, and sending an alarm of the existence of the lost articles when the current travel order is determined not to exist and the lost articles exist in the monitoring image.
Optionally, the processor monitors the vehicle driving behavior according to the sum of the target numbers of the first image area and the second image area, and comprises:
when the sum of the target number is larger than the maximum number of accommodated passengers, sending a vehicle overload alarm; or
When the sum of the target quantity is larger than 0 and no travel order is determined at present, sending an alarm for illegal use of the vehicle; or
And when the sum of the target quantity is 0 and the current travel order exists, sending a vehicle list brushing alarm.
The embodiment of the invention provides a vehicle driving behavior real-time monitoring device, which comprises the following modules as shown in fig. 6:
the monitoring image acquisition module 601 is used for acquiring a monitoring image shot by a built-in camera at the vehicle-mounted end, and determining a first image area without a shielding object and a second image area with the shielding object in the monitoring image according to area division configuration information;
a first image area target number determining module 602, configured to perform global face detection on the first image area, and determine a target number of the first image area;
a second image region detection module 603, configured to divide a second image region into multiple target detection regions, perform local face detection on the target detection regions and determine local face probabilities, perform human body posture detection on the target detection regions and determine human body posture probabilities, perform moving target detection on the target detection regions and determine moving target probabilities;
a second image region target number determining module 604, configured to determine whether a target exists in the target detection region according to the local face probability, the human body posture probability, and the moving target probability of each target detection region, so as to determine the target number of the second image region;
a vehicle behavior determination module 605, configured to monitor a vehicle driving behavior according to a sum of the target numbers of the first image area and the second image area.
Optionally, the second image region detecting module 603 is configured to perform local face detection on the target detection region and determine a local face probability, and includes:
inputting the target detection area into a local face detection model to obtain a local face probability, wherein the local face detection model is obtained by taking a local face image marked whether a local face exists as input and taking the output local face probability as a target to carry out network model training, and the network model comprises a multi-stage classifier.
Optionally, the second image region detecting module 603 is configured to perform human posture detection on the target detection region and determine a human posture probability, and includes:
inputting the target detection area into a human body posture detection model to obtain a human body posture probability, wherein the human body posture detection model is obtained by taking a human body posture image marked whether a human body posture exists as input and taking the output human body posture probability as a target to carry out network model training, and the network model comprises an SVM classifier.
Optionally, the second image area detecting module 603 is configured to perform moving object detection on the object detection area and determine a moving object probability, and includes:
determining a foreground region of a target detection region by using a frame difference method, inputting the foreground region into a moving target detection model to determine a moving target probability, wherein the moving target detection model is obtained by taking a moving target image marked whether a moving target exists as input and taking an output moving target probability as a target to carry out network model training.
Optionally, the second image region target number determining module 604 is configured to determine the target number of the second image region according to the local face probability, the human body posture probability and the moving target probability of each target detection region, and includes:
inputting the local face probability, the human body posture probability and the moving target probability of a target detection area into a target detection model to obtain the existing target probability, and determining that the target exists in the target detection area when the existing target probability is greater than a probability threshold, wherein the target detection model is obtained by taking the local face probability, the human body posture probability and the moving target probability of an image for marking whether the target exists as input and taking the output existing target probability as a target to carry out network model training, and the network model comprises a regressor;
and determining the target number of the second image area according to the target number of each target detection area.
Optionally, the lost article detection module 606 is further configured to:
and detecting the lost articles in the monitoring image, and sending an alarm of the existence of the lost articles when the current travel order is determined not to exist and the lost articles exist in the monitoring image.
Optionally, the vehicle behavior determination module 605 is configured to monitor the vehicle driving behavior according to a sum of the target numbers of the first image area and the second image area, and includes:
when the sum of the target number is larger than the maximum number of accommodated passengers, sending a vehicle overload alarm; or
When the sum of the target quantity is larger than 0 and no travel order is determined at present, sending an alarm for illegal use of the vehicle; or
And when the sum of the target quantity is 0 and the current travel order exists, sending a vehicle list brushing alarm.
An embodiment of the invention provides a computer medium. Computer instructions are stored in the computer-readable storage medium, and when the computer instructions are executed by a processor, they implement the vehicle driving behavior real-time monitoring method according to any of the methods provided in embodiment 1 of the invention.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A real-time monitoring method for vehicle driving behaviors is characterized by comprising the following steps:
acquiring a monitoring image shot by a built-in camera at a vehicle-mounted end, and determining a first image area without a shielding object and a second image area with the shielding object in the monitoring image according to area division configuration information;
carrying out global face detection on the first image area, and determining the target number of the first image area;
dividing the second image area into a plurality of target detection areas, carrying out local face detection on each target detection area to determine a local face probability, carrying out human body posture detection on each target detection area to determine a human body posture probability, and carrying out moving target detection on each target detection area to determine a moving target probability;
judging whether a target exists in the target detection area according to the local face probability, the human body posture probability and the moving target probability of each target detection area so as to determine the target number of the second image area;
and monitoring the vehicle driving behavior according to the sum of the target number of the first image area and the target number of the second image area.
2. The method of claim 1, wherein performing local face detection on the target detection region and determining local face probabilities comprises:
inputting the target detection area into a local face detection model to obtain the local face probability, wherein the local face detection model is obtained by training a network model that takes local face images labeled as to whether a local face exists as input and the output local face probability as the training target, and the network model comprises a multi-stage classifier.
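The multi-stage classifier named in claim 2 is commonly an early-rejection cascade: each stage scores the region and cheap stages discard clear negatives before later stages run. A minimal sketch of that control flow, with placeholder stage functions rather than the patent's trained model:

```python
def cascade_classify(region_features, stages, reject_thresholds):
    """Run a cascade of stage scorers over one target detection area.

    `stages` is a list of scoring functions and `reject_thresholds` the
    per-stage cutoffs; a region must pass every stage to be accepted.
    Returns 0.0 on early rejection, otherwise the final stage's score
    as the local face probability. Stage functions here are illustrative.
    """
    score = 0.0
    for stage, threshold in zip(stages, reject_thresholds):
        score = stage(region_features)
        if score < threshold:
            return 0.0           # rejected early: no partial face in this area
    return score                 # passed all stages
```

The early exits are what make per-region detection cheap enough to run on many target detection areas per frame.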
3. The method of claim 1, wherein performing human pose detection on the target detection region and determining human pose probabilities comprises:
inputting the target detection area into a human body posture detection model to obtain the human body posture probability, wherein the human body posture detection model is obtained by training a network model that takes human body posture images labeled as to whether a human body posture exists as input and the output human body posture probability as the training target, and the network model comprises an SVM classifier.
4. The method of claim 1, wherein performing moving object detection on the object detection region and determining a moving object probability comprises:
determining a foreground region of the target detection area by using a frame difference method, and inputting the foreground region into a moving target detection model to determine the moving target probability, wherein the moving target detection model is obtained by training a network model that takes moving target images labeled as to whether a moving target exists as input and the output moving target probability as the training target.
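The frame difference method of claim 4 marks as foreground those pixels whose intensity changes between consecutive frames by more than a threshold. A minimal sketch on grayscale frames; the threshold value is illustrative:

```python
import numpy as np

def frame_difference_foreground(prev_frame, curr_frame, diff_threshold=25):
    """Per-pixel frame differencing for foreground extraction.

    Both frames are uint8 grayscale arrays of equal shape; the cast to
    int16 avoids unsigned wrap-around in the subtraction. Returns a
    binary mask where 1 marks a moving (foreground) pixel.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > diff_threshold).astype(np.uint8)
```

In practice the mask's connected foreground blobs, not individual pixels, would be cropped and passed to the moving target detection model.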
5. The method of claim 1, wherein determining the number of targets in the second image region according to the local face probability, the human pose probability and the moving target probability of each target detection region comprises:
inputting the local face probability, the human body posture probability and the moving target probability of a target detection area into a target detection model to obtain a target presence probability, and determining that a target exists in the target detection area when the target presence probability is greater than a probability threshold, wherein the target detection model is obtained by training a network model that takes the local face probabilities, human body posture probabilities and moving target probabilities of images labeled as to whether a target exists as input and the output target presence probability as the training target, and the network model comprises a regressor;
and determining the target number of the second image area according to the target number of each target detection area.
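Claim 5's fusion step maps three cue probabilities to one presence probability via a trained regressor, then thresholds it and counts the occupied areas. The sketch below stands in a fixed linear combination for the regressor; the weights and threshold are illustrative, not the patent's trained values:

```python
def fuse_probabilities(face_p, pose_p, motion_p,
                       weights=(0.5, 0.3, 0.2), prob_threshold=0.5):
    """Fuse the three per-area detection probabilities into a single
    target presence probability and threshold it. The linear weights
    are a placeholder for the trained regressor."""
    presence = (weights[0] * face_p
                + weights[1] * pose_p
                + weights[2] * motion_p)
    return presence, presence > prob_threshold

def count_targets(regions):
    """Count occupied target detection areas in the second image area.

    `regions` is a list of (face_p, pose_p, motion_p) tuples, one per
    target detection area."""
    return sum(1 for r in regions if fuse_probabilities(*r)[1])
```

The returned count is the second image area's target number, to be summed with the first image area's count for the monitoring rules of claim 7.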
6. The method of claim 1, further comprising:
detecting lost articles in the monitoring image, and sending a lost article alarm when it is determined that no travel order currently exists and a lost article is present in the monitoring image.
7. The method of claim 1, wherein monitoring vehicle driving behavior based on a sum of the target numbers of the first image region and the second image region comprises:
when the sum of the target numbers is greater than the maximum passenger capacity, sending a vehicle overload alarm; or
when the sum of the target numbers is greater than 0 and it is determined that no travel order currently exists, sending an unauthorized vehicle use alarm; or
when the sum of the target numbers is 0 and a travel order currently exists, sending a fake order (order brushing) alarm.
8. A real-time monitoring device for vehicle driving behavior is characterized by comprising a memory for storing instructions;
a processor for reading the instructions in the memory to implement a method for monitoring the driving behavior of a vehicle according to any one of claims 1 to 7 in real time.
9. A vehicle driving behavior real-time monitoring device is characterized by comprising the following modules:
the monitoring image acquisition module is used for acquiring a monitoring image shot by a built-in camera at the vehicle-mounted end and determining a first image area without a shielding object and a second image area with the shielding object in the monitoring image according to area division configuration information;
the first image area target number determining module is used for carrying out global face detection on the first image area and determining the target number of the first image area;
the second image area detection module is used for dividing a second image area into a plurality of target detection areas, carrying out local face detection on the target detection areas and determining local face probability, carrying out human body posture detection on the target detection areas and determining human body posture probability, and carrying out moving target detection on the target detection areas and determining moving target probability;
the second image area target number determining module is used for judging whether a target exists in the target detection area according to the local face probability, the human body posture probability and the moving target probability of each target detection area so as to determine the target number of the second image area;
and the vehicle behavior determining module is used for monitoring the vehicle driving behavior according to the sum of the target number of the first image area and the target number of the second image area.
10. A computer-readable storage medium, characterized in that the storage medium stores computer instructions which, when executed by a processor, implement the vehicle driving behavior real-time monitoring method according to any one of claims 1 to 7.
CN202010201673.9A 2020-03-20 2020-03-20 Vehicle driving behavior real-time monitoring method and device Active CN111460938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010201673.9A CN111460938B (en) 2020-03-20 2020-03-20 Vehicle driving behavior real-time monitoring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010201673.9A CN111460938B (en) 2020-03-20 2020-03-20 Vehicle driving behavior real-time monitoring method and device

Publications (2)

Publication Number Publication Date
CN111460938A true CN111460938A (en) 2020-07-28
CN111460938B CN111460938B (en) 2022-04-08

Family

ID=71680808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010201673.9A Active CN111460938B (en) 2020-03-20 2020-03-20 Vehicle driving behavior real-time monitoring method and device

Country Status (1)

Country Link
CN (1) CN111460938B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287808A (en) * 2020-10-27 2021-01-29 江苏云从曦和人工智能有限公司 Motion trajectory analysis warning method, device, system and storage medium
CN112926414A (en) * 2021-02-05 2021-06-08 北京嘀嘀无限科技发展有限公司 Image processing method and device and electronic equipment
CN112926881A (en) * 2021-03-29 2021-06-08 广州宸祺出行科技有限公司 Detection method and system for preventing driver from swiping bill based on vehicle-mounted system
CN113158799A (en) * 2021-03-18 2021-07-23 精英数智科技股份有限公司 Open pit coal mine irregular driving detection method, device and system
CN113597072A (en) * 2021-08-23 2021-11-02 成都世纪光合作用科技有限公司 Lamp control method and device and electronic equipment
CN114312580A (en) * 2021-12-31 2022-04-12 上海商汤临港智能科技有限公司 Method and device for determining seats of passengers in vehicle and vehicle control method and device
CN113158799B (en) * 2021-03-18 2024-04-26 精英数智科技股份有限公司 Method, device and system for detecting irregular driving of open pit coal mine

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709420A (en) * 2016-11-21 2017-05-24 厦门瑞为信息技术有限公司 Method for monitoring driving behaviors of driver of commercial vehicle
CN108986400A (en) * 2018-09-03 2018-12-11 深圳市尼欧科技有限公司 A kind of third party based on image procossing, which multiplies, drives safety automatic-alarming method
US20190137996A1 (en) * 2017-11-06 2019-05-09 Pony.ai, Inc. Coordinated control of self-driving vehicles under emergency situations
CN109800654A (en) * 2018-12-24 2019-05-24 百度在线网络技术(北京)有限公司 Vehicle-mounted camera detection processing method, apparatus and vehicle



Also Published As

Publication number Publication date
CN111460938B (en) 2022-04-08

Similar Documents

Publication Publication Date Title
CN111460938B (en) Vehicle driving behavior real-time monitoring method and device
CN110487562B (en) Driveway keeping capacity detection system and method for unmanned driving
CN107352032B (en) Method for monitoring people flow data and unmanned aerial vehicle
CN110223511A (en) A kind of automobile roadside is separated to stop intelligent monitoring method and system
CN111325872B (en) Driver driving abnormity detection method based on computer vision
US20160026865A1 (en) Vision-based system for dynamic weather detection
CN106541968B (en) The recognition methods of the subway carriage real-time prompt system of view-based access control model analysis
CN111429329B (en) Method and device for monitoring network car booking behavior
CN104778444A (en) Method for analyzing apparent characteristic of vehicle image in road scene
Nakashima et al. Passenger counter based on random forest regressor using drive recorder and sensors in buses
DE102019102195A1 (en) SYSTEMS AND METHOD FOR COLLISION DETECTION IN AUTONOMOUS VEHICLES
CN112347814A (en) Passenger flow estimation and display method, system and computer readable storage medium
CN110255318B (en) Method for detecting idle articles in elevator car based on image semantic segmentation
US11900701B2 (en) Left object detection device and left object detection method
CN111460920A (en) Target tracking and segmenting system for complex scene of airport
CN113034378A (en) Method for distinguishing electric automobile from fuel automobile
CN112700473B (en) Carriage congestion degree judging system based on image recognition
CN115600124A (en) Subway tunnel inspection system and inspection method
CN110188645B (en) Face detection method and device for vehicle-mounted scene, vehicle and storage medium
CN111241918B (en) Vehicle tracking prevention method and system based on face recognition
CN109770922B (en) Embedded fatigue detection system and method
CN111950499A (en) Method for detecting vehicle-mounted personnel statistical information
Chen et al. Automatic head detection for passenger flow analysis in bus surveillance videos
KR102492290B1 (en) Drone image analysis system based on deep learning for traffic measurement
CN113139488B (en) Method and device for training segmented neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant