CN113160575A - Traffic violation detection method and system for non-motor vehicles and drivers - Google Patents

Traffic violation detection method and system for non-motor vehicles and drivers

Info

Publication number
CN113160575A
CN113160575A (application CN202110275271.8A)
Authority
CN
China
Prior art keywords
vehicle
motor vehicle
driver
determining
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110275271.8A
Other languages
Chinese (zh)
Inventor
闫军
刘艳洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Super Vision Technology Co Ltd
Original Assignee
Super Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Super Vision Technology Co Ltd filed Critical Super Vision Technology Co Ltd
Priority to CN202110275271.8A priority Critical patent/CN113160575A/en
Publication of CN113160575A publication Critical patent/CN113160575A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/017: Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G 1/0175: Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Abstract

The invention discloses a traffic violation detection method for non-motor vehicles and drivers, and a system for executing the method. The method comprises: collecting a plurality of continuous video frames from the images to be detected; determining the driving area of the non-motor vehicle in the video frames, the vehicle information of the non-motor vehicle in that area and the biological characteristic information of the driver; determining the driving track of the non-motor vehicle and the identity information of the driver from this information; and further determining whether the non-motor vehicle and the driver have committed a violation. The invention uses artificial intelligence, big data and related technologies to analyze the collected high-mounted video images, can correctly detect and identify non-motor vehicles in road traffic surveillance video, and at the same time provides data support on violations to traffic control departments, thereby improving the supervision of non-motor vehicles, assisting in managing non-motor vehicle travel and ensuring traffic safety.

Description

Traffic violation detection method and system for non-motor vehicles and drivers
Technical Field
The invention relates to the field of computers, in particular to a method and a system for detecting traffic violation behaviors of non-motor vehicles and drivers.
Background
In recent years, with the growing use of electric bicycles, traffic violations by non-motor vehicles have become an urgent urban social problem. In urban traffic management, violations by non-motor vehicles lack effective control measures; because non-motor vehicles are numerous, heterogeneous and difficult to identify reliably, their violations are hard to curb, which disturbs the passage of motor vehicles and creates safety hazards for pedestrians.
Patent CN109884338A proposes a method, apparatus, device and storage medium for detecting wrong-way riding of shared electric bicycles: the current GPS data of the shared electric bicycle is acquired in real time to determine its travel track, the track is compared with preset map information, and whether the bicycle is travelling in the wrong direction is determined from the comparison result. Patent CN111243272A proposes a method for monitoring the traffic behavior of non-motor vehicles and a system for detecting violations, in which an active RFID tag preset with unique identity information corresponding to the non-motor vehicle is installed on the vehicle, and a detection area, a low-frequency exciter, an RFID base station, a camera, etc. are arranged in the area to be monitored. Patent CN108281002A proposes a method and system for detecting wrong-way riding of non-motor vehicles based on active RFID: an active RFID tag is installed on the non-motor vehicle and identifying information such as the owner and the license plate is registered in a server; the system comprises a wrong-way detection reader for reading the tag, a camera for recording the vehicle's movement on the road, and a remote server, connected to an SMS platform, that stores and analyzes the data; when the vehicle passes the detection device, the wrong-way analysis is completed and the result is uploaded to the server.
The latter two patents (CN111243272A and CN108281002A) are expensive to implement: attaching auxiliary hardware to every non-motor vehicle carries a high relative cost plus possible maintenance cost, which is a major obstacle to wide deployment. Patent CN109884338A lacks generality: it only addresses violations by some shared bicycles and cannot provide an effective detection method for all non-motor vehicles. Moreover, GPS positioning accuracy is limited, so identification may be unreliable when distances fall within a certain range.
In view of the above, a method for detecting illegal riding of non-motor vehicles is urgently needed, one that can assist in managing non-motor vehicle travel and ensure traffic safety. The present work analyzes non-motor vehicle traffic violations based on video recognition technology: high-definition video surveillance equipment on urban roads records the riding of non-motor vehicles, and violations are judged by combining the actual road conditions with artificial intelligence algorithms.
Disclosure of Invention
The invention aims to provide a method and a system for accurately detecting, analyzing and judging violations by non-motor vehicles and their drivers, so as to improve the degree and accuracy of supervision of non-motor vehicles, provide traffic control departments with data evidence of violations, and ensure traffic safety.
In order to achieve the above object, the present invention provides a method for detecting traffic violation of a non-motor vehicle and a driver, the method comprising: acquiring a plurality of continuous video frames in an image to be detected, which is acquired by video equipment;
detecting and identifying a plurality of video frames, and determining a driving area of the non-motor vehicle in the video frames;
extracting vehicle information of the non-motor vehicle and biological characteristic information of a driver in a non-motor vehicle driving area, and determining a driving track of the non-motor vehicle and identity information of the driver;
and determining, according to the driving track in the non-motor vehicle driving area and the identity information of the driver, whether the non-motor vehicle and the driver have committed a violation.
As a further improvement of the invention, detecting and identifying the plurality of video frames and determining the driving area of the non-motor vehicle in the video frames specifically comprises: enhancing the image brightness of the video frame to be detected to obtain an enhanced video frame image with high contrast and clarity; segmenting the lane lines in the enhanced video frame image by position and color, and determining the position of the lane lines; and determining the driving area of the non-motor vehicle according to the position of the lane lines in the enhanced video frame.
As a further improvement of the invention, detecting the plurality of video frames and determining the driving area of the non-motor vehicle in the video frames specifically comprises:
enhancing the image brightness of the video frame to be detected to obtain an enhanced video frame image with high contrast and clarity;
segmenting the lane lines in the enhanced video frame image by position and color, and determining the position of the lane lines;
and determining the driving area of the non-motor vehicle according to the color and position of the lane lines in the enhanced video frame.
As a further improvement of the invention, the method further comprises presetting a vehicle detection model before identifying the vehicle information in the non-motor vehicle driving area; the vehicle information is detected and identified by the vehicle detection model to determine the color and license plate number of the vehicle.
As a further improvement of the present invention, determining the driving trajectory of the non-motor vehicle based on the vehicle information in the driving area of the non-motor vehicle specifically includes:
the position coordinates of the vehicle in any one of the video frames are determined,
extracting the position coordinates of the vehicle in a plurality of video frames before and after the video frame,
and determining the motion track of the vehicle according to the coordinate change of the vehicle in the multiple video frames.
As a further improvement of the present invention, extracting the biometric information of the driver specifically includes detecting shoulder region information of the driver, determining head information from the shoulder region, confirming face information from the head information, and confirming identity information of the driver from the head, shoulder, and face information.
As a further improvement of the invention, before determining, from the driving track in the non-motor vehicle driving area and the identity information of the driver, whether the non-motor vehicle and the driver have committed a violation, the method further comprises determining the type of the non-motor vehicle.
As a further improvement of the present invention, determining an unlawful act based on the type of non-motor vehicle comprises: and if the non-motor vehicle is a bicycle, determining whether the driver has illegal behaviors according to the running track and the running direction of the vehicle, wherein the illegal behaviors comprise one or more of vehicle retrograde motion and line-pressing running.
As a further improvement of the present invention, if the non-motor vehicle is an electric vehicle, determining whether there is an illegal action of the non-motor vehicle according to a running track, a running direction and a running speed of the vehicle, wherein the illegal action includes one or more of vehicle overspeed, vehicle reverse running and line pressing running.
As a further improvement of the present invention, if the non-motor vehicle is an electric vehicle, the method further comprises detecting whether the driver wears a helmet and whether the helmet is worn correctly, and if the driver does not wear a helmet or the helmet wearing manner is wrong, determining that the driver is illegal.
As a further improvement of the invention, the method also comprises the steps of presetting a plurality of helmet detection models before detecting whether the driver helmet is worn correctly, and judging whether the helmet is worn correctly according to the output result of the helmet detection models.
As a further improvement of the invention, after judging whether the illegal behaviors exist in the non-motor vehicle and the driver, the method further comprises the step of uploading the illegal behaviors to a cloud platform.
The invention also discloses a traffic violation detection system for non-motor vehicles and drivers, which comprises: a collecting device, configured to obtain a plurality of continuous video frames from the image to be detected collected by the video equipment; an identification device, configured to detect and identify the plurality of video frames and determine the driving area of the non-motor vehicle in the video frames; an extraction device, configured to extract the vehicle information of the non-motor vehicle and the biological characteristic information of the driver in the non-motor vehicle driving area, and to determine the driving track of the non-motor vehicle and the identity information of the driver;
and a judging device, configured to determine, according to the driving track in the non-motor vehicle driving area and the identity information of the driver, whether the non-motor vehicle and the driver have committed a violation.
As a further improvement of the invention, the recognition device is further configured to enhance the image brightness of the video frame to be detected, and obtain an enhanced video frame image with high contrast and high definition;
dividing a lane line in the enhanced video frame image according to the color difference, and determining the position of the lane line;
and determining the driving area of the non-motor vehicle based on the location of the lane lines in the enhanced video frame.
As a further improvement of the present invention, the extracting device is further configured to extract a change state of the target position of the non-motor vehicle from the plurality of videos, and determine the motion trajectory of the non-motor vehicle.
The system further comprises a presetting device which is used for presetting one or more of a vehicle detection model, a helmet detection model, a face recognition model and/or a head and shoulder detection model.
As a further improvement of the present invention, the determining module is further configured to determine a shoulder area of the driver, determine head information according to the shoulder area information, determine face information according to the head information, and determine identity information of the driver according to the head, shoulder and face information.
As a further development of the invention, the recognition device is also used to determine the type of non-motor vehicle from the image to be measured.
As a further improvement of the present invention, the determination device is further configured to determine an illegal action according to the type of the non-motor vehicle: if the non-motor vehicle is a bicycle, whether the driver has committed an illegal action is determined from the driving track and driving direction of the vehicle, the illegal action including one or more of vehicle reverse running and line-pressing running.
As a further improvement of the present invention, if the non-motor vehicle is an electric vehicle, the determining device is further configured to determine whether there is an illegal act on the non-motor vehicle according to a running track, a running direction and a running speed of the vehicle, where the illegal act includes one or more of vehicle overspeed, vehicle reverse running and line pressing running.
As a further improvement of the present invention, if the non-motor vehicle is an electric vehicle, the identification device is further configured to detect whether the driver wears a helmet and whether the helmet is worn correctly, and if the driver does not wear a helmet or the helmet wearing manner is wrong, determine that the driver is illegal.
As a further improvement of the present invention, the system further comprises a sending device, and the sending device is configured to upload the illegal activity to a cloud platform.
Based on actual road scene information, the method and system for detecting traffic violations of non-motor vehicles and drivers provided by the invention analyze the collected high-mounted video images with artificial intelligence, big data and related technologies, fully extract the non-motor vehicle driving area, the driving track of the non-motor vehicle, and the driver's biological characteristics such as head, shoulders and face, and use this information to analyze and judge violations by the non-motor vehicle and the driver. Non-motor vehicles can thus be correctly detected and identified in road traffic surveillance video, and data support on violations is provided to traffic control departments, improving the supervision of non-motor vehicles, assisting in managing their travel and ensuring traffic safety.
Drawings
FIG. 1 is a schematic illustration of the method for detecting traffic violations of non-motor vehicles and drivers according to the present invention;
FIG. 2 is a schematic diagram of the enhancement network in the present invention;
FIG. 3 is a diagram of the architecture of the CenterNet network in the present invention.
Detailed Description
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Specifically, as shown in fig. 1, the invention discloses a method for detecting traffic violation of non-motor vehicles and drivers, comprising the following steps:
s1: acquiring a plurality of continuous video frames in an image to be detected, which is acquired by video equipment;
in the embodiment, the video equipment generally selects high-level video equipment arranged on two sides of a road, and the video acquisition equipment is mainly used for monitoring the normal operation of the urban road, and particularly can be used for road-side parking image acquisition, traffic violation behavior acquisition, security detection and the like; the information that the high-order video equipment can collect contains: lane lines in actual road scenes, real-time driving images of non-motor vehicles and video information; the method comprises the steps that images and video information of a driver and the passing conditions of pedestrians and other vehicles passing through the road section are acquired through acquired images, a plurality of continuous video frames in the images are acquired, and the continuous video frames are detected and identified to judge corresponding behaviors.
In order to determine an illegal activity, it is necessary to preliminarily determine a region where the illegal activity may exist, specifically:
s2: detecting and identifying a plurality of video frames, and determining a driving area of the non-motor vehicle in the video frames;
after a plurality of continuous video frames are collected, the video frames need to be identified, a driving area of a non-motor vehicle in the video frames is determined so as to further predict the traffic behavior of the area, and the driving area of the non-motor vehicle is determined according to the position and the color of a lane line, which specifically comprises the following steps:
s201: a RetinexNet network is used for carrying out image enhancement on a video frame image to be detected, so that the image contrast and definition are improved;
the RetinexNet network can be divided into a decomposition network, an enhancement network and a reconstruction network; specifically, the image enhancement of the image to be detected by using the RetinexNet network comprises the following steps: (1) decoupling the image by using a decomposition network Decom-Net, (2) activating the decoupled image by using a 5-layer convolutional neural network and a relu function to obtain a light map and a reflection map; (3) the method comprises the steps that a light map obtained in the front is enhanced through an enhanced network Enhance-Net, as shown in a diagram of an enhanced network structure, in the process, a 9-layer convolution neural network and relu are mainly used for activating a processed image, meanwhile, resize operation of a nearest difference value is carried out in the middle, the enhanced light map is multiplied by an original reflection map to obtain an enhanced result, and loss is weighted by using a reflection map gradient as a weight, so that detail textures and boundary information are not damaged while smooth constraint is guaranteed; (4) and finally, recovering the normal illumination image through a reconstruction network so as to solve the problem of inaccurate detection caused by insufficient light in the road environment and at night.
RetinexNet optimizes the model mainly through the constraint relationship among four components: the reflectance component $R_{normal}$ and illumination component $I_{normal}$ of the normal-light image, and the reflectance component $R_{low}$ and illumination component $I_{low}$ of the low-light image. The constraint relationship is embodied in the objective function, and the loss function of the network contains three terms: the reconstruction loss $L_{recon}$, the invariable-reflectance loss $L_{ir}$ and the illumination smoothness loss $L_{is}$. The overall loss function is defined as formula (1):

$$L = L_{recon} + \lambda_{ir} L_{ir} + \lambda_{is} L_{is} \qquad (1)$$

where $\lambda_{ir}$ and $\lambda_{is}$ are the coefficients balancing the invariable-reflectance loss and the illumination smoothness loss, respectively.

The reconstruction loss, corresponding to the decomposition network, requires that the reflectance and illumination components decomposed by the model reconstruct the corresponding original images as closely as possible, and is defined as formula (2):

$$L_{recon} = \sum_{i \in \{low, normal\}} \; \sum_{j \in \{low, normal\}} \lambda_{ij} \, \big\| R_i \circ I_j - S_j \big\|_1 \qquad (2)$$

According to Retinex image decomposition theory, the reflectance component R is independent of illumination, so the reflectance components of a paired low/normal-illumination image should be as consistent as possible; the corresponding loss of the enhancement network[11] is defined as formula (3):

$$L_{ir} = \big\| R_{low} - R_{normal} \big\|_1 \qquad (3)$$

(Note: the two reconstruction losses differ in that the latter is weighted by the gradient map of $R_{low}$.)

The illumination smoothness loss $L_{is}$ is an improvement on the Total Variation loss and is defined as formula (4):

$$L_{is} = \sum_{i \in \{low, normal\}} \big\| \nabla I_i \circ \exp(-\lambda_g \nabla R_i) \big\|_1 \qquad (4)$$

where $\nabla$ denotes the gradient operation (including the horizontal and vertical gradients $\nabla_h$ and $\nabla_v$), and $\lambda_g$ is the coefficient balancing structural awareness; the factor $\exp(-\lambda_g \nabla R_i)$ relaxes the smoothness constraint in regions where the image gradient is steep.

In the enhancement network, the BM3D algorithm is used to suppress the noise amplified in $R_{low}$, and an illumination-related strategy is introduced to adjust $R_{low}$. Meanwhile, an encoder-decoder architecture with multi-scale connections is adopted, so that the network can capture context information related to the illumination distribution over a large range and improve its adaptive adjustment of $I_{low}$.

The adjusted $R_{low}$ and $I_{low}$ are finally multiplied to obtain the corresponding normal-illumination image.
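To make formulas (1)-(4) concrete, here is a minimal NumPy sketch of the three loss terms; the weighting coefficients are illustrative assumptions, the per-pair weights of formula (2) are folded into 1 for simplicity, and R/I denote reflectance and illumination maps already produced by the decomposition network.

```python
import numpy as np

def grad(x):
    """Horizontal and vertical finite-difference gradient magnitudes of a 2-D map."""
    gh = np.abs(np.diff(x, axis=1, append=x[:, -1:]))
    gv = np.abs(np.diff(x, axis=0, append=x[-1:, :]))
    return gh, gv

def retinex_losses(R_low, I_low, S_low, R_norm, I_norm, S_norm,
                   lam_ir=0.001, lam_is=0.1, lam_g=10.0):  # coefficients are assumed values
    # (2) reconstruction loss: every R_i * I_j should rebuild the source image S_j
    recon = 0.0
    for R in (R_low, R_norm):
        for I, S in ((I_low, S_low), (I_norm, S_norm)):
            recon += np.mean(np.abs(R * I - S))
    # (3) invariable-reflectance loss: paired reflectance maps should agree
    ir = np.mean(np.abs(R_low - R_norm))
    # (4) illumination smoothness loss, relaxed where the reflectance gradient is steep
    is_loss = 0.0
    for R, I in ((R_low, I_low), (R_norm, I_norm)):
        for gI, gR in zip(grad(I), grad(R)):
            is_loss += np.mean(gI * np.exp(-lam_g * gR))
    # (1) overall loss
    return recon + lam_ir * ir + lam_is * is_loss
```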
S202: lane line detection and segmentation are performed on the enhanced image based on the instance segmentation network DeepLabv3, and the position of the lane lines is determined;
Specifically, the enhanced image is input into the segmentation network DeepLabv3; the encoder-decoder based instance segmentation network DeepLabv3 is then used for lane line detection and instance segmentation, so that lane lines can be segmented effectively in a variety of complex environments.
S203: and analyzing the current road scene according to the position of the divided lane line, and determining the driving area of the non-motor vehicle.
S3: extracting vehicle information of a non-motor vehicle and biological characteristic information of a driver in a non-motor vehicle driving area, and determining a driving track of the non-motor vehicle and identity information of the driver, wherein the vehicle information comprises information such as vehicle color and license plate number, and the identity information of the driver comprises one or more of information of the shoulder, the head and the face of the driver;
and further determining the running track of the non-motor vehicle according to the vehicle information, firstly determining the position coordinates of the vehicle in any video frame, extracting the position coordinates of the vehicle in a plurality of video frames before and after the video frame, and determining the motion track of the vehicle according to the coordinate change of the vehicle in a plurality of video frames.
In this embodiment, the vehicle detection model is an improved non-motor vehicle detection and recognition model based on YOLOv3: a new feature fusion structure reduces the missed-detection rate for non-motor vehicles, and GIoU loss is used to improve positioning accuracy.
YOLOv3 mainly uses multi-scale features for object detection. In this embodiment, GIoU is used in place of the original logistic and softmax objectives; for a 416 x 416 x 3 input image, the darknet backbone produces predictions at three different scales, each with N channels containing the prediction information. The final network output for each prediction is 10-dimensional: 4 (coordinate values), 1 (confidence score) and C (number of categories, 5 in this embodiment).
In this embodiment, YOLOv3 performs object detection on feature maps at three different scales, which allows finer-grained features to be detected, such as vehicles, non-motor vehicles, drivers, faces and head-shoulder regions; the three scales are 1/32, 1/16 and 1/8.
After layer 79, several convolution operations yield the 1/32 (13 x 13) prediction. The downsampling factor is high and the receptive field of the feature map is large, so it suits large objects in the image and is used here to detect vehicle targets.
That result is then upsampled and concatenated with the layer-61 output, and several convolution operations yield the 1/16 prediction; it has a medium-scale receptive field suitable for medium-sized objects and is used in this embodiment to detect the non-motor vehicle and the driver.
The layer-91 result is upsampled and concatenated with the layer-36 output, and after several convolution operations the 1/8 prediction is obtained; it has the smallest receptive field, is suitable for small objects, and is used in this embodiment to detect the driver's face, head-shoulder region and similar details.
When YOLOv3 predicts bounding boxes, the GIoU loss (Generalized Intersection over Union loss) is used to improve positioning accuracy. It is a box-regression loss computed from IoU; specifically, for a predicted box A and a ground-truth box B:

$$GIoU = IoU(A, B) - \frac{|C| - |A \cup B|}{|C|}, \qquad L_{GIoU} = 1 - GIoU$$

where C is the smallest enclosing rectangle of A and B: the GIoU value is obtained by subtracting from the IoU of boxes A and B the ratio of the area of C not covered by A ∪ B to the area of C. Experimental results show that the improved model obtains detection results superior to YOLOv3 on a real complex-scene non-motor vehicle data set, raising the mean average precision (mAP) of detection by 3.6%; that is, the accuracy of detecting the vehicle and the driver's biological characteristics is improved on the basis of the improved model.
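The GIoU loss above can be computed as in the following sketch for axis-aligned boxes given as (x1, y1, x2, y2):

```python
def giou_loss(a, b):
    """GIoU loss between boxes a and b, each (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    # intersection and union
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = area_a + area_b - inter
    iou = inter / union
    # C: smallest enclosing rectangle of a and b
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    area_c = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (area_c - union) / area_c
    return 1.0 - giou                 # loss shrinks as the boxes overlap better
```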
The biological characteristic information of the driver extracted in the above steps comprises head information and face information; the identity information of the driver is confirmed from the head, shoulder and face information.
In the present invention, the head-shoulder and face detection algorithms include, but are not limited to, the YOLOv3 and CenterNet models.
The core idea of the CenterNet network is to treat the target as a point, namely the centre point of the target bounding box; object detection is thus converted into a keypoint estimation problem, and other target attributes such as size, 3D position, orientation and pose are regressed with respect to the estimated centre point. That is, the extent of the target is determined by estimating its upper-left and lower-right corner points.
The CenterNet network has the following characteristics: (1) the anchors assigned by CenterNet are placed only at locations, have no size, and need no manually set threshold to separate foreground from background; (2) each target has only one positive anchor, so no NMS is needed afterwards; keypoints are taken as local peaks on the feature map; (3) CenterNet uses a higher-resolution output feature map (1/4 of the original image) than conventional detectors, so multi-scale feature maps such as FPN are not required. Four backbone networks are used: ResNet-18, ResNet-101, DLA-34 and Hourglass-104. In the experiments, deformable convolution layers were used to optimize ResNet and DLA-34, while the Hourglass-104 network was left unchanged.
FIG. 3 shows the structure of the CenterNet network, whose loss function consists of three parts: (1) classification loss, (2) centre offset loss and (3) size loss.
Given an input image $I \in \mathbb{R}^{W \times H \times 3}$, the goal is to generate a keypoint heatmap $\hat{Y} \in [0, 1]^{\frac{W}{R} \times \frac{H}{R} \times C}$, where a value of 1 corresponds to a detected keypoint and 0 to background; R is the output stride, i.e. the downsampling factor (4 is used in the experiments), and C is the total number of categories. Different fully convolutional encoder-decoder networks were used for prediction in the experiments.

For each ground-truth keypoint p of class c, its low-resolution equivalent $\tilde{p} = \lfloor p / R \rfloor$ is computed, and the ground-truth heatmap Y is obtained by applying a Gaussian to each keypoint:

$$Y_{xyc} = \exp\!\left(-\frac{(x - \tilde{p}_x)^2 + (y - \tilde{p}_y)^2}{2\sigma_p^2}\right)$$

where $\sigma_p$ is a standard deviation that adapts to the object size.
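A minimal sketch of producing the ground-truth heatmap described above; $\sigma_p$ is passed in directly here, whereas in CenterNet it is derived from the object size.

```python
import numpy as np

def splat_keypoint(heatmap, cx, cy, sigma):
    """Draw a Gaussian peak for one ground-truth keypoint onto a (H, W) heatmap channel.
    (cx, cy) are the low-resolution keypoint coordinates, already divided by the stride R."""
    h, w = heatmap.shape
    ys, xs = np.ogrid[:h, :w]
    gauss = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    np.maximum(heatmap, gauss, out=heatmap)   # keep the stronger response where peaks overlap
    return heatmap
```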
focal los is used to reduce the penalty of pixel-level logistic regression:
(1) loss of classification[13]
Figure RE-GDA0003080041010000075
Wherein alpha and beta are the hyper-parameters of focal loss, and are respectively set to be 2 and 4 in the experiment; n is the number of keypoints in an image.
(2) Centre offset loss:

Since the image is downsampled by the convolutions, the ground-truth keypoints carry a quantization bias; a local offset prediction $\hat{O} \in \mathbb{R}^{\frac{W}{R} \times \frac{H}{R} \times 2}$ is therefore added for each keypoint (the same offset prediction is shared by all classes). The offset is trained with an L1 loss, supervised only at the keypoint positions $\tilde{p}$ and ignored elsewhere:

$$L_{off} = \frac{1}{N} \sum_{p} \left| \hat{O}_{\tilde{p}} - \left(\frac{p}{R} - \tilde{p}\right) \right|$$
(3) Size loss:

Let $(x_1^{(k)}, y_1^{(k)}, x_2^{(k)}, y_2^{(k)})$ be the bounding box of object k, so that its centre is $p_k = \left(\frac{x_1^{(k)} + x_2^{(k)}}{2}, \frac{y_1^{(k)} + y_2^{(k)}}{2}\right)$. The size of the target, $s_k = (x_2^{(k)} - x_1^{(k)},\, y_2^{(k)} - y_1^{(k)})$, is regressed at the centre point with an L1 loss:

$$L_{size} = \frac{1}{N} \sum_{k=1}^{N} \left| \hat{S}_{p_k} - s_k \right|$$

The scale is not normalized; the raw pixel coordinates are used directly. To adjust the influence of this term, the loss is multiplied by a coefficient, and the overall training loss function[8] is:

$$L_{det} = L_k + \lambda_{size} L_{size} + \lambda_{off} L_{off} \qquad (5)$$

where $\lambda_{size} = 0.1$ and $\lambda_{off} = 1$. At each position the network predicts C + 4 values (the C keypoint classes, the offset x, y and the size w, h), and all outputs share a fully convolutional backbone.
In this way the algorithm determines the positions of the target points, and hence the positions of the targets themselves, i.e., the face, head or shoulders of the driver.
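A sketch of the coarse-to-fine cascade implied above; the three detector functions are hypothetical interfaces standing in for the head-shoulder, head and face models.

```python
def crop(img, box):
    x1, y1, x2, y2 = box
    return img[y1:y2, x1:x2]

def driver_identity_features(rider_crop, detect_head_shoulder, detect_head, detect_face):
    """Cascade: head-shoulder region -> head region -> face region within one rider crop.
    Each detect_* function is an assumed interface returning a box (x1, y1, x2, y2) or None."""
    hs = detect_head_shoulder(rider_crop)
    if hs is None:
        return None
    head = detect_head(crop(rider_crop, hs))
    if head is None:
        return {"head_shoulder": hs}
    face = detect_face(crop(crop(rider_crop, hs), head))
    return {"head_shoulder": hs, "head": head, "face": face}
```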
S4: determining, according to the driving track in the non-motor vehicle driving area and the identity information of the driver, whether the non-motor vehicle and the driver have committed a violation;
Before determining whether the non-motor vehicle and the driver have committed a violation, the type of the non-motor vehicle is determined. If the non-motor vehicle is a bicycle, whether the driver has committed a violation is determined from the driving track and driving direction of the vehicle, the violations including one or more of vehicle retrograde motion (riding in the wrong direction) and line-pressing running.
When a change in the target position of the non-motor vehicle is recognized, whether the non-motor vehicle is travelling in the wrong direction is determined from the direction of change of the target position across the video frames; if it is, the image is extracted and marked.
When the track of the non-motor vehicle is recognized to cross or overlap the lane line of the driving area, the vehicle is determined to be line-pressing, and the image is likewise extracted and marked.
And if the non-motor vehicle is an electric vehicle, determining whether the non-motor vehicle has illegal behaviors according to the running track, the running direction and the running speed of the vehicle, wherein the illegal behaviors comprise one or more of vehicle overspeed, vehicle retrograde motion and line pressing running.
Specifically, two frames separated by a fixed interval are acquired, and the travel speed of the target vehicle is determined from the change of position and the elapsed time between the two frames; the calculated speed is compared with the preset maximum speed, and if it is higher the vehicle is judged to be speeding and the behavior is marked. The judgment of wrong-way riding and line-pressing running for an electric vehicle can be the same as for a bicycle.
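The judgments described above can be sketched as follows; the lane direction vector, frame interval, pixel-to-metre scale and the 25 km/h limit are all assumed quantities that would come from camera calibration and local regulations.

```python
import numpy as np

def is_wrong_way(track, lane_direction):
    """track: list of (x, y) centres over consecutive frames; lane_direction: unit vector of legal travel."""
    if len(track) < 2:
        return False
    motion = np.subtract(track[-1], track[0])
    return float(np.dot(motion, lane_direction)) < 0    # net motion against the legal direction

def estimate_speed_kmh(p_prev, p_curr, dt_seconds, metres_per_pixel):
    """Speed from two frames at a known time interval (assumes a calibrated ground-plane scale)."""
    dist_px = float(np.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]))
    return dist_px * metres_per_pixel / dt_seconds * 3.6

# Example: flag overspeed against an assumed 25 km/h limit for electric bicycles.
# speeding = estimate_speed_kmh(track[-2], track[-1], 0.04, 0.02) > 25.0
```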
Further, if the non-motor vehicle is an electric vehicle, the method further comprises detecting whether the driver wears a helmet and whether the helmet is worn correctly, and if the driver does not wear the helmet or the helmet wearing mode is wrong, determining that the driver is illegal.
The method also comprises the steps of presetting a plurality of helmet detection models before detecting whether the helmet of the driver is worn correctly, and judging whether the helmet is worn correctly according to the output result of the helmet detection models.
In this embodiment, for helmet detection on non-motor vehicle drivers, MobileNet is mainly used as the feature extraction network: the helmet is detected within the human-body region, and whether the helmet is worn correctly is determined from the outputs of a human-body detection model and a helmet classification model. The human-body detection model achieves a detection accuracy of 91.52% and a recall of 89.25% for human bodies; the helmet classification model achieves a detection accuracy of 88.32% and a recall of 85.08% for helmets, and a detection accuracy of 88.02% and a recall of 86.02% for heads. The detection effect of the proposed method was verified in a real environment: the average accuracy increased by 2.79%, and the detection speed also increased about two-fold.
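A sketch of the helmet-check decision logic only; detect_riders and classify_helmet stand in for the human-detection and helmet-classification models mentioned above and are hypothetical interfaces, as is the rule of cropping the upper third of the rider box.

```python
def helmet_violations(frame, detect_riders, classify_helmet):
    """Return rider boxes whose helmet is missing or worn incorrectly.
    detect_riders(frame)  -> list of rider bounding boxes (assumed model interface)
    classify_helmet(crop) -> one of {"correct", "incorrect", "none"} (assumed model interface)"""
    violations = []
    for box in detect_riders(frame):
        x1, y1, x2, y2 = box
        head_crop = frame[y1:y1 + (y2 - y1) // 3, x1:x2]   # upper third of the rider region
        if classify_helmet(head_crop) != "correct":
            violations.append(box)
    return violations
```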
In the invention, a vehicle that commits a violation is associated with its driver: when a violation occurs, the biological characteristics of the corresponding driver are extracted and marked. For this, an image deblurring algorithm based on conditional generative adversarial networks (DGAN) can be selected. The algorithm uses a Group-SE module, which combines lightweight group convolution with an improved SE (Squeeze-and-Excitation) attention mechanism, as the main component of the generator, and uses an improved DenseNet with global dense connections as the core of the discriminator, to address the low efficiency of deblurring when applied to face recognition. The algorithm performs well in improving image quality and reducing the number of model parameters, and the recognition rate is improved by 3.95%.
As a further optimization of the invention, when the non-motor vehicle and/or the driver commits a violation, the violation is captured and the recognition result is uploaded to the cloud platform for further handling, for example by sending an SMS reminder.
Embodiment 2
The invention also discloses a traffic violation detection system for non-motor vehicles and drivers that executes the above method. The system comprises: a collecting device, configured to obtain a plurality of continuous video frames from the image to be detected collected by the video equipment; an identification device, configured to detect and identify the plurality of video frames and determine the driving area of the non-motor vehicle in the video frames; an extraction device, configured to extract the vehicle information of the non-motor vehicle and the biological characteristic information of the driver in the non-motor vehicle driving area, and to determine the driving track of the non-motor vehicle and the identity information of the driver;
and a judging device, configured to determine, according to the driving track in the non-motor vehicle driving area and the identity information of the driver, whether the non-motor vehicle and the driver have committed a violation.
The identification device is further used to detect the plurality of video frames and determine the driving area of the non-motor vehicle in the video frames, specifically by:
enhancing the image brightness of the video frame to be detected to obtain an enhanced video frame image with high contrast and clarity;
segmenting the positions and color blocks of the lane lines in the enhanced video frame image, and determining the positions of the lane lines;
and determining the driving area of the non-motor vehicle according to the color and position of the lane lines in the enhanced video frame.
The extraction device is also used for extracting the change state of the target position of the non-motor vehicle in the plurality of videos and determining the motion track of the non-motor vehicle.
The system further comprises a presetting device which is used for presetting one or more of a vehicle detection model, a helmet detection model, a face recognition model and/or a head and shoulder detection model.
As a further improvement of the present invention, the determining module is further configured to determine a shoulder area of the driver, determine head information according to the shoulder area information, determine face information according to the head information, and determine identity information of the driver according to the head, shoulder and face information.
As a further development of the invention, the recognition device is also used to determine the type of non-motor vehicle from the image to be measured.
As a further improvement of the present invention, the determination device is further configured to determine an illegal action according to the type of the non-motor vehicle: if the non-motor vehicle is a bicycle, whether the driver has committed an illegal action is determined from the driving track and driving direction of the vehicle, the illegal action including one or more of vehicle reverse running and line-pressing running.
As a further improvement of the present invention, if the non-motor vehicle is an electric vehicle, the determining device is further configured to determine whether there is an illegal act on the non-motor vehicle according to a running track, a running direction and a running speed of the vehicle, where the illegal act includes one or more of vehicle overspeed, vehicle reverse running and line pressing running.
As a further improvement of the present invention, if the non-motor vehicle is an electric vehicle, the identification device is further configured to detect whether the driver wears a helmet and whether the helmet is worn correctly, and if the driver does not wear a helmet or the helmet wearing manner is wrong, determine that the driver is illegal.
The system further comprises a sending device, and the sending device is used for uploading the illegal activities to the cloud platform.
According to the invention, the lane lines in the high-mounted video image are segmented and recognized, and the non-motor vehicle driving area is determined on the basis of the actual road scene; the driving track, driving speed and driving direction of the non-motor vehicle are analyzed by detecting and tracking it; the head-shoulder region, the face and whether a helmet is worn are determined by detecting and recognizing the non-motor vehicle driver; a highly reliable analysis and judgment of violations by the non-motor vehicle and the driver is achieved by combining these multi-stage analysis results; and finally the identity information, license plate number and face recognition information of the non-motor vehicle and the driver are uploaded to the cloud platform, providing effective technical and data support for the traffic control, penalty and education work of traffic police departments.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".
Those of skill in the art will further appreciate that the various illustrative logical blocks, units, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate the interchangeability of hardware and software, various illustrative components, elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks, or elements, described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside in different components in a user terminal.
In one or more exemplary designs, the functions described above in connection with the embodiments of the invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media that facilitate transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general-purpose or special-purpose computer. For example, such computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be read by a general-purpose or special-purpose computer or processor. Additionally, any connection is properly termed a computer-readable medium: for example, if the software is transmitted from a website, server, or other remote source over a coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wirelessly (e.g., infrared, radio, or microwave), then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technology is included in the definition of medium. Disks and discs, as used herein, include compact discs, laser discs, optical discs, DVDs, floppy disks and Blu-ray discs, where disks usually reproduce data magnetically while discs reproduce data optically with lasers. Combinations of the above may also be included in the computer-readable medium.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (21)

1. A method for detecting traffic violations of non-motorized vehicles and drivers, the method comprising:
acquiring a plurality of continuous video frames in an image to be detected, which is acquired by video equipment;
detecting and identifying a plurality of video frames, and determining a driving area of a non-motor vehicle in the video frames;
recognizing vehicle information of the non-motor vehicle and biological characteristic information of a driver in a non-motor vehicle driving area, and determining a driving track of the non-motor vehicle and identity information of the driver;
and determining, according to the driving track in the non-motor vehicle driving area and the identity information of the driver, whether the non-motor vehicle and the driver have committed a violation.
2. The method according to claim 1, characterized in that a plurality of video frames are detected, a driving area of the non-motor vehicle in the video frames is determined, and the method comprises the following steps;
enhancing the image brightness of a video frame to be detected, and acquiring an enhanced video frame image with high contrast and definition;
dividing the positions and color blocks of the lane lines in the enhanced video frame image, and determining the positions of the lane lines;
and determining the driving area of the non-motor vehicle according to the color and the position of the lane line in the enhanced video frame.
3. The method of claim 1, wherein identifying the vehicle information in the non-motor vehicle driving area further comprises presetting a vehicle detection model, detecting and identifying the vehicle information through the vehicle detection model, and determining the vehicle color and the license plate number.
4. The method of claim 3, wherein determining the travel trajectory of the non-motor vehicle based on the vehicle information in the non-motor vehicle travel area specifically comprises:
the position coordinates of the vehicle in any one of the video frames are determined,
extracting the position coordinates of the vehicle in a plurality of video frames before and after the video frame,
and determining the motion track of the vehicle according to the coordinate change of the vehicle in the multiple video frames.
5. The method of claim 1, wherein extracting the biometric information of the driver specifically comprises detecting shoulder region information of the driver, determining head information from the shoulder regions, confirming face information from the head information, and confirming identity information of the driver from the head, shoulder, and face information.
6. The method of claim 3, wherein before determining, according to the driving track in the non-motor vehicle driving area and the identity information of the driver, whether the non-motor vehicle and the driver have committed a violation, the method further comprises determining the type of the non-motor vehicle.
7. The method of claim 6, wherein determining the unlawful act based on the non-motor vehicle type comprises: and if the non-motor vehicle is a bicycle, determining whether the driver has illegal behaviors according to the running track and the running direction of the vehicle, wherein the illegal behaviors comprise one or more of vehicle retrograde motion and line pressing running.
8. The method of claim 6, wherein if the non-motor vehicle is an electric vehicle, determining whether the non-motor vehicle has illegal behaviors according to the running track, the running direction and the running speed of the vehicle, wherein the illegal behaviors comprise one or more of vehicle overspeed, vehicle reverse running and line pressing running.
9. The method of claim 8, wherein if the non-motorized vehicle is an electric vehicle, the method further comprises detecting whether the driver is wearing a helmet and whether the helmet is worn correctly, and if the driver is not wearing a helmet or if the helmet is worn incorrectly, determining that the driver is illegal.
10. The method of claim 9, wherein before detecting whether the driver's helmet is worn correctly, the method further comprises presetting a plurality of helmet detection models, and determining whether the helmet is worn correctly according to an output result of the helmet detection models.
11. The method according to any one of claims 7 to 10, wherein: the method also comprises uploading the illegal activities to a cloud platform after judging whether the illegal activities exist in the non-motor vehicle and the driver.
12. A traffic violation detection system for non-motor vehicles and drivers, characterized by: the system comprises:
the device comprises a collecting device, a processing device and a processing device, wherein the collecting device is used for obtaining a plurality of continuous video frames in an image to be detected, which are collected by video equipment;
the identification device is used for detecting a plurality of video frames and determining the driving area of the non-motor vehicle in the video frames;
the extraction device is used for identifying the vehicle information of the non-motor vehicle and the biological characteristic information of the driver in the driving area of the non-motor vehicle and determining the driving track of the non-motor vehicle and the identity information of the driver;
the judging device is configured to determine, according to the driving track in the non-motor vehicle driving area and the identity information of the driver, whether the non-motor vehicle and the driver have committed a violation.
13. The system of claim 12, wherein the identifying means is further configured to,
enhancing the image brightness of a video frame to be detected, and acquiring an enhanced video frame image with high contrast and definition;
dividing the positions and color blocks of the lane lines in the enhanced video frame image, and determining the positions of the lane lines;
and determining the driving area of the non-motor vehicle according to the color and the position of the lane line in the enhanced video frame.
14. The system of claim 13, wherein the extraction device is further configured to determine the driving track of the non-motor vehicle according to the vehicle information in the non-motor vehicle driving area, specifically by:
determining the position coordinates of the vehicle in any one of the video frames;
extracting the position coordinates of the vehicle in a plurality of video frames before and after that video frame; and
determining the motion track of the vehicle according to the coordinate change of the vehicle across the plurality of video frames.
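A minimal sketch of claim 14's track construction, assuming per-frame detections are already available; nearest-neighbour association and the `max_jump` bound are simplifications introduced for the example.

```python
import numpy as np

def build_track(detections_per_frame, start_xy, max_jump=80.0):
    """Link one vehicle's box centre across frames by nearest-neighbour matching.

    detections_per_frame : list (one entry per frame) of lists of (x, y) box centres
    start_xy             : centre of the target vehicle in the first frame
    max_jump             : assumed upper bound (pixels) on per-frame movement
    """
    track = [np.asarray(start_xy, dtype=float)]
    for centres in detections_per_frame[1:]:
        if not centres:
            break
        pts = np.asarray(centres, dtype=float)
        dists = np.linalg.norm(pts - track[-1], axis=1)
        i = int(dists.argmin())
        if dists[i] > max_jump:        # target lost; end the track here
            break
        track.append(pts[i])
    return np.vstack(track)            # (n_frames, 2): the coordinate change over time
```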
15. The system of claim 14, further comprising a presetting device configured to preset one or more of a vehicle detection model, a helmet detection model, a face recognition model, and a head-shoulder detection model.
16. The system of claim 12, wherein the extraction device is further configured to determine the shoulder region of the driver, determine head information from the shoulder region information, determine face information from the head information, and determine the identity information of the driver from the head, shoulder, and face information.
17. The system of claim 14, wherein the identification device is further configured to determine the type of the non-motor vehicle from the image to be detected.
18. The system of claim 12, wherein the judging device is further configured to determine the illegal behavior according to the type of the non-motor vehicle: if the non-motor vehicle is a bicycle, whether the driver has an illegal behavior is determined according to the driving track and driving direction of the vehicle, and the illegal behavior comprises one or more of retrograde (wrong-way) travel and line-pressing driving.
19. The system of claim 12, wherein if the non-motor vehicle is an electric vehicle, the judging device is further configured to determine whether the non-motor vehicle has an illegal behavior according to the driving track, driving direction, and driving speed of the vehicle, wherein the illegal behavior comprises one or more of vehicle overspeed, retrograde (wrong-way) travel, and line-pressing driving.
20. The system of claim 19, wherein if the non-motor vehicle is an electric vehicle, the identification device is further configured to detect whether the driver is wearing a helmet and whether the helmet is worn correctly, and if the driver is not wearing a helmet or the helmet is worn incorrectly, to determine that the driver has an illegal behavior.
21. The system according to any one of claims 12 to 20, wherein the system further comprises a sending device configured to upload the illegal behavior to the cloud platform.
CN202110275271.8A 2021-03-15 2021-03-15 Traffic violation detection method and system for non-motor vehicles and drivers Pending CN113160575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110275271.8A CN113160575A (en) 2021-03-15 2021-03-15 Traffic violation detection method and system for non-motor vehicles and drivers

Publications (1)

Publication Number Publication Date
CN113160575A true CN113160575A (en) 2021-07-23

Family

ID=76887111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110275271.8A Pending CN113160575A (en) 2021-03-15 2021-03-15 Traffic violation detection method and system for non-motor vehicles and drivers

Country Status (1)

Country Link
CN (1) CN113160575A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392220A (en) * 2014-11-27 2015-03-04 苏州福丰科技有限公司 Three-dimensional face recognition airport security inspection method based on cloud server
WO2020087743A1 (en) * 2018-11-01 2020-05-07 深圳云天励飞技术有限公司 Non-motor vehicle traffic violation supervision method and apparatus and electronic device
CN109448026A (en) * 2018-11-16 2019-03-08 南京甄视智能科技有限公司 Passenger flow statistical method and system based on head and shoulder detection
CN112115939A (en) * 2020-08-26 2020-12-22 深圳市金溢科技股份有限公司 Vehicle license plate recognition method and device
CN112164228A (en) * 2020-09-15 2021-01-01 深圳市点创科技有限公司 Helmet-free behavior detection method for driving electric vehicle, electronic device and storage medium
CN112381859A (en) * 2020-11-20 2021-02-19 公安部第三研究所 System, method, device, processor and storage medium for realizing intelligent analysis, identification and processing for video image data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN WEI et al.: "Deep Retinex Decomposition for Low-Light Enhancement", arXiv preprint arXiv:1808.04560 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537183A (en) * 2021-09-17 2021-10-22 江苏巨亨智能科技有限公司 Electric vehicle helmet identification method based on artificial intelligence
CN114023088A (en) * 2021-11-03 2022-02-08 江苏尤特斯新技术有限公司 Intelligent street-crossing signal lamp system and illegal behavior evidence-obtaining and warning method
CN114120366A (en) * 2021-11-29 2022-03-01 上海应用技术大学 Non-motor vehicle helmet detection method based on generation countermeasure network and yolov5
CN114120366B (en) * 2021-11-29 2023-08-25 上海应用技术大学 Non-motor helmet detection method based on generation of countermeasure network and yolov5
CN114971485A (en) * 2022-06-09 2022-08-30 长安大学 Traffic safety management method, system and storable medium
CN115273456A (en) * 2022-06-14 2022-11-01 北京车网科技发展有限公司 Method and system for judging illegal driving of two-wheeled electric vehicle and storage medium
CN115273456B (en) * 2022-06-14 2023-08-29 北京车网科技发展有限公司 Method, system and storage medium for judging illegal running of two-wheeled electric vehicle
CN115497304A (en) * 2022-09-14 2022-12-20 中国银行股份有限公司 Crossing violation monitoring method and device, storage medium and equipment
CN115639605A (en) * 2022-10-28 2023-01-24 中国地质大学(武汉) Automatic high-resolution fault identification method and device based on deep learning
CN116721552A (en) * 2023-06-12 2023-09-08 北京博宏科元信息科技有限公司 Non-motor vehicle overspeed identification recording method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113160575A (en) Traffic violation detection method and system for non-motor vehicles and drivers
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
Singh et al. Deep spatio-temporal representation for detection of road accidents using stacked autoencoder
CN106599792B (en) Method for detecting hand driving violation behavior
Liu et al. A vision-based pipeline for vehicle counting, speed estimation, and classification
CN111104903B (en) Depth perception traffic scene multi-target detection method and system
CN109598943A (en) The monitoring method of vehicle violation, apparatus and system
Yogameena et al. Deep learning‐based helmet wear analysis of a motorcycle rider for intelligent surveillance system
CN109191830A (en) A kind of congestion in road detection method based on video image processing
CN104978567A (en) Vehicle detection method based on scenario classification
CN111047874B (en) Intelligent traffic violation management method and related product
CN112967252B (en) Rail vehicle machine sense hanger assembly bolt loss detection method
CN111178235A (en) Target quantity determination method, device, equipment and storage medium
CN114170580A (en) Highway-oriented abnormal event detection method
CN114332776A (en) Non-motor vehicle occupant pedestrian lane detection method, system, device and storage medium
CN115546742A (en) Rail foreign matter identification method and system based on monocular thermal infrared camera
Chen et al. A computer vision algorithm for locating and recognizing traffic signal control light status and countdown time
CN114419603A (en) Automatic driving vehicle control method and system and automatic driving vehicle
CN115512315B (en) Non-motor vehicle child riding detection method, electronic equipment and storage medium
CN115147450B (en) Moving target detection method and detection device based on motion frame difference image
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
CN110765900A (en) DSSD-based automatic illegal building detection method and system
CN112633163B (en) Detection method for realizing illegal operation vehicle detection based on machine learning algorithm
Pan et al. Fake license plate recognition in surveillance videos
Kodwani Automatic Vehicle Detection, Tracking and Recognition of License Plate in Real Time Videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210723