CN113011331A - Method and device for detecting whether motor vehicle gives way to pedestrians, electronic equipment and medium - Google Patents

Method and device for detecting whether motor vehicle gives way to pedestrians, electronic equipment and medium

Info

Publication number
CN113011331A
CN113011331A (application CN202110295491.7A)
Authority
CN
China
Prior art keywords
frame
target element
pedestrian crossing
detection
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110295491.7A
Other languages
Chinese (zh)
Other versions
CN113011331B (en)
Inventor
王健
皖彦淇
岳名扬
祝偲博
任慧慧
杨珺淞
申南玲
白璐
李昀浩
马钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202110295491.7A priority Critical patent/CN113011331B/en
Publication of CN113011331A publication Critical patent/CN113011331A/en
Application granted granted Critical
Publication of CN113011331B publication Critical patent/CN113011331B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/44: Event detection


Abstract

The invention relates to the technical field of intelligent transportation and provides a method, a device, an electronic device and a storage medium for detecting whether a motor vehicle yields to pedestrians. The method comprises: performing target element recognition on collected video frames using a trained target recognition network to obtain a target element set; tracking the motion trajectories of the target elements in the set using a preset target tracking algorithm; and, based on the identified main pedestrian crossing and the tracked trajectories, detecting with a preset violation detection algorithm whether the current video frame contains a failure-to-yield-to-pedestrians violation, thereby improving the accuracy of detecting such violations.

Description

Method and device for detecting whether motor vehicle gives way to pedestrians, electronic equipment and medium
Technical Field
The invention belongs to the technical field of intelligent traffic, and particularly relates to a method and a device for detecting whether a motor vehicle gives way to pedestrians, electronic equipment and a storage medium.
Background
Intelligent transportation is a natural outgrowth of advances in electronics, computing and automation, and in recent years China has issued numerous policies to support its development. In developed regions such as the United States and Europe, intelligent transportation has already been widely applied in traffic infrastructure. As vehicle ownership in China approaches saturation, intelligent transportation systems are used to improve the coordination of people, vehicles and roads, thereby raising transport efficiency, relieving congestion, increasing road network capacity and reducing traffic accidents. Traditional traffic violation detection mainly confirms whether vehicle behavior violates traffic rules through multiple rounds of manual review and verification of vehicle images captured by cameras. This consumes a great deal of time and labor, and manual review is affected by subjective factors such as reviewer fatigue and mood, so review is inefficient and the results lack fairness and accuracy.
Although many cities are rapidly advancing traffic intelligence and can already automate the detection of simple violations such as running red lights, crossing lane lines and occupying non-motorized lanes, how to achieve efficient, fast, accurate and fair automatic detection of the more complex behavior of yielding to pedestrians, while saving labor, equipment and technical costs, remains an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide a method, a device, an electronic device and a storage medium for detecting whether a motor vehicle yields to pedestrians, so as to solve the problem in the prior art that the accuracy of detecting whether a motor vehicle yields to pedestrians is not high enough.
In one aspect, the present invention provides a method for detecting whether a motor vehicle yields to pedestrians, the method comprising the steps of:
carrying out target element recognition on the collected video frames by using a trained target recognition network to obtain a target element set;
tracking the motion trail of the target element in the target element set by using a preset target tracking algorithm;
and detecting, based on the identified main pedestrian crossing and the tracked motion trajectories of the target elements, whether a failure-to-yield-to-pedestrians violation is present in the current video frame by using a preset violation detection algorithm.
Preferably, the target identification network is an SSD MobileNet-v2 network.
Preferably, before the step of performing target element recognition on the acquired video frame by using the trained target recognition network, the method further includes:
and if a preset pedestrian crossing identification condition is met, identifying the main pedestrian crossing, wherein the identification condition includes that the current video frame is the first frame of the video, or that the current video frame corresponds to a preset pedestrian crossing identification period, and/or that the frame interval between the current video frame and a previous video frame on which identification was attempted but no pedestrian crossing was found equals a preset first interval threshold.
Preferably, the first interval threshold is 10.
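The triggering logic described above can be sketched as a small helper. This is an illustrative assumption, not the patent's implementation; the function name and parameters are hypothetical, and only the first-interval threshold of 10 comes from the text:

```python
def should_detect_crosswalk(frame_idx, period, last_failed_idx, first_interval=10):
    """Decide whether to run main-crosswalk identification on this frame.

    Triggers on the first frame of the video, on each identification-period
    boundary, or when a previous attempt found no crosswalk and exactly
    `first_interval` frames have elapsed since it (threshold 10 in the text).
    """
    if frame_idx == 0:                      # first frame of the video
        return True
    if frame_idx % period == 0:             # periodic re-identification
        return True
    if last_failed_idx is not None and frame_idx - last_failed_idx == first_interval:
        return True                         # retry after a failed attempt
    return False
```

For example, with a 300-frame period, frame 15 triggers identification if frame 5 attempted it and found nothing.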
Preferably, the step of identifying the main crosswalk line further includes:
performing pedestrian crossing instance segmentation on the current video frame by using a trained segmentation network to obtain at least one coarsely positioned crosswalk detection box;
and precisely positioning the main pedestrian crossing based on all the coarsely positioned crosswalk detection boxes.
Preferably, the segmentation network is a Mask R-CNN network, and the step of training the Mask R-CNN network includes:
acquiring a lane line data set;
if the lane line marks in the lane line data set only contain the side profiles of the lane lines, preprocessing the lane lines to obtain a preprocessed lane line data set, wherein the lane lines comprise pedestrian crossing lines, and the preprocessed lane line marks contain segmentation masks;
and inputting the preprocessed lane line data set into the Mask R-CNN network for training to obtain a trained Mask R-CNN network.
Preferably, the step of preprocessing the lane line in the lane line data set further includes:
if the annotation of a lane line contains only one line, diffusing a preset pixel width to each side of that line, using the line as the axis;
and if the mark of the lane line comprises two straight lines, respectively connecting the head ends and the tail ends of the two straight lines to form a quadrilateral closed area, and filling the quadrilateral closed area according to the type of the lane line.
Preferably, the preset pixel width is 5 pixels.
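The two annotation-preprocessing cases can be sketched geometrically as follows; this is a simplified assumption (straight two-point segments only, hypothetical function names), with the 5-pixel half-width taken from the text:

```python
import math

def dilate_segment(p1, p2, half_width=5):
    """Turn a single-line lane annotation into a quadrilateral mask polygon
    by diffusing `half_width` pixels to each side of the line segment."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy) or 1.0
    # unit normal perpendicular to the segment
    nx, ny = -dy / length, dx / length
    ox, oy = nx * half_width, ny * half_width
    return [(x1 + ox, y1 + oy), (x2 + ox, y2 + oy),
            (x2 - ox, y2 - oy), (x1 - ox, y1 - oy)]

def close_two_lines(line_a, line_b):
    """Join the head ends and tail ends of two straight-line annotations to
    form a closed quadrilateral (to be filled according to lane-line type)."""
    return [line_a[0], line_a[1], line_b[1], line_b[0]]
```

The resulting polygons can then be rasterized into segmentation masks (for example with OpenCV's polygon-fill routines) before training the segmentation network.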
Preferably, the step of accurately positioning the main crosswalk line based on all the rough positioning crosswalk detection frames includes:
and screening the main pedestrian crossing from all the coarsely positioned crosswalk detection boxes by using a preset conditional scoring algorithm based on the confidence of each box, wherein the conditional scoring algorithm takes into account the position of each box within the video frame and/or its size relative to the video frame.
Preferably, the step of screening the main pedestrian crossing from all the coarsely positioned crosswalk detection boxes by using a preset conditional scoring algorithm includes:
scoring each coarsely positioned crosswalk detection box with the preset conditional scoring algorithm to obtain a total conditional score for each box;
and taking the coarsely positioned crosswalk detection box with the highest total conditional score as the main pedestrian crossing detection box.
Preferably, the formula used by the conditional scoring algorithm is as follows:
[The original equation image is not reproduced here.]
wherein score_i denotes the total conditional score of the i-th coarsely positioned crosswalk detection box; x_i and y_i denote the horizontal and vertical distances of the box's top-left corner from the top-left corner of the current video frame; w_i and h_i denote the box's width and height; width_IMG and height_IMG denote the width and height of the video frame; conf_i denotes the box's confidence; I() is an indicator function; (R1_t/h, R2_t/h) denotes a first preset interval, (R1_w/h, R2_w/h) a second preset interval; s1 denotes the first preset fixed value and s2 the second preset fixed value.
Preferably, the first preset interval is (0.4, 0.6), the first preset fixed value is 0.4, the second preset interval is (0.07, 0.14), and the second preset fixed value is 0.14.
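Since the original equation image is unavailable, the following is only a plausible sketch of the scoring idea consistent with the variable list: the detector confidence is rewarded with the fixed values s1/s2 when the box's position and size ratios fall inside the preset intervals. The combination rule, function name and exact ratio definitions are assumptions; the interval and fixed-value constants are those stated in the text:

```python
def conditional_score(box, frame_w, frame_h,
                      t_interval=(0.4, 0.6), s1=0.4,
                      w_interval=(0.07, 0.14), s2=0.14):
    """Hypothetical reconstruction of the conditional scoring rule.

    `box` is (x, y, w, h, conf) with (x, y) the top-left corner. A bonus s1
    is added when the vertical position ratio lies in t_interval, and a
    bonus s2 when the height ratio lies in w_interval, on top of conf.
    """
    x, y, w, h, conf = box
    pos_ratio = y / frame_h          # vertical position of the box in frame
    size_ratio = h / frame_h         # box height relative to the frame
    score = conf
    score += s1 if t_interval[0] < pos_ratio < t_interval[1] else 0.0
    score += s2 if w_interval[0] < size_ratio < w_interval[1] else 0.0
    return score
```

A box centered vertically in the frame with a plausible crosswalk height thus outranks an equally confident box near the frame edge.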
Preferably, after the step of obtaining the rough positioning crosswalk detection frame with the highest total score of the conditions, the method further includes:
judging whether the width ratio of the coarse positioning pedestrian crossing detection frame with the highest total score of the conditions to the video frame is smaller than a preset width ratio threshold value or not;
if the width ratio of the coarsely positioned crosswalk detection box with the highest total conditional score to the video frame is smaller than the width ratio threshold, expanding that box and using the expanded box as the main pedestrian crossing detection box;
and if not, taking the coarse positioning pedestrian crossing detection frame with the highest total condition score as the main pedestrian crossing line detection frame.
Preferably, the width ratio threshold is 0.85.
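The width-ratio check and expansion can be sketched as below. The 0.85 threshold is from the text; the expansion margin and function name are assumptions, since the patent does not state how far the box is widened:

```python
def finalize_main_box(box, frame_w, ratio_threshold=0.85, margin=0.1):
    """If the best-scoring crosswalk box is narrower than `ratio_threshold`
    of the frame width, widen it symmetrically (by `margin` of its width on
    each side, clipped to the frame) before using it as the main crosswalk
    detection box; otherwise use it as-is."""
    x, y, w, h = box
    if w / frame_w < ratio_threshold:
        grow = int(w * margin)
        x = max(0, x - grow)
        w = min(frame_w - x, w + 2 * grow)
    return (x, y, w, h)
```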
Preferably, the target element set includes the category and detection box of each target element, the target tracking algorithm is a maximum intersection-over-union (IoU) screening method, and the step of tracking the motion trajectories of the target elements in the target element set by using a preset target tracking algorithm includes:
for each target element in the target element set, obtaining a first sequence box list set from all current tracking path sequence box lists, the first set containing every tracking path sequence box list whose category is the same as that of the target element; each tracking path sequence box list contains the category of a tracked target element and its detection boxes arranged in time order, and corresponds to the motion trajectory of one target element;
calculating the IoU between the detection box of the target element and the last detection box of each tracking path sequence box list in the first sequence box list set;
judging whether an IoU target box exists, the IoU target box being a last detection box whose IoU with the target element's box is greater than a preset IoU threshold;
if it exists, determining that the motion trajectory of the target element has been tracked, and appending the target element's detection box to the tracking path sequence box list containing the last detection box with the largest IoU;
and if it does not exist, determining that the target element is a new element and creating a tracking path sequence box list from its detection box and category.
Preferably, the IoU threshold is 0.75.
Preferably, before the step of obtaining the first sequence box list set according to all current tracking path sequence box lists, the method further includes:
acquiring the second frame interval between the current video frame and the video frame corresponding to the last detection box of each current tracking path sequence box list;
and discarding any tracking path sequence box list whose second frame interval is not less than a preset second interval threshold.
Preferably, the second interval threshold is 5.
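The tracking steps above (same-category filtering, maximum-IoU matching against the 0.75 threshold, and discarding of lists stale by 5 or more frames) can be sketched as follows. The track dictionary layout and function names are assumptions, not the patent's data structures:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union else 0.0

def track(tracks, category, box, frame_idx, iou_threshold=0.75, max_gap=5):
    """Maximum-IoU screening step for one detected element.

    `tracks` is a list of dicts with 'category', 'boxes' (time-ordered) and
    'last_frame'. Stale tracks (gap >= max_gap frames) are dropped first;
    the detection then joins the same-category track whose last box
    overlaps it most, if that IoU exceeds the threshold, otherwise it
    starts a new track."""
    tracks[:] = [t for t in tracks if frame_idx - t['last_frame'] < max_gap]
    candidates = [t for t in tracks if t['category'] == category]
    best = max(candidates, key=lambda t: iou(t['boxes'][-1], box), default=None)
    if best is not None and iou(best['boxes'][-1], box) > iou_threshold:
        best['boxes'].append(box)
        best['last_frame'] = frame_idx
        return best
    new = {'category': category, 'boxes': [box], 'last_frame': frame_idx}
    tracks.append(new)
    return new
```

Because each step is a single linear scan over the existing tracks, this matches the linear-time-complexity claim made for the maximum-IoU screening method.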
Preferably, after the step of judging whether a last detection box with an IoU greater than the preset IoU threshold exists, the method further includes:
if such an IoU target box exists, performing a logarithmic check on the largest and second-largest IoU values;
if the logarithmic check is passed, determining the motion trajectory of the target element;
and if the logarithmic check is not passed, tracking the motion trajectory of the target element by using a preset cosine distance comparison method;
the logarithmic check uses the formula:
(log2 IoU_max - log2 IoU_second_max) < ε
where IoU_max denotes the largest IoU, IoU_second_max denotes the second-largest IoU, and ε is a constant serving as a preset logarithmic check threshold.
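The inequality can be evaluated directly as below. Note this only computes the stated condition (the two best matches being close on a log2 scale, i.e. an ambiguous IoU match); the value of ε here is an assumption, since the patent leaves it as a preset constant:

```python
import math

def iou_match_is_ambiguous(iou_max, iou_second_max, epsilon=0.5):
    """Evaluate the patent's inequality
    (log2 IoU_max - log2 IoU_second_max) < epsilon.
    When it holds, the best and second-best IoU matches are too close to
    distinguish by IoU alone, and the cosine-distance comparison is used
    instead."""
    return (math.log2(iou_max) - math.log2(iou_second_max)) < epsilon
```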
Preferably, the step of tracking the motion trajectory of the target element by using a preset cosine distance comparison method includes:
acquiring the latest feature map of the target element in the current video frame, and the first and second feature maps corresponding respectively to the last and second-to-last detection boxes of each tracking path sequence box list in a second sequence box list set, the second set containing the tracking path sequence box lists in which all the IoU target boxes are located;
normalizing the latest feature map and all the first feature maps and the second feature maps to obtain feature maps with uniform sizes;
respectively calculating a first cosine distance and a second cosine distance between the latest feature map after normalization processing and each first feature map and each second feature map;
taking the logarithm of each first cosine distance and each second cosine distance to obtain corresponding first and second feature similarity factors;
performing a linear weighted combination of each first feature similarity factor with its corresponding second feature similarity factor, and finding the minimum weighted feature similarity factor;
and taking the motion track corresponding to the tracking path sequence frame list corresponding to the minimum weighting characteristic similarity factor as the motion track of the target element, and adding the detection frame of the target element to the tracking path sequence frame list corresponding to the minimum weighting characteristic similarity factor.
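The cosine-distance comparison steps above can be sketched as below. The feature maps are assumed to be already normalized to equal size and flattened into vectors, and the linear weights w1/w2 are assumptions (the patent does not state them):

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity of two flattened feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def pick_track_by_features(latest, candidates, w1=0.7, w2=0.3):
    """For each candidate track, given as (last_features, prev_features),
    compare the latest feature vector with both, take logs of the cosine
    distances as similarity factors, weight them linearly, and return the
    index of the track with the minimum weighted factor."""
    best_idx, best_score = None, float('inf')
    for idx, (feat_last, feat_prev) in enumerate(candidates):
        # small constant avoids log(0) for identical features
        f1 = math.log(cosine_distance(latest, feat_last) + 1e-9)
        f2 = math.log(cosine_distance(latest, feat_prev) + 1e-9)
        score = w1 * f1 + w2 * f2
        if score < best_score:
            best_idx, best_score = idx, score
    return best_idx
```

Smaller cosine distances give more negative logarithms, so minimizing the weighted factor selects the track whose recent appearances are most similar to the new detection.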
Preferably, the preset violation detection algorithm is a violation warning point algorithm, and the step of detecting whether a failure-to-yield-to-pedestrians violation is present in the current video frame by using the preset violation detection algorithm includes:
predicting an illegal warning point in the current video frame according to the motion track of each pedestrian on the main crosswalk line;
judging whether the violation warning point is in the upper half area of the last detection frame of any identified motor vehicle;
and if so, determining that the violation is detected in the current video frame.
Preferably, the violation warning point is calculated by the following formulas:
[The original equation images are not reproduced here.]
where x_wp and y_wp denote the coordinates of the violation warning point, and c denotes the number of detection boxes in the pedestrian's tracking path sequence box list.
Preferably, before the step of predicting the violation warning point in the current video frame according to the motion trajectory of each pedestrian on the main crosswalk line, the method includes:
judging whether each pedestrian in the current video frame is on the main pedestrian crossing line;
the step of judging whether each pedestrian in the current video frame is on the main pedestrian crossing line comprises the following steps:
judging whether the central point of the lower edge of the pedestrian detection frame of the pedestrian is positioned in the pedestrian crosswalk detection frame;
and if the pedestrian is in the pedestrian crossing detection frame, judging that the pedestrian is on the pedestrian crossing line.
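The two geometric decisions above (is the pedestrian on the crosswalk; does the warning point fall in the upper half of a vehicle box) can be sketched as follows. Boxes are assumed to be (x1, y1, x2, y2) with the y axis pointing down; the function names are assumptions:

```python
def pedestrian_on_crosswalk(ped_box, crosswalk_box):
    """Check whether the bottom-edge center point of the pedestrian
    detection box lies inside the main crosswalk detection box."""
    cx = (ped_box[0] + ped_box[2]) / 2
    cy = ped_box[3]                       # bottom edge (image y grows down)
    x1, y1, x2, y2 = crosswalk_box
    return x1 <= cx <= x2 and y1 <= cy <= y2

def warning_point_in_upper_half(point, vehicle_box):
    """Check whether a predicted violation warning point falls in the
    upper half of a vehicle's last detection box."""
    x, y = point
    x1, y1, x2, y2 = vehicle_box
    return x1 <= x <= x2 and y1 <= y <= (y1 + y2) / 2
```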
Preferably, after the step of detecting whether a failure-to-yield-to-pedestrians violation is present in the current video frame by using a preset violation detection algorithm, the method further includes:
if a violation is detected, recognizing the license plate of the offending motor vehicle by using a preset license plate recognition algorithm;
and visually displaying the license plate, the violation and/or the time at which the violation occurred.
In another aspect, the present invention provides a device for detecting whether a motor vehicle yields to pedestrians, the device comprising:
the target element identification unit is used for carrying out target element identification on the collected video frames by using a trained target identification network to obtain a target element set;
a motion trail tracking unit, configured to track a motion trail of a target element in the target element set by using a preset target tracking algorithm; and
and a violation recognition unit, configured to detect, based on the identified main pedestrian crossing and the tracked motion trajectories of the target elements, whether a failure-to-yield-to-pedestrians violation is present in the current video frame by using a preset violation detection algorithm.
In another aspect, the present invention also provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method when executing the computer program.
In another aspect, the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method as described above.
In the invention, target element recognition is performed on collected video frames using a trained target recognition network to obtain a target element set; the motion trajectories of the target elements are tracked using a preset target tracking algorithm; and, based on the identified main pedestrian crossing and the tracked trajectories, a preset violation detection algorithm detects whether the current video frame contains a failure-to-yield-to-pedestrians violation, thereby improving the accuracy of detecting such violations.
Drawings
Fig. 1A is a flowchart illustrating an implementation of the method for detecting whether a motor vehicle yields to pedestrians according to the first embodiment of the present invention;
fig. 1B is a diagram illustrating an exemplary effect of performing target element identification on a captured video frame according to an embodiment of the present invention;
fig. 2 is a flowchart of an implementation of identifying a main pedestrian crossing line according to a second embodiment of the present invention;
fig. 3 is a flowchart of an implementation of a training Mask R-CNN network according to a third embodiment of the present invention;
FIG. 4 is a flowchart of an implementation of tracking a motion trajectory of a target element by using a maximum intersection ratio screening method according to a fourth embodiment of the present invention; and
fig. 5 is a flowchart of an implementation of tracking a motion trajectory of a target element by using a cosine distance comparison method according to a fifth embodiment of the present invention.
Fig. 6A is a flowchart illustrating an implementation of detecting an illegal action of a motor vehicle that does not give way to pedestrians by using an illegal warning point algorithm according to a sixth embodiment of the present invention;
fig. 6B is an exemplary diagram for detecting whether an illegal action that a motor vehicle does not give away a pedestrian is present in a current video frame by using an illegal warning point algorithm according to a sixth embodiment of the present invention;
fig. 7 is a schematic structural diagram of the device for detecting whether a motor vehicle yields to pedestrians according to the seventh embodiment of the present invention; and
fig. 8 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of specific implementations of the present invention is provided in conjunction with specific embodiments:
the first embodiment is as follows:
fig. 1A shows the implementation flow of the method for detecting whether a motor vehicle yields to pedestrians according to the first embodiment of the present invention; for convenience of description, only the parts relevant to the embodiment are shown, detailed as follows:
in step S101, a trained target recognition network is used to perform target element recognition on the acquired video frames, so as to obtain a target element set.
The embodiment of the invention is applicable to electronic devices, in particular devices such as a monitoring camera, a hard disk video recorder connected to a monitoring camera, a computer or a server. In the embodiment, video of an intersection is collected by a camera installed at the intersection; the target elements may include motor vehicles, pedestrians and traffic lights, and the target element set may include the category and detection box of each identified target element.
The target recognition network may be R-CNN, Fast R-CNN, Faster R-CNN, YOLO or SSD; preferably it is an SSD MobileNet-v2 network, to improve the speed of target element recognition. When the trained network is used to recognize target elements in a collected video frame, specifically, the frame may be propagated forward through the network, the feature map with the highest confidence screened out of each group of non-background anchor feature maps, the category and detection box of the corresponding target element obtained, and the target element set formed from these categories and detection boxes.
Before the trained network is used for recognition, it can be trained with a target recognition data set to obtain the trained target recognition network. The objects in the data set may include pedestrians, traffic lights and motor vehicles, and may be further divided into categories such as car, pedestrian, traffic light, truck, bus or motorcycle.
Before target element recognition is performed on a collected frame, pedestrian crossing detection may be performed. Since each camera is generally responsible for one area, and several pedestrian crossings may exist in that area, a main pedestrian crossing can be identified for each video. Because a camera's viewing angle is generally fixed, the position of the crossing within the frame does not change, so preferably the main pedestrian crossing is identified only when a preset identification condition is met; this corrects, without wasting computing resources, position changes of the main crossing caused by external factors such as camera movement or repainting of road markings. The identification condition includes that the current video frame is the first frame of the video, or that it corresponds to a preset pedestrian crossing identification period, and/or that the frame interval between the current frame and a previous frame on which identification was attempted but no crossing was found equals a preset first interval threshold. Further preferably, the first interval threshold is 10, which meets practical detection requirements while limiting computation.
Specifically, a pedestrian crossing identification period may be preset; main crossing identification is performed on the first frame of the video and then periodically according to the set period. During periodic identification, if the frame interval between the current frame and a previous frame on which identification was attempted but no crossing was found equals the first interval threshold (for example, 10), identification is performed again on the current frame. The identification period can be set by the user, for example once every 10 seconds or once every 30 frames, and can also be determined from the type of the camera, its cruise route and/or cruise period. For example, for a camera without a pan-tilt, such as a fixed bullet camera, a relatively long identification period can be set since its field of view usually does not change; for a camera with a pan-tilt, such as a dome camera, the field of view changes as the camera rotates, and if automatic cruising is enabled, the identification period can be determined from the cruise time and cruise path.
Fig. 1B is an exemplary diagram illustrating object element recognition of a captured video frame, where the object elements recognized in fig. 1B include a detection frame and confidence of a recognized pedestrian, a detection frame and confidence of a recognized motor vehicle, and a recognized main crosswalk detection frame.
In step S102, a preset target tracking algorithm is used to track the motion trajectory of the target element in the target element set.
In the embodiment of the present invention, the target tracking algorithm may be a re-identification algorithm for pedestrians and vehicles, such as a ReID algorithm, an SOTA algorithm, a prodvid algorithm, and the like. Preferably, for each target element identified in the current video frame, a preset maximum intersection-over-union screening method is used to track the motion trajectory of the target element in the target element set. With only linear-time computation, this avoids the huge computation required by the forward propagation of a re-identification network, significantly improving detection efficiency while maintaining detection accuracy. The specific implementation of tracking the motion trajectory of each target element using the preset maximum intersection-over-union screening method may refer to the related description of the fourth embodiment.
In step S103, based on the identified main pedestrian crossing line and the tracked motion trajectory of the target element, a preset violation detection algorithm is used to detect whether a violation that the motor vehicle does not give away to pedestrians exists in the current video frame.
In the embodiment of the present invention, the violation detection algorithm may be an algorithm that determines whether the paths of a pedestrian and a motor vehicle intersect, and is used to detect whether a violation in which a motor vehicle fails to yield to pedestrians exists in the current video frame. Preferably, the violation detection algorithm is a violation warning point algorithm, which detects such violations through violation warning points. The specific implementation of using the violation warning point algorithm to detect violations in which a motor vehicle fails to yield to pedestrians may refer to the description of the sixth embodiment, which is not repeated herein.
After detecting whether a violation in which a motor vehicle fails to yield to pedestrians exists in the current video frame, preferably, if such a violation is detected, a preset license plate recognition algorithm is used to recognize the license plate of the offending motor vehicle, and the license plate, the violation and/or the time of the violation are displayed visually, so as to present the violation in a visual manner. The preset license plate recognition algorithm may be a license plate recognition algorithm such as HyperLPR.
In the embodiment of the invention, a trained target recognition network is used to perform target element recognition on the collected video frames to obtain a target element set, a preset target tracking algorithm is used to track the motion trajectories of the target elements in the target element set, and, based on the identified main pedestrian crossing line and the tracked motion trajectories of the target elements, whether a violation in which a motor vehicle fails to yield to pedestrians exists in the current video frame is detected, so that such violations are detected automatically and efficiently.
example two:
fig. 2 shows an implementation flow of identifying a main pedestrian crossing line according to a second embodiment of the present invention, and for convenience of description, only the relevant portions of the second embodiment of the present invention are shown, which is detailed as follows:
in step S201, a trained segmentation network is used to perform pedestrian crossing line instance segmentation on the current video frame, so as to obtain at least one coarse positioning pedestrian crossing detection frame.
In the embodiment of the invention, a trained segmentation network is used to perform pedestrian crossing line instance segmentation on the current video frame to obtain at least one coarse positioning pedestrian crossing detection frame. The segmentation network can be a network such as R-CNN, Fast R-CNN or Faster R-CNN, and is preferably a Mask R-CNN network, which provides high-quality segmentation results when detecting objects in an image. The specific implementation of training the Mask R-CNN network may refer to the related description of the third embodiment.
Based on the prior knowledge that the camera viewing angle is fixed and road markings are relatively stable, this step is run once when the Mask R-CNN network is initialized, and all coarse positioning pedestrian crossing detection frames in the result are screened and stored. Further, the confidence of each coarse positioning crosswalk detection frame can be obtained. An example of the data format of the coarse positioning crosswalk detection boxes is as follows:
crossdata=[{"roi":{"rois":[400,300,240,50],"class_ids":1},"score":0.85,"index":0},{"roi":{"rois":[800,100,1000,150],"class_ids":1},"score":0.65,"index":1}]
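A minimal sketch of consuming this data format; the field meanings (`score` as confidence, `rois` as box coordinates) are taken from the example above, and the helper name is an assumption.

```python
# The example data format from above: a list of coarse-positioning boxes,
# each with a region of interest, a confidence score, and an index.
crossdata = [
    {"roi": {"rois": [400, 300, 240, 50], "class_ids": 1}, "score": 0.85, "index": 0},
    {"roi": {"rois": [800, 100, 1000, 150], "class_ids": 1}, "score": 0.65, "index": 1},
]

def best_by_confidence(boxes):
    """Return the coarse-positioning crosswalk box with the highest confidence."""
    return max(boxes, key=lambda b: b["score"])

best = best_by_confidence(crossdata)
```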
in step S202, the main crosswalk line is finely positioned based on all the rough positioning crosswalk detection frames.
In the embodiment of the present invention, all the coarse positioning pedestrian crossing detection frames may be sorted according to the confidence of each frame, and the detection frame with the highest confidence may be used as the main crosswalk line detection frame.
When a preset condition scoring algorithm is used to screen the main pedestrian crossing line from all the coarse positioning pedestrian crossing detection frames, preferably, each coarse positioning pedestrian crossing detection frame is scored by the condition scoring algorithm to obtain its condition total score; the coarse positioning pedestrian crossing detection frame with the highest condition total score is then taken as the main crosswalk line detection frame. The formula used by the condition scoring algorithm is as follows:
score_i = conf_i + s1·I(top_i/height_IMG ∈ (R1_t/h, R2_t/h)) + s2·I(height_i/height_IMG ∈ (R1_w/h, R2_w/h))

wherein score_i represents the condition total score of the ith coarse positioning crosswalk detection frame; left_i and top_i respectively represent the horizontal and vertical distances of the upper left corner of the ith coarse positioning pedestrian crossing detection box relative to the upper left corner of the current video frame; width_i and height_i respectively represent the width and the height of the ith coarse positioning pedestrian crossing detection frame; width_IMG and height_IMG respectively represent the width and height of the video frame; conf_i represents the confidence of the ith coarse positioning crosswalk detection frame; I(·) is an indicator function; (R1_t/h, R2_t/h) denotes the first preset interval, (R1_w/h, R2_w/h) denotes the second preset interval; s1 denotes the first preset fixed value, and s2 denotes the second preset fixed value.
Specifically, if the ratio of the distance from the upper edge of the coarse positioning pedestrian crossing detection frame to the upper edge of the video frame, relative to the height of the video frame, falls within the first preset interval, the condition total score of the frame is its confidence plus the first preset fixed value; if the ratio of the height of the frame to the height of the video frame falls within the second preset interval, the condition total score is its confidence plus the second preset fixed value; and if both conditions are met simultaneously, the condition total score is its confidence plus both the first and second preset fixed values.
Further preferably, the first preset interval is (0.4, 0.6), the first preset fixed value is 0.4, the second preset interval is (0.07, 0.14), and the second preset fixed value is 0.14; these values are set according to experimental results, further improving the accuracy of main crosswalk positioning.
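The condition scoring rule above can be sketched as follows; the default interval and bonus values are the preferred values just stated, and the function signature is an assumption for illustration.

```python
def condition_score(top, box_height, frame_height, conf,
                    r1_th=0.4, r2_th=0.6, s1=0.4,
                    r1_wh=0.07, r2_wh=0.14, s2=0.14):
    """Condition total score of one coarse-positioning crosswalk box.

    Starts from the box's confidence, adds s1 when the box's vertical
    position (top edge / frame height) lies in the first preset
    interval, and adds s2 when the box's relative height lies in the
    second preset interval.
    """
    score = conf
    if r1_th < top / frame_height < r2_th:         # vertical-position bonus
        score += s1
    if r1_wh < box_height / frame_height < r2_wh:  # relative-height bonus
        score += s2
    return score
```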
After the coarse positioning pedestrian crossing detection frame with the highest condition total score is obtained, preferably, it is determined whether the ratio of the width of that frame to the width of the video frame is smaller than a preset width ratio threshold. If so, the frame is expanded toward both sides at equal intervals until its width ratio relative to the video frame reaches the width ratio threshold, and the expanded frame is used as the main crosswalk line detection frame; if not, the frame is used directly as the main crosswalk line detection frame. Expanding the main crosswalk detection frame facilitates more accurate subsequent prediction of warning points from pedestrians on the crosswalk line. Further preferably, the width ratio threshold is 0.85, set according to experimental results, further improving the accuracy of violation warning point prediction.
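The symmetric expansion step can be sketched as follows, assuming boxes described by a left coordinate and a width; the clamping to frame bounds is an assumption not spelled out in the source.

```python
def widen_box(x, width, frame_width, width_ratio_threshold=0.85):
    """Expand a box symmetrically until width / frame_width reaches the
    threshold; boxes already wide enough are returned unchanged."""
    target = width_ratio_threshold * frame_width
    if width >= target:
        return x, width              # already wide enough: keep as-is
    pad = (target - width) / 2.0     # equal expansion toward both sides
    new_x = max(0.0, x - pad)        # clamp to the left frame edge
    new_width = min(frame_width - new_x, width + 2 * pad)
    return new_x, new_width
```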
In the embodiment of the invention, a trained segmentation network is used to perform pedestrian crossing line instance segmentation on the current video frame to obtain at least one coarse positioning pedestrian crossing detection frame, and the main crosswalk line is finely positioned based on all the coarse positioning pedestrian crossing detection frames: the frames are sorted according to their confidences, and a preset condition scoring algorithm is used to screen the main pedestrian crossing line from among them, so that the main crosswalk line is screened more accurately.
Example three:
fig. 3 shows an implementation flow of a training Mask R-CNN network provided in the third embodiment of the present invention, and for convenience of description, only the parts related to the third embodiment of the present invention are shown, which are detailed as follows:
in step S301, a lane line data set is acquired.
In the embodiment of the present invention, the lane lines in the lane line data set include pedestrian crossing lines, and the lane line data set may adopt a lane line data set such as BDD 100K.
In step S302, if the lane line mark in the lane line data set only includes the side contour of the lane line, the lane line is preprocessed to obtain a preprocessed lane line data set, where the mark of the preprocessed lane line includes the segmentation mask.
In the embodiment of the invention, the mark mask used for Mask R-CNN training needs to contain the pixels inside an object. Therefore, when training the Mask R-CNN network, if the lane line marks in the lane line data set only contain the side contours of lane lines, the lane lines are preprocessed to obtain a preprocessed lane line data set, which is then input into the Mask R-CNN network for training to obtain the trained Mask R-CNN network, meeting the training requirements of Mask R-CNN.
When preprocessing a lane line in the lane line data set, preferably, if the mark of the lane line contains only one line, a predetermined pixel width is diffused to both sides with the line as an axis; if the mark of the lane line contains two straight lines, the head ends and the tail ends of the two lines are connected to form a closed quadrilateral region, and the quadrilateral region is filled according to the type of the lane line, so that the mark mask of the preprocessed lane line contains the pixels inside the object.
Further preferably, the diffusion width is 5 pixels, set according to actual test results.
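The two preprocessing cases can be sketched geometrically as follows; the function names are assumptions, and the single-line case assumes a roughly vertical line so horizontal diffusion is a reasonable simplification (a real implementation would diffuse perpendicular to the line and rasterize the polygon into a mask).

```python
def lines_to_quad(line_a, line_b):
    """Connect the head and tail ends of two edge lines into a closed
    quadrilateral (vertex list) whose interior can then be filled to
    produce the segmentation mask Mask R-CNN training expects."""
    return [line_a[0], line_a[-1], line_b[-1], line_b[0]]

def single_line_to_band(line, half_width=2.5):
    """Diffuse a single-line mark half_width pixels to each side
    (5 px total, per the preset width above), for a line given as
    (x, y) points; returns the band's polygon vertices."""
    left = [(x - half_width, y) for x, y in line]
    right = [(x + half_width, y) for x, y in reversed(line)]
    return left + right
```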
In step S303, the preprocessed lane line data set is input into a Mask R-CNN network for training, so as to obtain a trained Mask R-CNN network.
Example four:
fig. 4 shows an implementation flow of tracking a motion trajectory of a target element by using a maximum intersection ratio screening method according to a fourth embodiment of the present invention, and for convenience of description, only a part related to the fourth embodiment of the present invention is shown, where the part includes:
in step S401, a first sequence box list set is obtained according to all current tracking path sequence box lists.
In the embodiment of the present invention, the first sequence box list set includes all tracking path sequence box lists of the same category as the target element. Each tracking path sequence box list contains the category of the tracked target element and the detection boxes of that element arranged in time order, and each list corresponds to the motion trajectory of one target element; in other words, all the detection boxes in each tracking path sequence box list constitute the motion trajectory of one target element.
In a target detection algorithm, a target may be missed in a certain frame, so that no new box is added to the bounding box sequence of that target at the corresponding time; if the target is detected again in a subsequent frame, tracking of the same target should not be interrupted. It is also possible that the target has genuinely disappeared from the scene, so that its list no longer gains valid boxes. Therefore, preferably, before the first sequence box list set is obtained from all current tracking path sequence box lists, the second frame interval between the current video frame and the video frame corresponding to the last detection box in each current tracking path sequence box list is obtained, and any tracking path sequence box list whose second frame interval is not less than a preset second interval threshold is discarded. Specifically, for each tracking path sequence box list, the frame number of the video frame in which the last detection box was detected may be additionally recorded; this frame is called the last valid frame, a valid frame being a video frame in which the target element was successfully detected. A second interval threshold is set, and the frame number difference between the current video frame and the last valid frame of each tracking path sequence box list is calculated. If this difference is within the threshold, the tracking path sequence box list is retained; otherwise, the target is considered to have left the video, and the list is discarded.
Preferably, the second interval threshold is 5, set according to experimental results, further improving detection accuracy.
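The stale-track pruning just described can be sketched as follows; the dictionary-based track representation and field name are assumptions for illustration.

```python
def prune_stale_tracks(tracks, current_frame, second_interval_threshold=5):
    """Drop tracking path sequence box lists whose last valid frame is
    too old relative to the current frame.

    Each track is assumed to carry a `last_valid_frame` field recording
    the frame number at which its last detection box was added.
    """
    return [t for t in tracks
            if current_frame - t["last_valid_frame"] < second_interval_threshold]
```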
In step S402, an intersection ratio of the detection box of the target element and the last detection box of each tracking path sequence box list in the first sequence box list set is calculated.
In the embodiment of the invention, the intersection-over-union (IoU) calculation formula is:

IoU_j = Area(box_cur ∩ box_j) / Area(box_cur ∪ box_j), j = 1, 2, …, q

where box_cur represents the detection box of the target element, q represents the total number of tracking path sequence box lists in the first sequence box list set, box_j represents the last detection box of the jth tracking path sequence box list, and Area represents the area of a detection frame.
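A minimal sketch of this IoU computation, assuming boxes are given in (x, y, w, h) form with (x, y) the upper-left corner:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extents along each axis, clamped at zero when disjoint.
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```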
In step S403, it is determined whether there is an intersection target frame, if so, step S404 is executed, otherwise, step S405 is executed.
In the embodiment of the present invention, an intersection target frame is a last detection frame whose intersection-over-union with the detection box of the target element is greater than a preset intersection ratio threshold; preferably, the intersection ratio threshold is 0.75.
When two or more objects of the same class move in the same or similar directions, the latest box of one object may be too close to the last valid frame of another object, and the maximum intersection-over-union cannot effectively distinguish their trajectories. Therefore, after determining whether a last detection frame with intersection-over-union greater than the preset threshold exists, preferably, if such an intersection target frame exists, a logarithm check is performed on the maximum and second maximum intersection-over-union values; if the check passes, step S404 is performed, and otherwise another method, such as similarity comparison, is used to further track the motion trajectory of the target element.
The formula used for the logarithm check is:

(log2 IoU_max − log2 IoU_second_max) < ε

where IoU_max denotes the maximum intersection-over-union, IoU_second_max denotes the second maximum intersection-over-union, and ε is a constant representing a preset logarithm check threshold.
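A sketch of this check, reading the inequality above as flagging the association as ambiguous when it holds (so that the two candidates are too close to separate by IoU alone); the value of ε is left preset in the source, and 0.5 here is only illustrative.

```python
import math

def log_check_ambiguous(iou_max, iou_second_max, eps=0.5):
    """True when log2(IoU_max) - log2(IoU_second_max) < eps, i.e. the
    top two IoU candidates are too close to distinguish reliably and a
    similarity-based method should be used instead."""
    return (math.log2(iou_max) - math.log2(iou_second_max)) < eps
```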
Preferably, in that case the motion trajectory of the target element is tracked using a preset cosine distance comparison method; the specific implementation of tracking the motion trajectory of the target element by the preset cosine distance comparison method may refer to the related description of the fifth embodiment.
In step S404, the motion trajectory of the target element is determined, and the detection frame of the target element is added to the tracking path sequence frame list where the last detection frame with the largest intersection ratio is located.
In step S405, the target element is determined to be a newly appeared element, and a tracking path sequence frame list is created according to the detection frame and the category of the target element.
Example five:
based on the fourth embodiment, fig. 5 shows an implementation process of tracking a motion trajectory of a target element by using a cosine distance comparison method according to the fifth embodiment of the present invention, and for convenience of description, only a part related to the embodiment of the present invention is shown, where the implementation process includes:
in step S501, the latest feature map of the target element in the current video frame is obtained, and the first feature map and the second feature map corresponding to the last detection box and the second last detection box of each tracking path sequence box list in the second sequence box list set are obtained.
In the embodiment of the present invention, the feature map of the target element currently being tracked in the current video frame is obtained; for convenience of description, the feature map of this target element to be tracked is called the latest feature map. The feature maps corresponding to the last detection box and the second-to-last detection box of each tracking path sequence box list in the second sequence box list set are also obtained; for convenience of description, these are called the first feature map and the second feature map, respectively, both being feature maps from the corresponding valid frames. The second sequence box list set includes all tracking path sequence box lists containing an intersection target frame; in other words, it includes all tracking path sequence box lists whose last detection box has an intersection-over-union with the target element's detection box greater than the preset intersection ratio threshold.
In step S502, normalization processing is performed on the latest feature map and all the first feature map and the second feature map to obtain feature maps of uniform size.
In step S503, the first cosine distance and the second cosine distance between the normalized latest feature map and each of the first feature map and each of the second feature map are calculated, respectively.
In the embodiment of the present invention, the cosine distance calculation formula is:

cos(θ)_k1 = (A · B_k1) / (‖A‖ ‖B_k1‖), cos(θ)_k2 = (A · B_k2) / (‖A‖ ‖B_k2‖), k = 1, 2, …, t

where t represents the number of tracking path sequence box lists in the second sequence box list set, and A represents the latest feature map after normalization; B_k1 is the normalized first feature map corresponding to the last detection box of the kth tracking path sequence box list, that is, the first feature map after the kth normalization; B_k2 is the normalized second feature map corresponding to the second-to-last detection box of the kth tracking path sequence box list, that is, the second feature map after the kth normalization; cos(θ)_k1 represents the cosine distance between the latest feature map and the kth normalized first feature map, and cos(θ)_k2 represents the cosine distance between the latest feature map and the kth normalized second feature map.
In step S504, logarithms are respectively taken for each first cosine distance and each second cosine distance to obtain corresponding first feature similarity factor and second feature similarity factor.
In the embodiment of the present invention, the feature similarity factor calculation formula is:

F_k1 = log2(cos(θ)_k1), F_k2 = log2(cos(θ)_k2)

where F_k1 represents the kth first feature similarity factor and F_k2 represents the kth second feature similarity factor.
In step S505, a linear weighting calculation is performed on each first feature similarity factor and the second feature similarity factor corresponding to each first feature similarity factor, so as to obtain a minimum weighted feature similarity factor.
In the embodiment of the present invention, the minimum weighted feature similarity factor calculation formula is:

F_min = min over k = 1, …, t of (a·F_k1 + b·F_k2)

where F_k1 and F_k2 denote the kth first and second feature similarity factors, a represents the weight of the first feature similarity factor, b represents the weight of the second feature similarity factor, and the weight of the first feature similarity factor is greater than that of the second feature similarity factor, i.e., a > b.
In step S506, the motion trajectory corresponding to the tracking path sequence frame list corresponding to the minimum weighted feature similarity factor is taken as the motion trajectory of the target element, and the detection frame of the target element is added to the tracking path sequence frame list corresponding to the minimum weighted feature similarity factor.
In the embodiment of the present invention, the motion trajectory formed by the tracking path sequence box list corresponding to the minimum weighted feature similarity factor is the motion trajectory of the target element.
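The per-track scoring of steps S503 through S505 can be sketched as follows, under stated assumptions: flattened vectors stand in for the normalized feature maps, logarithms are taken base 2, and the weights a = 0.7, b = 0.3 are illustrative only (the source requires only a > b).

```python
import math

def cosine(u, v):
    """Cosine similarity between two flattened feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def weighted_similarity_factor(latest, first_map, second_map, a=0.7, b=0.3):
    """Weighted feature similarity factor a*log2(cos th1) + b*log2(cos th2)
    for one candidate track: th1 compares the latest feature map with the
    track's last-box feature map, th2 with its second-to-last."""
    f1 = math.log2(cosine(latest, first_map))   # first feature similarity factor
    f2 = math.log2(cosine(latest, second_map))  # second feature similarity factor
    return a * f1 + b * f2
```

The track whose weighted factor is selected by the minimum in step S506 then receives the target element's detection box.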
EXAMPLE six:
Fig. 6A shows a flow of implementing the violation behavior detection method using the violation warning point algorithm to detect that a motor vehicle does not give away a pedestrian according to a sixth embodiment of the present invention, and for convenience of description, only the parts related to the embodiment of the present invention are shown, where the steps include:
in step S601, an illegal warning point in the current video frame is predicted according to the motion trajectory of each pedestrian on the main crosswalk line.
In the embodiment of the invention, the position of each pedestrian in the next video frame can be predicted based on that pedestrian's motion trajectory, and the violation warning point is determined from the predicted position. Further, the position of the pedestrian in the next frame can be predicted in combination with a time factor, that is, detection boxes closer to the current frame have greater effect on the prediction; further, each prediction factor can be amplified by taking logarithms, so that the effect of the time factor on the prediction result is more pronounced. Finally, the calculated result point (x_wp, y_wp) is set as the violation warning point. Thus, preferably, the violation warning point is calculated by:
[The warning point equations appear in the source only as images and cannot be fully recovered; per the description above, x_wp and y_wp are logarithm-weighted combinations of the positions of the pedestrian's detection boxes, with boxes closer to the current frame weighted more heavily.]

where x_wp and y_wp respectively represent the coordinates of the violation warning point, and c represents the number of detection boxes in the pedestrian's tracking path sequence box list.
In a specific implementation, one violation warning point can be predicted from the motion trajectory of each pedestrian; that is, multiple violation warning points may exist in the current video frame.
Before the violation warning point in the current video frame is predicted from the motion trajectory of each pedestrian on the main crosswalk line, preferably, it is determined whether each pedestrian in the current video frame is on the main crosswalk line. This determination can be made from the intersection-over-union of the pedestrian detection frame and the pedestrian crossing detection frame: if the intersection-over-union is greater than zero, a newly appearing pedestrian is determined to be on the pedestrian crossing line. Preferably, it is instead determined whether the center point of the lower edge of the pedestrian detection frame lies inside the pedestrian crossing detection frame, further reducing the amount of computation in the determination. The formula used to judge whether the center point of the lower edge of the pedestrian detection frame is inside the pedestrian crossing line detection frame is:
f(x) = 0, if left_pc ≤ x_p + width_p/2 ≤ left_pc + width_pc and top_pc ≤ y_p + height_p ≤ top_pc + height_pc; f(x) = 1, otherwise

where (x_p, y_p) represents the coordinates of the upper left corner of the pedestrian detection frame; width_p and height_p respectively represent the width and height of the pedestrian detection frame; left_pc and top_pc respectively represent the horizontal and vertical distances of the upper left corner of the main crosswalk detection frame relative to the upper left corner of the current video frame; and width_pc and height_pc respectively represent the width and height of the main crosswalk detection frame. The value of f(x) is 0 or 1: when f(x) = 0, the center point of the lower edge of the pedestrian detection frame is inside the pedestrian crossing line detection frame, and when f(x) = 1, it is not.
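The lower-edge center check can be sketched as follows, assuming boxes in (x, y, w, h) form with (x, y) the upper-left corner and the y axis growing downward; a boolean return replaces the 0/1 convention of the formula.

```python
def lower_center_in_crosswalk(ped_box, cross_box):
    """True when the center of the pedestrian box's lower edge lies
    inside the main crosswalk detection box."""
    px, py, pw, ph = ped_box
    cx, cy, cw, ch = cross_box
    fx = px + pw / 2.0   # lower-edge center, x coordinate
    fy = py + ph         # lower-edge center, y coordinate (bottom of box)
    return cx <= fx <= cx + cw and cy <= fy <= cy + ch
```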
In step S602, it is determined whether or not the violation warning point is in the upper half area of the last detection frame of any of the recognized vehicles, and if so, step S603 is executed, and if not, step S604 is executed.
In step S603, it is determined that a violation is detected in the current video frame.
In step S604, it is determined that no violation is detected in the current video frame.
Fig. 6B shows an example of using the violation warning point algorithm to detect whether a violation in which a motor vehicle fails to yield to a pedestrian exists in the current video frame. In Fig. 6B, the violation warning point is not located in the upper half area of the last detection box of the motor vehicle; that is, no such violation exists in Fig. 6B.
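The decision of steps S602 through S604 can be sketched as follows; the box format (x, y, w, h) with an upper-left origin and downward-growing y axis is an assumption, so the "upper half" is the half of the box with the smaller y values.

```python
def violation_detected(warning_point, vehicle_boxes):
    """True if the violation warning point falls in the upper half of
    any recognized vehicle's last detection box (x, y, w, h)."""
    wx, wy = warning_point
    for x, y, w, h in vehicle_boxes:
        # Upper half: full box width, but only the top half of its height.
        if x <= wx <= x + w and y <= wy <= y + h / 2.0:
            return True
    return False
```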
In the embodiment of the invention, whether a violation in which a motor vehicle fails to yield to pedestrians exists in the current video frame is detected through the predicted violation warning points. Compared with algorithms that judge whether the advancing paths of the motor vehicle and the pedestrian intersect, this method is also applicable to cases where the pedestrian is stationary or has already passed, further improving violation recognition accuracy and scene generalization capability.
Example seven:
fig. 7 shows the structure of a device for detecting whether a motor vehicle yields to pedestrians according to a seventh embodiment of the present invention; for convenience of description, only the parts related to the embodiment of the present invention are shown, including:
a target element recognition unit 71, configured to perform target element recognition on the acquired video frame by using a trained target recognition network to obtain a target element set;
a motion trajectory tracking unit 72, configured to track a motion trajectory of a target element in the target element set by using a preset target tracking algorithm; and
and a violation behavior recognition unit 73, configured to detect, based on the identified main pedestrian crossing line and the tracked motion trajectory of the target element, whether a violation in which a motor vehicle fails to yield to pedestrians exists in the current video frame, using a preset violation detection algorithm.
Preferably, the apparatus comprises:
a pedestrian crossing line recognition unit, configured to recognize the main pedestrian crossing line if a preset pedestrian crossing line recognition condition is met, where the recognition condition includes: the current video frame is the first frame of the video, or the current video frame corresponds to a preset pedestrian crossing line recognition period, and/or the frame interval between the current video frame and a previous video frame on which recognition was performed but no pedestrian crossing line was identified equals a preset first interval threshold.
Preferably, the crosswalk recognition unit further includes:
the pedestrian crossing segmentation unit is used for segmenting a pedestrian crossing line example of the current video frame by using a trained segmentation network to obtain at least one coarse positioning pedestrian crossing detection frame; and
and the fine positioning unit is used for finely positioning the main crosswalk line based on all the coarse positioning crosswalk detection frames.
Preferably, the fine positioning unit further comprises:
and the fine positioning subunit is used for screening the main pedestrian crossing line from all the coarse positioning crosswalk detection frames by adopting a preset condition scoring algorithm based on the confidence of each coarse positioning crosswalk detection frame, wherein the condition scoring algorithm takes into account the position of each coarse positioning crosswalk detection frame in the video frame and/or its proportion relative to the video frame.
Preferably, the fine positioning subunit further comprises:
the condition total score calculating unit is used for scoring each coarse positioning pedestrian crossing detection frame by adopting a preset condition scoring algorithm to obtain a condition total score of each coarse positioning pedestrian crossing detection frame; and
and the main crosswalk determining unit is used for acquiring the coarse positioning crosswalk detection frame with the highest total condition score and taking the coarse positioning crosswalk detection frame with the highest total condition score as a main crosswalk line detection frame.
Preferably, the main crosswalk determining unit further includes:
the width ratio judging unit is used for judging whether the width ratio of the coarse positioning pedestrian crossing detection frame with the highest total score of the conditions to the video frame is smaller than a preset width ratio threshold value or not;
the first determining subunit is used for expanding the coarse positioning pedestrian crossing detection frame with the highest total score of conditions to two sides at equal intervals if the width ratio is smaller than the width ratio threshold value, so that the width ratio of the coarse positioning pedestrian crossing detection frame with the highest total score of conditions to the video frame reaches the width ratio threshold value, and taking the expanded coarse positioning pedestrian crossing detection frame with the highest total score of conditions as a main pedestrian crossing line detection frame; and
and the second determining subunit is used for taking the coarse positioning pedestrian crossing detection frame with the highest total condition score as the main pedestrian crossing line detection frame if the width ratio is not smaller than the width ratio threshold.
Preferably, the width ratio threshold is 0.85.
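The width-expansion step of the first determining subunit can be sketched as follows; the (x1, y1, x2, y2) box representation, the helper name, and the clamping to the frame boundary are illustrative assumptions not stated in the description:

```python
def expand_to_width_ratio(box, frame_w, ratio_thresh=0.85):
    """Expand a crosswalk box (x1, y1, x2, y2) equally to both sides until
    its width ratio relative to the frame reaches ratio_thresh."""
    x1, y1, x2, y2 = box
    if (x2 - x1) / frame_w >= ratio_thresh:
        return box  # already wide enough, keep as-is
    target_w = ratio_thresh * frame_w
    pad = (target_w - (x2 - x1)) / 2.0  # equal padding on each side
    # clamp the expanded box to the frame boundary
    new_x1 = max(0.0, x1 - pad)
    new_x2 = min(float(frame_w), x2 + pad)
    return (new_x1, y1, new_x2, y2)
```

For a 1000-pixel-wide frame and a box of width 200 centred at x = 500, the sketch pads 325 pixels to each side so the expanded width reaches 850 pixels (0.85 of the frame).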
Preferably, the target element set includes a category and a detection box of each target element, the target tracking algorithm is a maximum intersection ratio screening method, and the motion trajectory tracking unit further includes:
a first set obtaining unit, configured to obtain, for each target element in the target element set, a first sequence frame list set according to all current tracking path sequence frame lists, where the first sequence frame list set includes all tracking path sequence frame lists that are the same as categories of the target element, each tracking path sequence frame list includes categories of tracked target elements and detection frames of the target element arranged in time sequence, and each tracking path sequence frame list corresponds to a motion trajectory of one target element;
the intersection ratio calculation unit is used for calculating the intersection ratio of the detection frame of the target element and the last detection frame of each tracking path sequence frame list in the first sequence frame list set;
and the first judging unit is used for judging whether an intersection ratio target frame exists, wherein the intersection ratio target frame is a last detection frame whose intersection ratio is larger than a preset intersection ratio threshold;
the track determining unit is used for determining the motion track of the target element if the intersection ratio target frame exists, and adding the detection frame of the target element into the tracking path sequence frame list where the last detection frame with the maximum intersection ratio exists;
and the new element finding unit is used for determining that the target element is a newly appearing element if the intersection ratio target frame does not exist, and establishing a tracking path sequence frame list according to the detection frame and the category of the target element.
Preferably, the intersection ratio threshold is 0.75.
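The maximum intersection ratio screening described above can be sketched in Python as follows; the track representation (a dict holding a category and a time-ordered box list) and the function names are illustrative assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_track(det_box, category, tracks, iou_thresh=0.75):
    """Greedy max-IoU matching: append det_box to the same-category track
    whose last box overlaps it most, or start a new track (new element)."""
    candidates = [t for t in tracks if t["category"] == category]
    scored = [(iou(det_box, t["boxes"][-1]), t) for t in candidates]
    scored = [(s, t) for s, t in scored if s > iou_thresh]
    if scored:
        best = max(scored, key=lambda st: st[0])[1]
        best["boxes"].append(det_box)  # extend the existing trajectory
        return best
    new_track = {"category": category, "boxes": [det_box]}
    tracks.append(new_track)  # newly appearing element
    return new_track
```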
Preferably, the motion trail tracking unit further includes:
a second frame interval acquiring unit, configured to acquire a second frame interval between the current video frame and a video frame corresponding to a last detection frame in each current tracking path sequence frame list;
and the list discarding unit is used for discarding the tracking path sequence frame list of which the second frame interval is not less than the preset second interval threshold.
Preferably, the second interval threshold is 5.
Preferably, the motion trail tracking unit further includes:
the intersection ratio checking unit is used for performing a logarithm test on the maximum intersection ratio and the second largest intersection ratio if the intersection ratio target frame exists;
the inspection result determining unit is used for determining the motion trail of the target element if the target element passes the logarithm inspection; and
and the tracking subunit is used for tracking the motion trail of the target element by adopting a preset cosine distance comparison method if the logarithm test is not passed.
Preferably, the tracking subunit further comprises:
the characteristic diagram acquiring unit is used for acquiring the latest characteristic diagram of the target element in the current video frame, and a first characteristic diagram and a second characteristic diagram which respectively correspond to the last detection frame and the second last detection frame of each tracking path sequence frame list in a second sequence frame list set, wherein the second sequence frame list set comprises all the tracking path sequence frame lists in which an intersection ratio target frame is located;
the normalization processing unit is used for performing normalization processing on the latest feature map and all the first feature maps and the second feature maps to obtain feature maps with uniform sizes;
the cosine distance calculating unit is used for calculating the first cosine distance and the second cosine distance between the latest feature map after normalization processing and each first feature map and each second feature map respectively;
the similarity factor calculation unit is used for respectively carrying out logarithm on each first cosine distance and each second cosine distance to obtain a corresponding first characteristic similarity factor and a corresponding second characteristic similarity factor;
the weighting calculation unit is used for carrying out linear weighting calculation on each first characteristic similarity factor and a second characteristic similarity factor corresponding to each first characteristic similarity factor to obtain a minimum weighting characteristic similarity factor; and
and the track determining subunit is configured to use a motion track corresponding to the tracking path sequence frame list corresponding to the minimum weighted feature similarity factor as the motion track of the target element, and add the detection frame of the target element to the tracking path sequence frame list corresponding to the minimum weighted feature similarity factor.
Preferably, the preset violation detection algorithm is a violation warning point algorithm, and the violation identification unit further includes:
the warning point prediction unit is used for predicting violation warning points in the current video frame according to the motion trail of each pedestrian on the main pedestrian crossing line;
the position relation judging unit is used for judging whether the violation warning point is in the upper half area of the last detection frame of any identified motor vehicle;
and the violation behavior determining unit is used for determining that the violation behavior is detected in the current video frame if the violation warning point is in the upper half area of any motor vehicle detection frame in the current video frame.
Preferably, the violation behavior recognition unit further includes:
the pedestrian position judging unit is used for judging whether each pedestrian in the current video frame is positioned on the main crosswalk line or not;
the pedestrian position determination unit further includes:
judging whether the central point of the lower edge of the pedestrian detection frame of the pedestrian is positioned in the pedestrian crosswalk detection frame; and
and the pedestrian position determining unit is used for judging that the pedestrian is positioned on the pedestrian crossing line if the pedestrian is positioned in the pedestrian crossing detection frame.
In the embodiment of the present invention, each unit of the device for detecting whether a motor vehicle gives way to pedestrians may be implemented by a corresponding hardware or software unit, and each unit may be an independent software or hardware unit, or may be integrated into one software or hardware unit, which is not limited herein. For the specific implementation of each unit of the device for detecting whether a motor vehicle gives way to pedestrians, reference may be made to the description of the foregoing method embodiments, which is not repeated herein.
Example eight:
fig. 8 shows a structure of an electronic device according to an eighth embodiment of the present invention, and only the parts related to the embodiment of the present invention are shown for convenience of description.
The electronic device 8 of an embodiment of the invention comprises a processor 80, a memory 81 and a computer program 82 stored in the memory 81 and executable on the processor 80. The processor 80, when executing the computer program 82, implements the steps in the above-described method embodiments, such as the steps S101 to S103 shown in fig. 1. Alternatively, the processor 80, when executing the computer program 82, implements the functions of the units in the above-described apparatus embodiments, such as the functions of the units 71 to 73 shown in fig. 7.
Example nine:
in an embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps in the above-described method embodiment, for example, steps S101 to S103 shown in fig. 1. Alternatively, the computer program may be adapted to perform the functions of the units of the above-described device embodiments, such as the functions of the units 71 to 73 of fig. 7, when executed by the processor.
The computer readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, a recording medium, such as a ROM/RAM, a magnetic disk, an optical disk, a flash memory, or the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A method for detecting whether a motor vehicle gives way to pedestrians, comprising the steps of:
carrying out target element recognition on the collected video frames by using a trained target recognition network to obtain a target element set;
tracking the motion trail of the target element in the target element set by using a preset target tracking algorithm;
and detecting whether an illegal behavior that the motor vehicle does not give way to pedestrians exists in the current video frame by using a preset illegal behavior detection algorithm based on the identified main pedestrian crossing line and the tracked motion trail of the target element.
2. The method of claim 1, wherein the step of performing object element recognition on the captured video frames using the trained object recognition network is preceded by the step of:
if a preset pedestrian crossing line identification condition is met, identifying the main pedestrian crossing line, wherein the pedestrian crossing line identification condition comprises that the current video frame is the first frame of the video, or that the current video frame is a video frame corresponding to a preset pedestrian crossing line identification period, and/or that the frame interval between the current video frame and the previous video frame on which identification was performed but no pedestrian crossing line was identified is equal to a preset first interval threshold;
the step of identifying the main pedestrian crossing line further comprises:
performing pedestrian crossing line instance segmentation on the current video frame by using a trained segmentation network to obtain at least one coarse positioning crosswalk detection frame;
finely positioning the main crosswalk line based on all the coarse positioning crosswalk detection frames;
the segmentation network is a Mask R-CNN network, and the step of training the Mask R-CNN network comprises the following steps:
acquiring a lane line data set;
if the lane line marks in the lane line data set only contain the side profiles of the lane lines, preprocessing the lane lines to obtain a preprocessed lane line data set, wherein the lane lines comprise pedestrian crossing lines, and the preprocessed lane line marks contain segmentation masks;
inputting the preprocessed lane line data set into the Mask R-CNN network for training to obtain a trained Mask R-CNN network;
the step of preprocessing the lane lines in the lane line data set further includes:
if the mark of the lane line only comprises one line, diffusing a preset pixel width to each side by taking the line as an axis, wherein the preset pixel width is 5 pixels;
and if the mark of the lane line comprises two straight lines, respectively connecting the head ends and the tail ends of the two straight lines to form a quadrilateral closed area, and filling the quadrilateral closed area according to the type of the lane line.
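The lane line preprocessing of claim 2 can be sketched as follows; offsetting only in the horizontal direction is a simplification that assumes roughly vertical line annotations, and both helper names are illustrative:

```python
def line_to_mask_polygon(line, half_width=5):
    """Turn a single polyline annotation into a closed polygon by
    offsetting each point half_width pixels to both sides of the axis
    (the patent diffuses 5 pixels to each side; the purely horizontal
    offset here is a simplifying assumption)."""
    left = [(x - half_width, y) for x, y in line]
    right = [(x + half_width, y) for x, y in reversed(line)]
    return left + right  # closed polygon usable as a segmentation mask

def two_lines_to_quad(line_a, line_b):
    """Join the heads and tails of two boundary lines into a quadrilateral
    closed area that can be filled according to the lane-line type."""
    return [line_a[0], line_a[-1], line_b[-1], line_b[0]]
```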
3. The method of claim 2, wherein said step of fine positioning said main crosswalk line based on all of said coarse positioning crosswalk detection boxes comprises:
screening the main pedestrian crossing line from all the coarse positioning crosswalk detection frames by adopting a preset condition scoring algorithm based on the confidence of each coarse positioning crosswalk detection frame, wherein the condition scoring algorithm takes into account the position of each coarse positioning crosswalk detection frame in the video frame and/or its proportion relative to the video frame;
the step of screening the main pedestrian crossing line from all the coarse positioning crosswalk detection frames by adopting the preset condition scoring algorithm comprises the following steps:
scoring each coarse positioning pedestrian crossing detection frame by adopting a preset condition scoring algorithm to obtain a condition total score of each coarse positioning pedestrian crossing detection frame;
acquiring a coarse positioning pedestrian crossing detection frame with the highest total score of conditions, and taking the coarse positioning pedestrian crossing detection frame with the highest total score of conditions as a main pedestrian crossing line detection frame;
the formula used by the conditional scoring algorithm is as follows:
Figure FDA0002984193480000021
wherein the content of the first and second substances,
Figure FDA0002984193480000022
the condition total score of the ith rough positioning crosswalk detection frame is represented,
Figure FDA0002984193480000031
respectively representing the distances of the upper left corner of the ith rough positioning pedestrian crossing detection box in the horizontal direction and the vertical direction relative to the upper left corner of the current video frame,
Figure FDA0002984193480000032
respectively showing the width and the height, width, of the ith coarse positioning pedestrian crossing detection frameIMG、heightIMGRespectively representing the width and height of a video frame,
Figure FDA0002984193480000033
represents the confidence of the ith coarse positioning crosswalk detection frame, and I () is an indication function, (R1)t/h,R2t/h) Indicates a first predetermined interval, (R1)w/h,R2w/h) Represents a second preset section, s1 represents the first preset fixed value, and s2 represents the second preset fixed value;
after the step of obtaining the coarse positioning pedestrian crossing detection frame with the highest total score of the conditions, the method further comprises the following steps:
judging whether the width ratio of the coarse positioning pedestrian crossing detection frame with the highest total score of the conditions to the video frame is smaller than a preset width ratio threshold value or not;
if the width ratio is smaller than the width ratio threshold, expanding the coarse positioning pedestrian crossing detection frame with the highest total condition score to both sides at equal intervals so that its width ratio relative to the video frame reaches the width ratio threshold, and taking the expanded coarse positioning pedestrian crossing detection frame with the highest total condition score as the main pedestrian crossing line detection frame;
and if not, taking the coarse positioning pedestrian crossing detection frame with the highest total condition score as the main pedestrian crossing line detection frame.
4. The method of claim 3, wherein the target identification network is an SSD MobileNet-v2 network, the first interval threshold is 10, the first predetermined interval is (0.4, 0.6), the first predetermined fixed value is 0.4, the second predetermined interval is (0.07, 0.14), the second predetermined fixed value is 0.14, and the width ratio threshold is 0.85.
5. The method of claim 1, wherein the target element set comprises a category and a detection box of each target element, the target tracking algorithm is a maximum cross-over ratio screening method, and the step of tracking the motion trajectory of the target elements in the target element set by using a preset target tracking algorithm comprises:
for each target element in the target element set, obtaining a first sequence frame list set according to all current tracking path sequence frame lists, wherein the first sequence frame list set comprises all tracking path sequence frame lists which are the same as the types of the target elements, each tracking path sequence frame list comprises the types of the tracked target elements and detection frames of the target elements which are arranged in time sequence, and each tracking path sequence frame list corresponds to a motion track of one target element;
calculating the intersection ratio of the detection frame of the target element and the last detection frame of each tracking path sequence frame list in the first sequence frame list set;
judging whether an intersection ratio target frame exists or not, wherein the intersection ratio target frame is the last detection frame with an intersection ratio larger than a preset intersection ratio threshold value, and the intersection ratio threshold value is 0.75;
if the intersection ratio target frame exists, determining the motion track of the target element, and adding the detection frame of the target element into the tracking path sequence frame list in which the last detection frame with the largest intersection ratio is located;
if the intersection ratio target frame does not exist, determining that the target element is a newly appearing element, and creating a tracking path sequence frame list according to the detection frame and the category of the target element;
before the step of obtaining the first sequence box list set according to all current tracking path sequence box lists, the method further includes:
acquiring a second frame interval of the current video frame and the video frame corresponding to the last detection frame in each current tracking path sequence frame list;
and discarding the tracking path sequence frame list with the second frame interval not less than a preset second interval threshold, wherein the second interval threshold is 5.
6. The method of claim 5,
after the step of judging whether the last detection frame with the intersection ratio larger than the preset intersection ratio threshold exists, the method further comprises the following steps:
if the intersection ratio target frame exists, performing a logarithm test on the maximum intersection ratio and the second largest intersection ratio;
if the logarithm test is passed, determining the motion track of the target element;
if the logarithm check is not passed, tracking the motion track of the target element by adopting a preset cosine distance comparison method;
the logarithm test is used according to the formula:
(log2IoUmax-log2IoUsecond_max)<ε
wherein, IoUmaxDenotes the maximum crossing ratio, IoUsecond_maxRepresenting a second largest cross-over ratio, wherein epsilon is a constant and is represented as a preset logarithmic check threshold value;
the step of tracking the motion trail of the target element by adopting a preset cosine distance comparison method comprises the following steps:
acquiring a latest feature map of the target element in the current video frame, and a first feature map and a second feature map which respectively correspond to the last detection frame and the second last detection frame of each tracking path sequence frame list in a second sequence frame list set, wherein the second sequence frame list set comprises all the tracking path sequence frame lists in which an intersection ratio target frame is located;
normalizing the latest feature map and all the first feature maps and the second feature maps to obtain feature maps with uniform sizes;
respectively calculating a first cosine distance and a second cosine distance between the latest feature map after normalization processing and each first feature map and each second feature map;
taking the logarithm of each first cosine distance and each second cosine distance respectively to obtain a corresponding first characteristic similarity factor and a corresponding second characteristic similarity factor;
performing linear weighting calculation on each first characteristic similarity factor and a second characteristic similarity factor corresponding to each first characteristic similarity factor to obtain a minimum weighted characteristic similarity factor;
and taking the motion track corresponding to the tracking path sequence frame list corresponding to the minimum weighting characteristic similarity factor as the motion track of the target element, and adding the detection frame of the target element to the tracking path sequence frame list corresponding to the minimum weighting characteristic similarity factor.
7. The method according to claim 1, wherein the preset violation behavior detection algorithm is a violation warning point algorithm, and the step of detecting whether a violation behavior that the motor vehicle does not give way to pedestrians exists in the current video frame using the preset violation behavior detection algorithm comprises:
predicting an illegal warning point in the current video frame according to the motion track of each pedestrian on the main crosswalk line;
judging whether the violation warning point is in the upper half area of the last detection frame of any identified motor vehicle;
if so, determining that the violation behavior is detected in the current video frame;
the calculation mode of the violation warning point is as follows:
Figure FDA0002984193480000061
Figure FDA0002984193480000062
wherein x iswp、ywpRespectively represent the coordinates of the violation warning points,
Figure FDA0002984193480000063
respectively representing the coordinates of the center point of the mth detection frame in the tracking path sequence frame list of the pedestrian, and c representing the number of the detection frames in the tracking path sequence frame list of the pedestrian;
the method comprises the following steps of predicting an illegal warning point in a current video frame according to the motion trail of each pedestrian on the main crosswalk line, wherein the steps comprise:
judging whether each pedestrian in the current video frame is on the main pedestrian crossing line;
the step of judging whether each pedestrian in the current video frame is on the main pedestrian crossing line comprises the following steps:
judging whether the central point of the lower edge of the pedestrian detection frame of the pedestrian is positioned in the pedestrian crosswalk detection frame;
if the pedestrian is in the pedestrian crossing detection frame, judging that the pedestrian is on the pedestrian crossing line;
after the step of detecting whether the violation behavior that the motor vehicle does not give way to pedestrians exists in the current video frame by using the preset violation behavior detection algorithm, the method further comprises the following steps:
if the illegal behavior is detected, a preset license plate recognition algorithm is used for recognizing the license plate of the motor vehicle with the illegal behavior;
and visually displaying the license plate, the violation and/or the time when the violation occurs.
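The warning point prediction of claim 7 can be sketched as a mean-displacement extrapolation over the pedestrian's tracked center points; since the formula itself is rendered as an image in the original, this particular extrapolation is an assumption:

```python
def predict_warning_point(centers):
    """Predict the pedestrian's next position (the violation warning
    point) by adding the mean per-frame displacement of the track to
    the last center point. centers is the time-ordered list of the
    c detection-frame center points (x_m, y_m) of the pedestrian's
    tracking path sequence frame list."""
    c = len(centers)
    if c < 2:
        return centers[-1]  # too short to extrapolate
    (x1, y1), (xc, yc) = centers[0], centers[-1]
    # mean displacement over c - 1 steps, applied once past the last point
    return (xc + (xc - x1) / (c - 1), yc + (yc - y1) / (c - 1))
```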
8. A device for detecting whether a motor vehicle gives way to pedestrians, the device comprising:
the target element identification unit is used for carrying out target element identification on the collected video frames by using a trained target identification network to obtain a target element set;
a motion trail tracking unit, configured to track a motion trail of a target element in the target element set by using a preset target tracking algorithm; and
and the illegal behavior recognition unit is used for detecting whether an illegal behavior that the motor vehicle does not give way to pedestrians exists in the current video frame by using a preset illegal behavior detection algorithm based on the recognized main pedestrian crossing line and the tracked motion track of the target element.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202110295491.7A 2021-03-19 2021-03-19 Method and device for detecting whether motor vehicle gives way to pedestrians, electronic equipment and medium Expired - Fee Related CN113011331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110295491.7A CN113011331B (en) 2021-03-19 2021-03-19 Method and device for detecting whether motor vehicle gives way to pedestrians, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110295491.7A CN113011331B (en) 2021-03-19 2021-03-19 Method and device for detecting whether motor vehicle gives way to pedestrians, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN113011331A true CN113011331A (en) 2021-06-22
CN113011331B CN113011331B (en) 2021-11-09

Family

ID=76403086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110295491.7A Expired - Fee Related CN113011331B (en) 2021-03-19 2021-03-19 Method and device for detecting whether motor vehicle gives way to pedestrians, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN113011331B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI787990B (en) * 2021-09-07 2022-12-21 中華電信股份有限公司 System and method of monitoring vehicle not yielded to pedestrian
CN115546192A (en) * 2022-11-03 2022-12-30 中国平安财产保险股份有限公司 Livestock quantity identification method, device, equipment and storage medium
CN117037045A (en) * 2023-10-08 2023-11-10 成都考拉悠然科技有限公司 Anomaly detection system based on fusion clustering and deep learning
CN117392621A (en) * 2023-11-07 2024-01-12 西南交通大学 Method and system for identifying behavior of motor vehicle in case of turning right without giving away pedestrians

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030126622A1 (en) * 2001-12-27 2003-07-03 Koninklijke Philips Electronics N.V. Method for efficiently storing the trajectory of tracked objects in video
CN104361747A (en) * 2014-11-11 2015-02-18 杭州新迪数字工程系统有限公司 Automatic capture system and recognition method for vehicles not giving way to passengers on zebra crossing
US20160148058A1 (en) * 2014-05-15 2016-05-26 Xerox Corporation Traffic violation detection
CN107730906A (en) * 2017-07-11 2018-02-23 银江股份有限公司 Zebra stripes vehicle does not give precedence to the vision detection system of pedestrian behavior
US9946960B1 (en) * 2017-10-13 2018-04-17 StradVision, Inc. Method for acquiring bounding box corresponding to an object in an image by using convolutional neural network including tracking network and computing device using the same
CN109493609A (en) * 2018-12-11 2019-03-19 杭州炬视科技有限公司 A kind of portable device and method for not giving precedence to the candid photograph of pedestrian's automatic identification
CN110232370A (en) * 2019-06-21 2019-09-13 华北电力大学(保定) A kind of transmission line of electricity Aerial Images fitting detection method for improving SSD model
CN110378259A (en) * 2019-07-05 2019-10-25 桂林电子科技大学 A kind of multiple target Activity recognition method and system towards monitor video
CN110689724A (en) * 2018-12-31 2020-01-14 上海眼控科技股份有限公司 Motor vehicle zebra crossing courtesy pedestrian automatic auditing method based on deep learning
CN111460926A (en) * 2020-03-16 2020-07-28 华中科技大学 Video pedestrian detection method fusing multi-target tracking clues
CN111524161A (en) * 2019-02-01 2020-08-11 杭州海康威视数字技术股份有限公司 Method and device for extracting track
US20200327679A1 (en) * 2019-04-12 2020-10-15 Beijing Moviebook Science and Technology Co., Ltd. Visual target tracking method and apparatus based on deeply and densely connected neural network
CN111986228A (en) * 2020-09-02 2020-11-24 华侨大学 Pedestrian tracking method, device and medium based on LSTM model escalator scene

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030126622A1 (en) * 2001-12-27 2003-07-03 Koninklijke Philips Electronics N.V. Method for efficiently storing the trajectory of tracked objects in video
US20160148058A1 (en) * 2014-05-15 2016-05-26 Xerox Corporation Traffic violation detection
CN104361747A (en) * 2014-11-11 2015-02-18 杭州新迪数字工程系统有限公司 Automatic capture system and recognition method for vehicles not giving way to passengers on zebra crossing
CN107730906A (en) * 2017-07-11 2018-02-23 银江股份有限公司 Zebra stripes vehicle does not give precedence to the vision detection system of pedestrian behavior
US9946960B1 (en) * 2017-10-13 2018-04-17 StradVision, Inc. Method for acquiring bounding box corresponding to an object in an image by using convolutional neural network including tracking network and computing device using the same
CN109493609A (en) * 2018-12-11 2019-03-19 杭州炬视科技有限公司 A portable device and method for automatically identifying and capturing vehicles that fail to yield to pedestrians
CN110689724A (en) * 2018-12-31 2020-01-14 上海眼控科技股份有限公司 Deep-learning-based automatic auditing method for motor vehicles yielding to pedestrians at zebra crossings
CN111524161A (en) * 2019-02-01 2020-08-11 杭州海康威视数字技术股份有限公司 Method and device for extracting trajectories
US20200327679A1 (en) * 2019-04-12 2020-10-15 Beijing Moviebook Science and Technology Co., Ltd. Visual target tracking method and apparatus based on deeply and densely connected neural network
CN110232370A (en) * 2019-06-21 2019-09-13 华北电力大学(保定) A fitting detection method for power transmission line aerial images based on an improved SSD model
CN110378259A (en) * 2019-07-05 2019-10-25 桂林电子科技大学 A multi-target behavior recognition method and system for surveillance video
CN111460926A (en) * 2020-03-16 2020-07-28 华中科技大学 Video pedestrian detection method fusing multi-target tracking cues
CN111986228A (en) * 2020-09-02 2020-11-24 华侨大学 Pedestrian tracking method, device and medium for escalator scenes based on an LSTM model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WENHAN LUO et al.: "Multiple Object Tracking: A Literature Review", arXiv *
ZHENG Kai: "Design and Implementation of the Intersection Detection Subsystem of the Wuhan Intelligent Transportation System", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI787990B (en) * 2021-09-07 2022-12-21 中華電信股份有限公司 System and method for monitoring vehicles not yielding to pedestrians
CN115546192A (en) * 2022-11-03 2022-12-30 中国平安财产保险股份有限公司 Livestock quantity identification method, device, equipment and storage medium
CN115546192B (en) * 2022-11-03 2023-03-21 中国平安财产保险股份有限公司 Livestock quantity identification method, device, equipment and storage medium
CN117037045A (en) * 2023-10-08 2023-11-10 成都考拉悠然科技有限公司 Anomaly detection system based on fusion clustering and deep learning
CN117037045B (en) * 2023-10-08 2024-04-26 成都考拉悠然科技有限公司 Anomaly detection system based on fusion clustering and deep learning
CN117392621A (en) * 2023-11-07 2024-01-12 西南交通大学 Method and system for identifying right-turning motor vehicles that fail to yield to pedestrians
CN117392621B (en) * 2023-11-07 2024-06-07 西南交通大学 Method and system for identifying right-turning motor vehicles that fail to yield to pedestrians

Also Published As

Publication number Publication date
CN113011331B (en) 2021-11-09

Similar Documents

Publication Publication Date Title
CN113011331B (en) Method and device for detecting whether motor vehicle gives way to pedestrians, electronic equipment and medium
CN103824452B (en) A lightweight illegal parking detector based on panoramic vision
CN109284674B (en) Method and device for determining lane line
US20200364467A1 (en) Method and device for detecting illegal parking, and electronic device
CN109670376B (en) Lane line identification method and system
CN100452110C (en) Video-based vehicle speed measurement method
CN109977782B (en) Cross-store operation behavior detection method based on target position information reasoning
US20180033148A1 (en) Method, apparatus and device for detecting lane boundary
CN102903239B (en) Method and system for detecting illegal left-and-right steering of vehicle at traffic intersection
CN112016605B (en) Target detection method based on corner alignment and boundary matching of bounding box
CN111554105B (en) Intelligent traffic identification and statistics method for complex traffic intersection
CN106446150A (en) Method and device for precise vehicle retrieval
CN112651293B (en) Video detection method for road illegal spreading event
CN111898491B (en) Identification method and device for reverse driving of vehicle and electronic equipment
CN107705577B (en) Real-time detection method and system for calibrating illegal lane change of vehicle based on lane line
CN107644206A (en) A road abnormal behavior detection device
CN111126323A (en) Bayonet element recognition and analysis method and system serving for traffic violation detection
JP6678552B2 (en) Vehicle type identification device and vehicle type identification method
CN111524350B (en) Method, system, terminal device and medium for detecting abnormal driving condition of vehicle and road cooperation
CN112528924A (en) Vehicle turning detection method, device, equipment and storage medium
Suttiponpisarn et al. Detection of wrong-direction vehicles in two-way traffic
CN112447060A (en) Method and device for recognizing lane and computing equipment
KR101347886B1 (en) Method and Apparatus for Road Lane Recognition by Surface Region and Geometry Information
Coronado et al. Detection and classification of road signs for automatic inventory systems using computer vision
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211109