CN115019262B - Method for automatically capturing red light running of electric bicycle - Google Patents

Method for automatically capturing red light running of electric bicycle

Info

Publication number
CN115019262B
CN115019262B (application CN202210341635.2A)
Authority
CN
China
Prior art keywords
red light
electric
snapshot
license plate
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210341635.2A
Other languages
Chinese (zh)
Other versions
CN115019262A (en)
Inventor
聂飞 (Nie Fei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ronghe Yongdao Technology Co ltd
Original Assignee
Shenzhen Ronghe Yongdao Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ronghe Yongdao Technology Co ltd
Priority to CN202210341635.2A
Publication of CN115019262A
Application granted
Publication of CN115019019262B
Legal status: Active (current)
Anticipated expiration: legal status listed above


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V 20/625 License plates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G 1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of artificial-intelligence machine-vision analysis and recognition, and discloses a method for automatically capturing red light running by electric two-wheeled vehicles. Samples of electric two-wheeled vehicles, persons, license plates, and signal-light states are collected, and YOLOv4 target detection model libraries are trained with the Darknet deep learning framework. The resulting model libraries are thinned by pruning and converted for FP16 secondary inference to obtain the final red-light-running target detection model library. After a snapshot event is generated, the license plate is recognized, and the red-light-running snapshot event record and the video evidence are stored. The method adopts AI machine-vision "deep learning" technology to train a video-based algorithm library for detecting red, yellow, and green light states, so the light-state signals can be obtained without connecting to the traffic-light state machine.

Description

Method for automatically capturing red light running of electric bicycle
Technical Field
The invention relates to the technical field of artificial intelligence machine vision analysis and identification, in particular to a method for automatically capturing red light running of an electric bicycle.
Background
Currently, in the field of road traffic safety management, red-light-running snapshot systems for motor vehicles are widely deployed, generally in one of three modes: 1) reading the traffic-light status signal through a physical connection + radar vehicle detection + license plate recognition; 2) reading the traffic-light signal through a physical connection + inductive-loop vehicle detection + license plate recognition; 3) reading the red-light signal over a network connection + video virtual-line vehicle detection + license plate recognition.
In recent years, electric two-wheeled vehicles have become highly convenient and cost-effective, so they are used in large numbers. Because they travel fast, offer poor protection, and lack effective technical means of supervision, red-light running by electric two-wheeled vehicles very easily causes traffic accidents. Under the electric two-wheeled vehicle license plate policy, such red-light-running behavior can be supervised by snapshot capture.
However, when the traditional red-light-running snapshot modes are applied to the supervision of electric two-wheeled vehicles, all three modes show defects and shortcomings. The first mode suffers from expensive radar equipment, interfacing problems when reading the traffic-light state machine, complex construction, and poor compatibility, since radar detection of electric two-wheeled vehicles needs adjustment and correction. The 2nd and 3rd modes mainly suffer from complex interfacing and construction; moreover, because electric two-wheeled vehicles come in many types, models, and diverse forms, traditional dynamic target tracking algorithms are not suitable for detecting them. In addition, the electric two-wheeled vehicle license plate is a small target, less than 50% of the size of a motor vehicle plate, so an 8-megapixel camera is needed to capture a clear plate; and since monitoring the whole red-light-running process requires a large capture range, the traditional automatic license plate recognition pipeline of five modules (preprocessing, edge extraction, plate localization, character segmentation, and character recognition) becomes slow and less accurate on a large panoramic image with a small plate target. The present method for automatically capturing red light running of electric two-wheeled vehicles is therefore provided.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method for automatically capturing red light running of an electric two-wheeled vehicle. Using AI technology, it recognizes from video the red-light-running behavior of electric two-wheeled vehicles at intersections during the red-light state, continuously captures 3 pictures, records video from the beginning to the end of the violation, and recognizes the vehicle's license plate number, forming an effective technical means of supervising red-light running by electric two-wheeled vehicles and solving the problems described in the background art.
The invention provides the following technical scheme: a method for automatically capturing red light running of an electric two-wheeled vehicle comprises the following steps:
Step one: select a network camera of 8-megapixel resolution or above that supports the ONVIF protocol;
Step two: select a device with an Nvidia Jetson NX GPU providing AI computing power of 22 TOPS (INT8) or above;
Step three: use the Darknet deep learning framework to train a YOLOv4 intersection target detection model library, generating the target detection model library;
Step four: use the Darknet deep learning framework to train a Tiny YOLOv4 traffic-signal subdivision target detection model library, generating the signal-light state target detection model library;
Step five: use the Darknet deep learning framework to train a Tiny YOLOv4 license plate character classification model library, generating the license plate character target detection model library;
Step six: prune the original models generated in steps three, four, and five;
Step seven: connect to the network camera, pull the stream over the RTSP protocol, decode it, and convert the H.264/H.265 code stream into RGB24 images;
Step eight: set a virtual signal-light detection area in the video picture of each camera at crossroads and T-junctions; through a rule-setting function, draw a local rectangular area containing the signal light at the position of the signal light in the panoramic intersection image; the system cuts out this rectangle and calls the signal-light state target detection model library to detect the red, yellow, green, and unknown states;
Step nine: set a snapshot detection area and draw 3 snapshot lines within the image detection area; the program creates 3 snapshot-line management objects, and while a red-light state is detected it stores the cropped target image of any electric two-wheeled vehicle crossing a line in the corresponding snapshot-line management object; the snapshot judgment logic is:
1. when all 3 snapshot lines are crossed in the red-light state, a snapshot event is triggered;
2. crossing the 1st line in the red-light state triggers the snapshot event;
Step ten: recognize the license plate after the snapshot event is generated;
Step eleven: record the red-light-running snapshot event, store the panoramic pictures, license plate string, time, place, and other information from the 3 snapshot-line management objects in a local database, and upload them to the management platform;
Step twelve: take video evidence of the red-light-running behavior of the electric two-wheeled vehicle.
Preferably, in step three the targets are classified as: 0. electric two-wheeled vehicle (motorcycle); 1. bicycle; 2. car; 3. large truck; 4. bus; 5. other special vehicle; 6. traffic signal light. The neural network size is set to 512 x 512 pixels, the training mode is the IoU mode, and more than 100000 labeled samples of collected panoramic intersection pictures are used for training.
Preferably, the targets in step four are classified as: 0. no state; 1. green light; 2. yellow light; 3. red light. The detection range is set to the local signal-light area, the neural network size is set to 320 x 320, and more than 6000 labeled samples of collected local intersection traffic-signal pictures are used for training.
Preferably, in step five the targets are classified as: digits 0-9 (classes 0-9); white-background plate (10); yellow-background plate (11); blue-background plate (12); green-background plate (13); English letters (14 A, 15 B, 16 C, 17 D, 18 E, 19 F, 20 G, 21 H, 22 I, 23 J, 24 K, 25 L, 26 M, 27 N, 28 O, 29 P, 30 Q, 31 R, 32 S, 33 T, 34 U, 35 V, 36 W, 37 X, 38 Y, 39 Z). The neural network size is set to 320 x 320 pixels, the training mode is the GIoU mode, and more than 20000 labeled samples of collected local pictures of electric two-wheeled vehicle license plates at intersections are used for training.
Preferably, in step six the pruning method for the original models is as follows: find the mask of every convolution layer using a global threshold; then, for each shortcut group, merge the pruning masks of all connected convolution layers and prune with the merged mask; after pruning, fine-tune the model with Darknet, and then apply TensorRT FP16-mode secondary inference acceleration.
Preferably, in step eight the system is set to detect once every second, and the detection result is recorded in the device memory.
Preferably, the license plate recognition method in step ten is as follows: extract the largest, best-quality local license plate image from the 3 snapshot-line management objects, call the license plate character classification model library to detect the characters in the plate, sort the characters from left to right by X coordinate, and convert them into the license plate number string.
Preferably, in step twelve, when a red-light state is detected and the electric two-wheeled vehicle crosses the first snapshot line, video recording starts; after a 15-second delay, a video record is generated and associated with the red-light-running snapshot event record as video evidence.
Compared with the prior art, the invention has the following beneficial effects:
1. The method for automatically capturing red light running of an electric two-wheeled vehicle adopts AI machine-vision "deep learning" technology and trains a video-based algorithm library for detecting red, yellow, and green light states, so the light-state signals can be obtained without connecting to the traffic-light state machine; an electric two-wheeled vehicle detection algorithm library is trained with the same technology and can distinguish electric two-wheeled vehicles (including electric motorcycles), bicycles, and motor vehicles.
2. The method trains an electric two-wheeled vehicle license plate recognition algorithm library with AI machine-vision "deep learning" technology, improving the recognition rate; 3 snapshot lines are drawn in the video, the red-light-running behavior of the electric two-wheeled vehicle in the red-light state is recorded, 3 panoramic pictures are captured, the license plate target picture is extracted, and video evidence is recorded.
3. The method uses AI technology to recognize red-light running by electric two-wheeled vehicles at intersections in the red-light state; it can continuously capture 3 pictures, record video from the beginning to the end of the violation, and recognize the vehicle's license plate number, forming an effective technical means of supervising red-light running by electric two-wheeled vehicles.
Drawings
FIG. 1 is a schematic diagram of the development flow of the red-light-running target detection model library for electric two-wheeled vehicles;
FIG. 2 is a schematic illustration of an intersection system deployment;
Fig. 3 is a snapshot flow chart of red light running behavior of the electric bicycle.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 and 2, a method for automatically capturing red light running of an electric two-wheeled vehicle includes the following steps:
Step one: select a network camera of 8-megapixel resolution or above supporting the ONVIF protocol, with the RTSP real-time streaming protocol and image quality sufficient for a human eye to read license plate numbers, and collect samples of electric two-wheeled vehicles, persons, license plates, and signal-light states;
Step two: select a device with an Nvidia Jetson NX GPU providing AI computing power of 22 TOPS (INT8) or above, and draw training labels according to the yolo standard;
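The yolo-standard labeling mentioned in step two can be sketched as follows. The helper below is illustrative (not part of the patent): it converts a pixel bounding box into a Darknet/YOLO annotation line, i.e. a class index followed by the box center and size normalized to [0, 1].

```python
def yolo_label_line(class_id, box, img_w, img_h):
    """Convert a pixel bounding box (left, top, right, bottom) into a
    Darknet/YOLO annotation line: 'class cx cy w h', with the center
    and size normalized by the image width and height."""
    left, top, right, bottom = box
    cx = (left + right) / 2.0 / img_w
    cy = (top + bottom) / 2.0 / img_h
    w = (right - left) / img_w
    h = (bottom - top) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# An electric two-wheeled vehicle (class 0) filling the left half of a 1920x1080 frame:
line = yolo_label_line(0, (0, 0, 960, 1080), 1920, 1080)
```

One such line is written per labeled object into a `.txt` file alongside each training image.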
Step three: the Darknet deep learning framework (the open-source darknet training framework) is used to train the YOLOv4 intersection target detection model library; after training, the generated algorithm library can detect and track the object classes in video. The targets are classified as: 0. electric two-wheeled vehicle (motorcycle); 1. bicycle; 2. car; 3. large truck; 4. bus; 5. other special vehicle; 6. traffic signal light. Balancing detection speed against detection quality, the neural network size is set to 512 x 512 pixels and the training mode to the IoU mode; more than 100000 labeled samples of collected panoramic intersection pictures are used for training, generating the target detection model library;
Step four: the Darknet deep learning framework is used to train the Tiny YOLOv4 traffic-signal subdivision target detection model library; after classification training, the system can automatically recognize the green, yellow, and red light types. The targets are classified as: 0. no state; 1. green light; 2. yellow light; 3. red light (the light types include round lights, arrow lights, and so on). The detection range is limited to the local signal-light area, so the input image is small and a simplified model with a neural network size of 320 x 320 is adopted, improving detection speed while preserving detection quality; more than 6000 labeled samples of collected local intersection traffic-signal pictures are used for training, generating the signal-light state target detection model library;
Step five: the Darknet deep learning framework is used to train the Tiny YOLOv4 license plate character classification model library. The targets are classified as: digits 0-9 (classes 0-9); white-background plate (10); yellow-background plate (11); blue-background plate (12); green-background plate (13); English letters (14 A, 15 B, 16 C, 17 D, 18 E, 19 F, 20 G, 21 H, 22 I, 23 J, 24 K, 25 L, 26 M, 27 N, 28 O, 29 P, 30 Q, 31 R, 32 S, 33 T, 34 U, 35 V, 36 W, 37 X, 38 Y, 39 Z). The neural network size is set to 320 x 320 pixels, the training mode to the GIoU mode, and more than 20000 labeled samples of collected local pictures of electric two-wheeled vehicle license plates at intersections are used for training, generating the license plate character target detection model library;
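As an illustration of the 40-class table in step five, a lookup from class index to symbol might be built as below; the list ordering follows the patent's class numbering, while the plate-colour names are this sketch's own labels.

```python
# Class table from step five: digits 0-9 (classes 0-9), four plate
# background colours (classes 10-13), then letters A-Z (classes 14-39).
PLATE_CLASSES = (
    [str(d) for d in range(10)]
    + ["white_plate", "yellow_plate", "blue_plate", "green_plate"]
    + [chr(ord("A") + i) for i in range(26)]
)

def class_to_symbol(class_id):
    """Map a detector class index to its character or plate-colour label."""
    return PLATE_CLASSES[class_id]
```

The colour classes let the same detector report the plate background type along with the characters.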
Step six: prune the original models generated in steps three, four, and five, i.e. accelerate the three obtained models; otherwise the slower detection speed hurts cost-effectiveness;
The pruning method for the original models is as follows: first find the mask of every convolution layer using a global threshold; then, for each shortcut group, merge the pruning masks of all connected convolution layers and prune with the merged mask, so that every related layer is taken into account while the retained channels of each layer are constrained; after pruning, fine-tune the model with Darknet to restore its accuracy. In experiments this yields a model library weight file only about 20% of the size of the original model; with TensorRT FP16-mode secondary inference acceleration, the detection speed improves by about 3 times while the detection accuracy stays on par with the original model, and GPU memory usage drops by 50%, giving the final red-light-running target detection model library;
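A minimal sketch of the mask-and-merge step described above, assuming (as is common in YOLO channel pruning, though the patent does not say so explicitly) that the batch-norm scale factors drive the global threshold. The toy data and function names are illustrative.

```python
def channel_masks(gamma_per_layer, keep_ratio=0.2):
    """Compute a keep/prune mask per convolution layer from a single
    global threshold over all batch-norm scale factors |gamma|:
    channels whose |gamma| falls below the threshold are pruned."""
    all_gammas = sorted(abs(g) for layer in gamma_per_layer for g in layer)
    # Global threshold chosen so roughly keep_ratio of channels survive.
    cut = all_gammas[int(len(all_gammas) * (1 - keep_ratio)) - 1]
    return [[abs(g) > cut for g in layer] for layer in gamma_per_layer]

def merge_shortcut_masks(masks):
    """Layers joined by a shortcut must keep the same channels, so their
    masks are merged with a logical OR before pruning."""
    return [any(col) for col in zip(*masks)]

# Two toy layers, keep the strongest 50% of channels overall:
masks = channel_masks([[0.9, 0.01], [0.5, 0.02]], keep_ratio=0.5)
merged = merge_shortcut_masks([[True, False], [False, True]])
```

After pruning with the merged masks, the slimmed network is fine-tuned and then handed to TensorRT for FP16 inference.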
Step seven: connect to the network camera, pull the stream over the RTSP protocol, decode it, and convert the H.264/H.265 code stream into RGB24 images;
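Step seven could be approximated with the ffmpeg command line rather than the ffmpeg library API the system presumably uses; the URL and frame size below are placeholders.

```python
def rtsp_decode_command(rtsp_url, width, height):
    """Build an ffmpeg command that pulls an RTSP stream (H.264/H.265),
    decodes it, and writes raw RGB24 frames to stdout; each frame is
    width * height * 3 bytes."""
    return [
        "ffmpeg",
        "-rtsp_transport", "tcp",  # pull over TCP for reliability
        "-i", rtsp_url,
        "-f", "rawvideo",
        "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}",
        "pipe:1",
    ]

# Placeholder camera address; an 8-megapixel camera streams roughly 3840x2160:
cmd = rtsp_decode_command("rtsp://192.168.1.64/stream1", 3840, 2160)
```

The command list can be handed to `subprocess.Popen` and the RGB frames read off stdout in fixed-size chunks.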
Step eight: set a virtual signal-light detection area in the video picture of each camera at crossroads and T-junctions; through a rule-setting function, draw a local rectangular area containing the signal light at the position of the signal light in the panoramic intersection image; the system cuts out this rectangle and calls the signal-light state target detection model library to detect the red, yellow, green, and unknown states. The system is set to detect once every second and records the result in the device memory; with local-area detection, the simplified yolo model, and TensorRT inference acceleration, one detection completes within 12 milliseconds;
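The once-per-second local-area detection of step eight can be sketched as below, with the model call stubbed out; the class and parameter names are this sketch's own, and the frame is a plain nested list standing in for an RGB image.

```python
class LightStateDetector:
    """Crop the drawn signal-light rectangle out of the panoramic frame
    and run the light-state model at most once per interval, caching
    the latest result in memory (as step eight records it)."""

    def __init__(self, roi, detect_fn, interval=1.0):
        self.roi = roi              # (left, top, right, bottom) in the panorama
        self.detect_fn = detect_fn  # model call: crop -> "red"/"yellow"/"green"/"unknown"
        self.interval = interval
        self.last_time = None
        self.last_state = "unknown"

    def poll(self, frame, now):
        if self.last_time is None or now - self.last_time >= self.interval:
            left, top, right, bottom = self.roi
            crop = [row[left:right] for row in frame[top:bottom]]
            self.last_state = self.detect_fn(crop)
            self.last_time = now
        return self.last_state

# Stub model and a 10x10 dummy frame to exercise the throttling:
calls = []
def fake_model(crop):
    calls.append(crop)
    return "red"

frame = [[0] * 10 for _ in range(10)]
det = LightStateDetector((2, 2, 5, 5), fake_model)
s1 = det.poll(frame, 0.0)   # runs the model
s2 = det.poll(frame, 0.4)   # cached: under 1 s elapsed
s3 = det.poll(frame, 1.2)   # runs the model again
```

In the real system `detect_fn` would be the TensorRT call into the signal-light model library.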
Step nine: set a snapshot detection area and draw 3 snapshot lines within the image detection area; the program creates 3 snapshot-line management objects, and while a red-light state is detected it stores the cropped target image of any electric two-wheeled vehicle crossing a line in the corresponding snapshot-line management object; the snapshot judgment logic is:
1. when all 3 snapshot lines are crossed in the red-light state, a snapshot event is triggered;
2. crossing the 1st line in the red-light state triggers the snapshot event;
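The snapshot-line bookkeeping of step nine might look like the following sketch. The reading of rules 1 and 2 encoded here (crossing the 1st line opens the event, crossing all 3 lines completes it) is one plausible interpretation of the text; names are illustrative.

```python
class SnapshotLineManager:
    """Three snapshot lines, each storing the cropped target image of an
    electric two-wheeled vehicle that crosses it during a red light."""

    def __init__(self):
        self.line_images = [None, None, None]
        self.event_open = False      # rule 2: 1st line crossed on red
        self.event_complete = False  # rule 1: all 3 lines crossed on red

    def on_line_crossed(self, line_index, light_state, target_image):
        if light_state != "red":
            return  # crossings outside the red phase are not violations
        self.line_images[line_index] = target_image
        if line_index == 0:
            self.event_open = True
        if all(img is not None for img in self.line_images):
            self.event_complete = True

mgr = SnapshotLineManager()
mgr.on_line_crossed(0, "green", "ignored")  # not red: nothing stored
mgr.on_line_crossed(0, "red", "img0")       # opens the event
mgr.on_line_crossed(1, "red", "img1")
mgr.on_line_crossed(2, "red", "img2")       # completes the event
```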
Step ten: recognize the license plate after the snapshot event is generated: extract the largest, best-quality local license plate image from the 3 snapshot-line management objects, call the license plate character classification model library to detect the characters in the plate, sort the characters from left to right by X coordinate, and convert them into the license plate number string;
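The character ordering in step ten reduces to a sort by X coordinate; a minimal illustrative helper:

```python
def plate_string(detections):
    """Take the character detections from the plate crop, each given as
    (x_coordinate, character), sort them left to right by X, and join
    them into the plate-number string."""
    return "".join(ch for x, ch in sorted(detections, key=lambda d: d[0]))

# Detections arrive in arbitrary order; sorting by X restores the plate:
plate = plate_string([(42, "3"), (10, "A"), (25, "7")])
```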
Step eleven: record the red-light-running snapshot event, store the panoramic pictures, license plate string, time, place, and other information from the 3 snapshot-line management objects in a local database, and upload them to the management platform;
Step twelve: when a red-light state is detected and the electric two-wheeled vehicle crosses the first snapshot line, video recording starts; recording ends after 15 seconds, and a video record is generated and associated with the red-light-running snapshot event record as video evidence.
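Step twelve's 15-second evidence recording can be sketched as a small state holder; the names and the event-record shape are assumptions of this sketch, not the patent's.

```python
class ViolationRecorder:
    """Start recording when the vehicle crosses the first snapshot line
    on red, stop 15 seconds later, and attach the resulting clip to the
    red-light-running snapshot event record."""
    DURATION = 15.0

    def __init__(self):
        self.start_time = None
        self.clip = None

    def on_first_line_crossed(self, light_state, now):
        if light_state == "red" and self.start_time is None:
            self.start_time = now

    def tick(self, now, event_record):
        if self.start_time is not None and now - self.start_time >= self.DURATION:
            self.clip = (self.start_time, now)
            event_record["video"] = self.clip  # associate clip with the event
            self.start_time = None

rec = ViolationRecorder()
event = {}
rec.on_first_line_crossed("red", 100.0)  # vehicle crosses line 1 on red
rec.tick(110.0, event)                   # 10 s in: still recording
rec.tick(115.0, event)                   # 15 s reached: clip attached
```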
Referring to fig. 3, the snapshot flow for red-light running by an electric two-wheeled vehicle is as follows: the FFMPEG component pulls the stream and decodes the video into RGB images; the video is then checked for a red-light state. If the state is red, the system detects electric two-wheeled vehicles; if a vehicle is detected, the system then checks whether it crosses a virtual line. If it does, the vehicle is judged to be running the red light, pictures and a video clip are captured, and a red-light-running snapshot event is generated. If the light state is judged to be green or yellow, the flow ends; if no electric two-wheeled vehicle is running the red light, the flow ends; and if a detected vehicle has not crossed a virtual line, it is treated as stopped at the line waiting out the red light, and the check is repeated.
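The fig. 3 decision flow can be written out as a single function; the branch labels below are paraphrases of the flowchart text, not the patent's exact wording.

```python
def snapshot_decision(light_state, bike_detected, crossed_virtual_line):
    """One pass of the fig. 3 flow: check the light state on the decoded
    RGB frames, detect electric two-wheeled vehicles during red, and
    judge a detected vehicle that crosses a virtual line as a violation."""
    if light_state != "red":
        return "end: green/yellow state"
    if not bike_detected:
        return "end: no red-light running"
    if not crossed_virtual_line:
        return "keep watching: vehicle waiting at the line"
    return "violation: capture pictures, record clip, create snapshot event"

r1 = snapshot_decision("green", True, True)
r2 = snapshot_decision("red", False, False)
r3 = snapshot_decision("red", True, False)
r4 = snapshot_decision("red", True, True)
```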
Development and training of the target classification model library for detecting red-light running by electric two-wheeled vehicles proceeds as follows: (1) prepare a deep-learning classification training server, configured as (CPU: Intel Xeon E5-2690 v3; memory: 64GB; GPU: RTX 3060 x 3; hard disk: 1TB SSD); (2) install CUDA 11.1.1 and compile the downloaded Darknet framework source code into executable software; (3) develop labeling software to improve efficiency; (4) use the labeling software to import the red-light-running target classification image samples collected at traffic-light sites (see step three), mark the targets in the sample images according to the designed class frames, and run the Darknet training tool to obtain the original target detection model library; the final simplified model library is obtained by following the development flow chart of the red-light-running behavior detection library.
The functional aims are realized in software as follows:
(1) Snapshot event records are stored in JSON-format files;
(2) Configuration parameter information is stored in XML-format files;
(3) The ffmpeg open-source component implements the whole process of connecting to the network camera over RTSP, pulling the stream, and decoding it;
(4) The TensorRT component calls the algorithm model library to detect target objects;
(5) The TensorRT component calls the algorithm model library to detect the traffic-light state target object;
(6) The TensorRT component calls the algorithm model library to detect the character targets in the electric two-wheeled vehicle license plate and joins the characters in order to form the plate number;
(7) 3 virtual lines are set in the image; when a red-light state is detected, any electric two-wheeled vehicle crossing the lines is captured, its license plate number is recognized, and a snapshot event is generated.
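Implementation point (1), storing snapshot event records as JSON, might look like this; all field names are assumptions, since the patent specifies only the file format.

```python
import json

def snapshot_event_json(plate, when, place, picture_paths):
    """Serialize one red-light-running snapshot event record to JSON.
    The field names are illustrative, not taken from the patent."""
    record = {
        "type": "red_light_running",
        "plate": plate,
        "time": when,
        "place": place,
        "pictures": picture_paths,  # the 3 panoramic snapshot pictures
    }
    return json.dumps(record, ensure_ascii=False)

text = snapshot_event_json("粤B12345", "2022-03-31 08:15:00",
                           "intersection 1", ["p1.jpg", "p2.jpg", "p3.jpg"])
parsed = json.loads(text)
```

`ensure_ascii=False` keeps Chinese plate characters readable in the stored file.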
The electrical components involved in the present application are all commercially available prior art. Although embodiments of the present application have been shown and described, it will be appreciated by those skilled in the art that various changes, modifications, substitutions and alterations may be made to these embodiments without departing from the principles and spirit of the application, the scope of which is defined in the appended claims and their equivalents.

Claims (4)

1. The method for automatically capturing the red light running of the electric bicycle is characterized by comprising the following steps of:
step one: selecting 800W and above pixels to support Onvif protocol network cameras;
step two: selecting equipment with the power calculated by Nvidia JetsonNX GPU above AI 22T;
Step three: selecting Darknet deep learning frames for training the intersection target detection model library of Yolov to generate a target detection model library;
Step four: a Darknet deep learning frame is selected and used for training a Tiny Yolov traffic signal subdivision target detection model base to generate a signal lamp state target detection model base;
Step five: selecting Darknet deep learning frames for training Tiny Yolov license plate recognition character classification model libraries to generate license plate character target detection algorithm model libraries;
Step six: pruning the original model generated in the third step, the fourth step and the fifth step;
Step seven: setting a connection network camera, adopting rtsp protocol to pull a stream from the network camera, decoding, and converting h264/h265 code stream into RGB24 images;
Step eight: setting virtual signal lamp detection areas in video pictures of cameras of crossroads and T-junctions, defining a rule setting function, drawing a local rectangular area containing signal lamps at the positions of the signal lamps of the panoramic image of the crossroads, cutting the rectangular image by a system, and calling a signal lamp state target detection model library to detect red, yellow, green and unknown states;
step nine: setting a snapshot detection area and defining 3 snapshot lines within the detection area; the program creates 3 snapshot line management objects; when the program detects a red light state, the cropped target image of an electric two-wheeled vehicle crossing a line is stored in the corresponding snapshot line management object; when the electric two-wheeled vehicle has crossed all 3 snapshot lines, the snapshot judgment logic is as follows:
1) a snapshot event is triggered when all 3 snapshot lines have been crossed in the red light state;
2) a snapshot event is triggered when the 1st line is crossed in the red light state;
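The snapshot line management objects and the judgment logic of step nine can be sketched as follows. This is an illustrative Python model, not the patented implementation; the class and function names, and the use of a per-vehicle tracking id, are assumptions:

```python
class SnapshotLineManager:
    """One management object per snapshot line: stores the cropped target
    image of each electric two-wheeled vehicle that crossed this line
    during a red light state."""
    def __init__(self):
        self.crops = {}  # tracking id -> cropped target image

    def record_crossing(self, track_id, crop):
        self.crops[track_id] = crop


def should_trigger(lines, track_id, red_light):
    """Snapshot judgment from the claim: under a red light, trigger when the
    vehicle has crossed all 3 lines (rule 1) or the 1st line (rule 2)."""
    if not red_light:
        return False
    crossed_all = all(track_id in line.crops for line in lines)
    crossed_first = track_id in lines[0].crops
    return crossed_all or crossed_first
```

Note that rule 2 subsumes rule 1 in this reading (crossing all 3 lines implies crossing the 1st); both are kept explicit to mirror the claim text.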
Step ten: recognizing the license plate after the snapshot event is generated;
step eleven: recording the red-light-running snapshot event, storing the panoramic picture, license plate character string, time and place information from the 3 snapshot line management objects in a local database, and uploading them to a management platform;
step twelve: collecting video evidence by recording the red-light-running behavior of the electric bicycle;
the targets in step three are classified as follows: 0. electric bicycle; 1. bicycle; 2. car; 3. large truck; 4. bus; 5. other special vehicle; 6. traffic signal lamp; the neural network size is set to 512×512 pixels, the training mode is set to IoU mode, and more than 100,000 annotated samples of intersection panoramic pictures are collected for training;
The targets in step four are classified as follows: 0. no state; 1. green light; 2. yellow light; 3. red light; the detection range is set as the signal lamp local area, the neural network size is set to 320×320, and more than 6,000 annotated samples of local pictures of intersection traffic signal lamps are collected for training;
In step five, the targets are classified as follows: 0. digit 0; 1. digit 1; 2. digit 2; 3. digit 3; 4. digit 4; 5. digit 5; 6. digit 6; 7. digit 7; 8. digit 8; 9. digit 9; 10. white-background plate; 11. yellow-background plate; 12. blue-background plate; 13. green-background plate; English letters: 14. A; 15. B; 16. C; 17. D; 18. E; 19. F; 20. G; 21. H; 22. I; 23. J; 24. K; 25. L; 26. M; 27. N; 28. O; 29. P; 30. Q; 31. R; 32. S; 33. T; 34. U; 35. V; 36. W; 37. X; 38. Y; 39. Z; the neural network size is set to 320×320 pixels, the training mode is set to GIoU mode, and more than 20,000 annotated samples of local license plate pictures of electric two-wheeled vehicles at intersections are collected for training;
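The IoU and GIoU training modes named in the paragraphs above are the standard bounding-box overlap measures used as regression losses in Darknet-style training. A self-contained sketch of both, using (x1, y1, x2, y2) boxes:

```python
def iou(a, b):
    """Intersection over Union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0


def giou(a, b):
    """Generalized IoU: IoU minus the fraction of the smallest enclosing
    box C not covered by the union, so disjoint boxes get a negative score."""
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c = (cx2 - cx1) * (cy2 - cy1)  # enclosing box area
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union - (c - union) / c
```

GIoU keeps a useful gradient even when boxes do not overlap, which is why it is often preferred for small targets such as license plate characters.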
In step six, the pruning method for the original model is as follows: first, the masks of all convolution layers are found using a global threshold; then, for each group of shortcut connections, the pruning masks of all the connected convolution layers are merged, and pruning is performed using the merged mask; after pruning, fine-tuning training is performed with Darknet to restore accuracy; TensorRT F mode is then adopted for secondary inference acceleration;
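The global-threshold pruning with shortcut mask merging described above can be sketched in Python. This is a simplified model, assuming channel importance is measured by batch-norm scale factors (a common convention in channel pruning, not stated by the claim); the function name and `keep_ratio` parameter are illustrative:

```python
def prune_masks(gammas, keep_ratio, shortcut_groups=()):
    """Compute channel-pruning masks from per-layer |gamma| scale factors.

    A single global threshold keeps the top `keep_ratio` fraction of
    channels across all layers; layers joined by a shortcut then share an
    OR-merged mask so their channel counts remain compatible.
    """
    # Global threshold over every channel of every convolution layer.
    flat = sorted((abs(g) for layer in gammas for g in layer), reverse=True)
    k = max(1, int(len(flat) * keep_ratio))
    thresh = flat[k - 1]
    masks = [[abs(g) >= thresh for g in layer] for layer in gammas]
    # Merge the masks of layers connected by each shortcut.
    for group in shortcut_groups:
        merged = [any(masks[i][c] for i in group)
                  for c in range(len(masks[group[0]]))]
        for i in group:
            masks[i] = merged
    return masks
```

After applying such masks, the slimmed network would be fine-tuned to recover accuracy, then exported for accelerated inference, matching the order of operations in the claim.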
The snapshot flow is as follows: the FFMPEG component pulls the stream and decodes the video into RGB images; the system then judges whether a red light state is detected; if the judgment result is a green or yellow state, the flow ends; if a red light state is detected, electric two-wheeled vehicles are detected during the red light state; if no electric two-wheeled vehicle is detected, the flow ends; if one is detected, the system further judges whether it crosses the virtual line; if yes, the electric two-wheeled vehicle is judged to have run the red light, a picture and a video clip are captured, and a red-light-running snapshot event is generated; if it does not cross the virtual line, the vehicle has stopped at the line, and the detection loop repeats for it and the other vehicles waiting at the red light.
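The decision branches of this snapshot flow reduce to a small per-frame function. The sketch below is an illustrative condensation of the flow (names and return values are assumptions, not part of the claim):

```python
def snapshot_step(light_state, ebike_detected, crossed_line):
    """One pass of the snapshot flow for a decoded frame.

    Returns "capture" when a red-light-running snapshot event should be
    generated, "end" when the flow terminates (light not red, or no e-bike
    in view), and "continue" when the vehicle is stopped at the line and
    monitoring should keep looping.
    """
    if light_state != "red":   # green or yellow state: flow ends
        return "end"
    if not ebike_detected:     # red light but no e-bike detected: flow ends
        return "end"
    if crossed_line:           # red light + e-bike + line crossed: violation
        return "capture"
    return "continue"          # e-bike stopped at the line: keep detecting
```

Calling this once per decoded RGB frame reproduces the branch structure of the flow described above.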
2. The method for automatically capturing an electric two-wheeled vehicle running a red light according to claim 1, characterized in that: in step eight, the system is set to run detection every 1 second, and the detection result is recorded in the device memory.
3. The method for automatically capturing an electric two-wheeled vehicle running a red light according to claim 1, characterized in that: the license plate recognition method in step ten is as follows: the largest license plate partial image with the best image quality is extracted from the 3 snapshot line management objects, the license plate character classification model library is called to detect the characters in the license plate, the characters are sorted from left to right by X coordinate, and the result is converted into a license plate number character string.
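The character-ordering step of claim 3 is a simple sort of per-character detections by horizontal position. A minimal sketch (the detection tuple layout is an assumption):

```python
def plate_string(detections):
    """Assemble the plate number from per-character detections.

    Each detection is (x, char), where x is the left edge of the character's
    bounding box; sorting by x recovers the left-to-right reading order.
    """
    return "".join(ch for x, ch in sorted(detections, key=lambda d: d[0]))
```

For example, detections arriving in arbitrary order are reassembled purely by their X coordinates.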
4. The method for automatically capturing an electric two-wheeled vehicle running a red light according to claim 1, characterized in that: in step twelve, when the red light state is read and the electric bicycle crosses the first snapshot line, video recording is started; the recording ends after 15 seconds, and a video record is generated and associated with the red-light-running snapshot event record as video evidence.
CN202210341635.2A 2022-04-02 2022-04-02 Method for automatically capturing red light running of electric bicycle Active CN115019262B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210341635.2A CN115019262B (en) 2022-04-02 2022-04-02 Method for automatically capturing red light running of electric bicycle

Publications (2)

Publication Number Publication Date
CN115019262A CN115019262A (en) 2022-09-06
CN115019262B true CN115019262B (en) 2024-05-24

Family

ID=83066734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210341635.2A Active CN115019262B (en) 2022-04-02 2022-04-02 Method for automatically capturing red light running of electric bicycle

Country Status (1)

Country Link
CN (1) CN115019262B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427615A (en) * 2015-12-03 2016-03-23 杭州中威电子股份有限公司 Robust red-light-running snapshotting system and method under low illumination
CN105788286A (en) * 2016-05-19 2016-07-20 湖南博广信息科技有限公司 Intelligent red light running identifying system and vehicle behavior detecting and capturing method
KR101671428B1 (en) * 2016-02-25 2016-11-03 유한회사 비츠 Intelligent Monitoring System For Violation Vehicles in crossroads
CN110136449A (en) * 2019-06-17 2019-08-16 珠海华园信息技术有限公司 Traffic video frequency vehicle based on deep learning disobeys the method for stopping automatic identification candid photograph
CN110288838A (en) * 2019-07-19 2019-09-27 网链科技集团有限公司 Electric bicycle makes a dash across the red light identifying system and method
CN113128355A (en) * 2021-03-29 2021-07-16 南京航空航天大学 Unmanned aerial vehicle image real-time target detection method based on channel pruning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Real-time Monitoring Methods for Vehicles at Traffic Intersections and Their Application; Xiao Xiyu; Master's Electronic Journal; 2012-02-29; pp. 1-74 *

Also Published As

Publication number Publication date
CN115019262A (en) 2022-09-06

Similar Documents

Publication Publication Date Title
CN103069434B (en) For the method and system of multi-mode video case index
CN102867418B (en) Method and device for judging license plate identification accuracy
CN110769195B (en) Intelligent monitoring and recognizing system for violation of regulations on power transmission line construction site
KR102035592B1 (en) A supporting system and method that assist partial inspections of suspicious objects in cctv video streams by using multi-level object recognition technology to reduce workload of human-eye based inspectors
CN106340179A (en) Pedestrian crossing signal lamp system with red light running evidence obtaining function and method
TW202013252A (en) License plate recognition system and license plate recognition method
CN108509912A (en) Multipath network video stream licence plate recognition method and system
CN112509325B (en) Video deep learning-based off-site illegal automatic discrimination method
CN110580808B (en) Information processing method and device, electronic equipment and intelligent traffic system
CN111627215A (en) Video image identification method based on artificial intelligence and related equipment
CN113221727B (en) Engineering vehicle cleaning state judging method and system
Hakim et al. Implementation of an image processing based smart parking system using Haar-Cascade method
CN114898297A (en) Non-motor vehicle illegal behavior determination method based on target detection and target tracking
CN110674887A (en) End-to-end road congestion detection algorithm based on video classification
US8229170B2 (en) Method and system for detecting a signal structure from a moving video platform
CN117876966A (en) Intelligent traffic security monitoring system and method based on AI analysis
Kejriwal et al. Vehicle detection and counting using deep learning based YOLO and deep SORT algorithm for urban traffic management system
CN204884166U (en) Regional violating regulations parking monitoring devices is stopped to traffic taboo
CN115019262B (en) Method for automatically capturing red light running of electric bicycle
CN113486885A (en) License plate recognition method and device, electronic equipment and storage medium
CN115512315B (en) Non-motor vehicle child riding detection method, electronic equipment and storage medium
CN215186950U (en) Pedestrian red light running behavior evidence acquisition device based on face recognition technology
CN111914704B (en) Tricycle manned identification method and device, electronic equipment and storage medium
CN114494938A (en) Non-motor vehicle behavior identification method and related device
CN113793069A (en) Urban waterlogging intelligent identification method of deep residual error network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant