CN111582253B - Event trigger-based license plate tracking and identifying method - Google Patents


Info

Publication number
CN111582253B
Authority
CN
China
Prior art keywords
license plate
tracking
frame
vehicle
trigger
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010563999.6A
Other languages
Chinese (zh)
Other versions
CN111582253A (en)
Inventor
杨磊 (Yang Lei)
邱国庆 (Qiu Guoqing)
魏敦楷 (Wei Dunkai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Keygo Technologies Co ltd
Original Assignee
Shanghai Keygo Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Keygo Technologies Co ltd filed Critical Shanghai Keygo Technologies Co ltd
Priority to CN202010563999.6A priority Critical patent/CN111582253B/en
Publication of CN111582253A publication Critical patent/CN111582253A/en
Application granted granted Critical
Publication of CN111582253B publication Critical patent/CN111582253B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 20/40: Scenes; scene-specific elements in video content
    • G06V 20/63: Scene text, e.g. street names
    • G06V 20/625: License plates
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08: Neural network learning methods
    • G08G 1/0175: Detecting movement of traffic; identifying vehicles by photographing, e.g. when violating traffic rules

Abstract

A license plate tracking and identification method based on event triggering: vehicle detection and license plate extraction are performed on the trigger frame at the trigger moment to obtain vehicle coordinates or a license plate region; the target vehicle is determined; video stream tracking is then started conditionally, depending on whether the license plate quality evaluation of the target vehicle exceeds a specified threshold; license plate region detection is performed on the target vehicle region in the tracked video stream; and the final license plate is obtained through identification voting. The invention uses the trigger frame as the reference in the video stream to decode, track, and recognize the vehicle license plate bidirectionally and simultaneously, which effectively reduces the length of the intercepted video stream and improves tracking and recognition efficiency; by combining confidence and a voting mechanism over the tracked license plates, the method avoids the situation where a single-frame license plate is of poor quality in some scenes.

Description

License plate tracking and identifying method based on event triggering
Technical Field
The invention relates to a technology in the field of intelligent traffic control, in particular to a license plate tracking and identifying method based on event triggering.
Background
License plate recognition scenes are mainly divided into static scenes and dynamic scenes. Static scenes occur mainly at parking lot entrances, community property registration, and the like: the license plate is clear and of fixed size, and recognition conditions are good. Dynamic scenes, common in the traffic field, involve snapshots triggered by various events, such as red-light-running snapshots, whistle snapshots, and speed-measurement snapshots; because of the influence of blurring, occlusion, shadow, overexposure, and the like during the snapshot, the quality of a license plate captured in a single trigger frame is low, which seriously affects the license plate capture rate and recognition accuracy.
Existing license plate recognition methods based on video streaming suffer from heavy computation, unstable results, or high resource consumption and low accuracy, and have difficulty meeting traffic management requirements in high-speed, large-scale scenes.
Disclosure of Invention
The invention provides a license plate tracking and identification method based on event triggering, aimed at the low capture rate and low recognition rate caused by blurring, occlusion, shadow, and overexposure in existing trigger-event-based license plate recognition methods. Taking the trigger frame as the reference in the video stream and the triggered target vehicle as the tracking target, bidirectional frame-by-frame decoding, vehicle tracking, and license plate detection and recognition are performed simultaneously, effectively reducing the length of the intercepted video stream and improving tracking and recognition efficiency; over the resulting series of candidate license plates, a confidence-weighted voting mechanism avoids the situation where a single-frame license plate is of poor quality in certain scenes.
The invention is realized by the following technical scheme:
the invention relates to a license plate tracking and identification method based on event triggering: vehicle detection and license plate extraction are performed on the trigger frame at the trigger moment to obtain vehicle coordinates or a license plate region; the target vehicle is determined; video stream tracking is then started conditionally, depending on whether the license plate quality evaluation of the target vehicle exceeds a specified threshold; license plate region detection is performed on the target vehicle region in the tracked video stream; and the final recognized license plate is obtained through identification voting.
The triggering events include but are not limited to: a vehicle whistling above the decibel limit while driving, a vehicle speeding through a specified speed-limited road section, or a vehicle running a red light at a traffic light intersection.
The vehicle detection and license plate extraction means that: and performing vehicle region detection and license plate region detection on the image through a convolutional neural network, wherein the detection result is represented as vehicle coordinates or coordinates of a point at the upper left corner of the license plate region on the image and the length and the width of the region.
The target vehicle determination means: when the event is triggered, in a whistle snapshot the vehicle coordinates closest to the whistle sound source coordinates located by the microphone array are selected; for red-light running, the triggering vehicle coordinates are determined by a pressure sensor while the light is red; for interval speeding, the speeding vehicle coordinates are determined by radar speed measurement.
The license plate quality evaluation comprises: vehicle trigger position evaluation and license plate quality evaluation, wherein: the quality of the vehicle's trigger position is determined by the distance from the trigger location to the camera and by the coordinate positions of the trigger-frame vehicle relative to the other vehicles, and the quality of the license plate is determined by its aspect ratio, size, sharpness, and illumination, specifically:
Q_place = p_dist·q_dist + p_iou·q_iou,  Q_plate = p_ratio·q_ratio + p_size·q_size + p_grad·q_grad + p_gray·q_gray
wherein: Q_place and Q_plate are respectively the trigger position evaluation and the license plate quality evaluation of the target vehicle; q_dist is the trigger-point-to-camera distance evaluation; q_iou is the intersection-over-union evaluation of the target vehicle against the remaining vehicles; q_ratio is the license plate aspect-ratio evaluation; q_size is the license plate size evaluation; q_grad is the license plate sharpness evaluation; q_gray is the license plate illuminance evaluation; and p_dist, p_iou, p_ratio, p_size, p_grad, p_gray are the corresponding evaluation weights obtained from practical experimental experience.
The conditional start means: when the license plate quality evaluation is higher than the specified threshold, the recognition results of vehicle detection and license plate extraction are output directly, with the license plate number and its confidence recognized by a recurrent neural network; otherwise, video stream tracking is started.
The video stream tracking refers to: and taking the trigger frame as a reference, taking the target vehicle as a tracking target, and simultaneously taking a plurality of frames before and after the trigger frame in the video stream to perform frame-by-frame decoding tracking until the tracking target disappears to obtain a target vehicle area in the video stream.
The frame-by-frame decoding tracking refers to: performing feature extraction on the target vehicle of the trigger frame, setting a search area and searching for a best matching position by taking the target vehicle area of the previous frame as a central point in the next tracking frame, and when the maximum matching degree is greater than a specified threshold value, successfully tracking and continuing the tracking of the next frame, wherein: the next frame refers to the next frame in the forward direction or the next frame in the backward direction of the trigger frame, and the previous frame refers to the previous frame in the forward direction or the previous frame in the backward direction of the trigger frame.
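A minimal sketch of the search-and-match step just described; the frame is a toy grayscale image represented as nested lists, and the normalised sum-of-absolute-differences similarity is a stand-in for the patent's feature matching (the function name, search radius, and threshold are invented):

```python
# Toy version of the per-frame search: a window is swept around the previous
# frame's target box and the best-matching offset is kept only if it clears
# a threshold. The SAD-based similarity here is a stand-in, not the patent's
# actual feature extractor.

def best_match(prev_feat, frame, box, search=2, min_score=0.5):
    """Search around the previous box (x, y, w, h) for the best match to
    prev_feat (the target's pixels from the previous frame)."""
    x, y, w, h = box
    rows, cols = len(frame), len(frame[0])
    best_box, best_score = None, -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            nx, ny = x + dx, y + dy
            if nx < 0 or ny < 0 or nx + w > cols or ny + h > rows:
                continue  # candidate window falls outside the frame
            diff = sum(abs(prev_feat[r][c] - frame[ny + r][nx + c])
                       for r in range(h) for c in range(w))
            score = 1.0 - diff / (w * h * 255.0)  # 1.0 = identical patch
            if score > best_score:
                best_box, best_score = (nx, ny, w, h), score
    # tracking succeeds only if the best match clears the threshold
    return (best_box, best_score) if best_score >= min_score else (None, best_score)
```

With a bright 2x2 patch that shifts one pixel to the right between frames, the sketch returns the shifted box with a perfect score, mirroring "tracking succeeds and continues with the next frame".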
The license plate area judgment means that: and detecting a license plate region of a target vehicle region in the video stream through a convolutional neural network to obtain coordinates of a point at the upper left corner of the license plate position and the width and the height of the license plate region.
The identification voting means: the series of license plate regions obtained by license plate region determination in the video stream is recognized by a recurrent neural network, yielding a corresponding series of license plate numbers and their confidences, and the final license plate is obtained through multi-frame license plate voting weighted by confidence, specifically: given the recognition result of each frame's license plate number C^j = (c_1^j, …, c_M^j), j = 1…N, the probability of character c at position i over all tracking frames is counted as
P_i(c) = (1/N) Σ_{j=1}^{N} 1[c_i^j = c]
wherein: N is the number of tracked frames and c_i^j is the i-th character of the license plate number in frame j; the vote of each candidate character in each frame is then weighted by accumulating the confidence s_i^j of the i-th character of the j-th frame's license plate number, giving the total voting score
V_i(c) = Σ_{j=1}^{N} s_i^j · 1[c_i^j = c],  ĉ_i = argmax_c V_i(c), i = 1…M
wherein: M is the length of the license plate number.
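The confidence-weighted character voting described above can be sketched as follows; the plate strings and confidence values are invented purely for illustration:

```python
# Per character position, each tracked frame votes for its recognised
# character with the recogniser's confidence as the vote weight; the
# highest-scoring character wins that position.
from collections import defaultdict

def vote_plate(results):
    """results: one (plate_string, per-character confidences) pair per
    tracked frame; all plates are assumed to have the same length M."""
    m = len(results[0][0])
    final = []
    for i in range(m):
        scores = defaultdict(float)
        for plate, confs in results:
            scores[plate[i]] += confs[i]  # accumulate confidence votes
        final.append(max(scores, key=scores.get))
    return "".join(final)

frames = [
    ("AB123", [0.9, 0.8, 0.9, 0.7, 0.9]),
    ("AB128", [0.8, 0.9, 0.8, 0.9, 0.3]),  # last character misread, low confidence
    ("AB123", [0.7, 0.9, 0.9, 0.8, 0.8]),
]
final_plate = vote_plate(frames)  # "AB123": the low-confidence '8' is outvoted
```

This is how a single poor-quality frame (blur, occlusion) loses the vote to the majority of higher-confidence frames.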
The invention also relates to a system implementing the method, comprising: a trigger judgment unit, a quality evaluation unit, a bidirectional tracking unit, and an optimal selection unit, wherein: the trigger judgment unit and the bidirectional tracking unit are each connected to the camera; the trigger judgment unit determines the target vehicle in the trigger frame through the trigger mechanism and outputs it to the quality evaluation unit; the quality evaluation unit extracts and evaluates the license plate of the target vehicle and outputs the detected license plate number when the quality evaluation reaches the standard; otherwise, the bidirectional tracking unit tracks the target vehicle through the video stream in both the forward and backward directions with the trigger frame as the reference, obtaining the target vehicle regions in the video stream and outputting them to the optimal selection unit; the optimal selection unit performs license plate region detection and license plate number recognition on each tracked vehicle region and selects the optimal recognized license plate through a license plate number voting mechanism weighted by confidence.
Technical effects
The invention solves the problems of low license plate capture rate and low license plate recognition accuracy caused by blurring, occlusion, shadow, overexposure, and the like in prior event-triggered license plate recognition; it also addresses the drawbacks of prior video-stream-based trigger tracking recognition, in which the starting position and length of the video stream cannot be determined, single-target tracking cannot be achieved, and tracking efficiency is low.
Compared with the prior art, the invention significantly improves the license plate capture rate and recognition rate under blurring, occlusion, shadow, overexposure, and similar conditions; the license plate recognition result integrates vehicle position, license plate quality, and voting-based evaluation of the recognition results, improving recognition accuracy; and tracking recognition uses the trigger frame as the initial frame and tracks only the single target vehicle determined at trigger time, improving tracking efficiency.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is a schematic diagram of an embodiment vehicle two-way tracking.
Detailed Description
As shown in fig. 1, fig. 2, and fig. 3, this embodiment takes a whistle deployment point as an example and selects a 2.680 s segment of H.264 video (67 frames in total) stored before and after the trigger; the license plate recognition method at the trigger moment of this embodiment specifically comprises the following steps:
1) Whistle triggering: the trigger frame f_t and the trigger-point sound source coordinates (x_tri, y_tri) are acquired; convolutional neural network vehicle detection is performed on the trigger frame to obtain vehicle regions in coordinate form (x_i, y_i, w_i, h_i); the distance between the center point of each vehicle region and the trigger-point sound source coordinates is computed, and the closest vehicle region is taken as the target vehicle region, i.e.
target = argmin_i ‖(x_i + w_i/2, y_i + h_i/2) − (x_tri, y_tri)‖
wherein: (x_tri, y_tri) are the trigger-point sound source coordinates and (x_i, y_i, w_i, h_i) are the vehicle coordinates of the trigger frame.
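Step 1's nearest-vehicle selection can be sketched as follows; the box and sound source coordinates are made-up example values:

```python
# Among the detected vehicle boxes (x, y, w, h), pick the one whose centre
# is closest to the located whistle sound source (x_tri, y_tri).
import math

def pick_target(boxes, src):
    def centre_dist(box):
        x, y, w, h = box
        return math.hypot(x + w / 2 - src[0], y + h / 2 - src[1])
    return min(boxes, key=centre_dist)

vehicles = [(100, 200, 80, 60), (400, 180, 90, 70), (700, 220, 85, 65)]
target = pick_target(vehicles, (430, 210))  # middle box: centre (445, 215)
```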
The convolutional neural network vehicle detection and license plate detection mean: performing vehicle region detection on the pictures with a deep convolutional neural network, including but not limited to YOLOv2, YOLOv3, and SSD.
The trigger-point sound source coordinates are acquired as follows: sound signals collected by the microphone array are monitored for whistles in real time, and when a whistle is judged to have occurred, the sound source is located in real time with a beamforming algorithm to obtain the sound source coordinates.
2) License plate detection is performed on the target vehicle region with a convolutional neural network to obtain the license plate region; position evaluation, aspect-ratio measurement, size measurement, gradient measurement, and illumination measurement are then combined into a comprehensive quality evaluation score.
The comprehensive quality evaluation score is:
Q = p_dist·q_dist + p_iou·q_iou + p_ratio·q_ratio + p_size·q_size + p_grad·q_grad + p_gray·q_gray
wherein: q_dist is the trigger-point-to-camera distance evaluation, q_iou is the intersection-over-union evaluation of the target vehicle against the remaining vehicles, q_ratio is the license plate aspect-ratio evaluation, q_size is the license plate size evaluation, q_grad is the license plate sharpness evaluation, q_gray is the license plate illuminance evaluation, and p_dist, p_iou, p_ratio, p_size, p_grad, p_gray are the corresponding evaluation weights obtained from practical experimental experience.
The position evaluation comprises the trigger-point-to-camera distance evaluation q_dist and the intersection-over-union measure q_iou, wherein: (x_tri, y_tri) and (x_cam, y_cam) are respectively the trigger point and camera coordinates, (w, h) is the width and height of the image, and q_iou compares the region of the trigger-frame target vehicle with the regions of the remaining vehicles.
The aspect-ratio measure q_ratio compares the detected license plate size with the network input size (w_net, h_net) of the license plate recognizer; the size measure q_size is determined by the detected license plate size (w_i, h_i); the gradient measure q_grad is based on grad, the mean of the squared gradients of the license plate region image in the x and y directions; the illumination measure q_gray is based on gray, the gray-level mean of the license plate region image.
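The source renders these measures only as images, so the closed forms below are assumptions: simple normalisations chosen so each term lies in [0, 1] and grows with quality, matching the verbal definitions above but not necessarily the patent's exact formulas:

```python
# Assumed (not patent-exact) quality terms. Each returns a value in [0, 1]
# where larger means better license plate quality.

def q_ratio(w_plate, h_plate, w_net, h_net):
    """Aspect-ratio agreement between the detected plate and the
    recogniser's network input size."""
    r, r_net = w_plate / h_plate, w_net / h_net
    return min(r, r_net) / max(r, r_net)

def q_size(w_plate, h_plate, w_net, h_net):
    """Detected plate area relative to the network input area, capped at 1."""
    return min(1.0, (w_plate * h_plate) / (w_net * h_net))

def q_gray(mean_gray):
    """Illumination score: peaks at mid-gray (128) and falls toward
    under- or over-exposure."""
    return 1.0 - abs(mean_gray - 128.0) / 128.0
```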
3) When the comprehensive license plate quality evaluation score of the trigger-frame target vehicle is greater than the specified threshold, the license plate image of the target vehicle from step 2 is recognized directly by the convolutional neural network and the resulting license plate is output; otherwise, the method proceeds to step 4 for the tracking recognition process.
The preferred range of the specified threshold is as follows: each of the evaluation terms q_dist, q_iou, q_ratio, q_size, q_grad, q_gray is closer to 1 when the license plate quality is better. The weight coefficients obtained experimentally are: p_dist and p_ratio are 0.2; p_iou, p_size, p_grad, and p_gray are 0.15. The preferred threshold range is 0.75 to 1 (the closer to 1, the better the license plate quality), and 0.75 is used as the threshold for judging whether the license plate quality reaches the standard.
4) Backward tracking and forward tracking are performed simultaneously with the trigger frame as the reference, obtaining the tracking results of frames t±1, t±2, ….
The forward tracking means: frames are decoded one by one forward in time; the target vehicle is tracked after frame t+1 is decoded; if the target is tracked, frame t+2 is decoded and tracked in turn; tracking stops when the tracked target disappears, giving the tracking results of frames t+1, t+2, ….
The backward tracking means: frames are decoded one by one backward in time; the target vehicle is tracked after frame t−1 is decoded; if the target is tracked, frame t−2 is decoded and tracked in turn; tracking stops when the tracked target disappears, giving the tracking results of frames t−1, t−2, ….
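The forward and backward passes of step 4 can be orchestrated as in this sketch; track_step is a stand-in predicate for the real per-frame tracker (a DCF/KCF/TLD match clearing its threshold), and the 20..50 visibility range is invented:

```python
# Bidirectional traversal around trigger frame t: the forward pass walks
# t+1, t+2, ... and the backward pass t-1, t-2, ..., each stopping when
# the target is lost.

def bidirectional_track(num_frames, t, track_step):
    tracked = [t]
    j = t + 1                               # forward pass
    while j < num_frames and track_step(j):
        tracked.append(j)
        j += 1
    j = t - 1                               # backward pass
    while j >= 0 and track_step(j):
        tracked.append(j)
        j -= 1
    return sorted(tracked)

visible = lambda f: 20 <= f <= 50           # frames where the target appears
result = bidirectional_track(67, 40, visible)
# result covers frames 20..50 only: 31 frames instead of all 67
```

This mirrors the embodiment's measurement that a 67-frame clip with the trigger at frame 41 needs only the 45 or so frames around the target rather than the whole clip.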
The decoding means: the high-definition camera monitors in real time and its data is an H.264 video stream; H.264 decoding is required to obtain RGB pictures from the video stream.
The target vehicle tracking means: with the trigger-frame target vehicle as the tracking target, the best matching position is searched for in the next tracking frame using algorithms including but not limited to DCF, KCF, and TLD; when the maximum matching degree is greater than the specified threshold, tracking succeeds and continues with the next frame; otherwise tracking stops.
The tracking result comprises the vehicle region of each tracked frame, in the form (x_i, y_i, w_i, h_i), representing respectively the coordinates of the upper-left corner point and the width and height.
5) License plate detection is performed frame by frame on the series of tracked vehicle regions with a convolutional neural network to obtain the license plate regions, and license plate number recognition is performed on each frame's license plate region with a recurrent neural network to obtain the license plate number and confidence.
The recurrent neural network may include, but is not limited to, a long short-term memory network (LSTM), a bidirectional long short-term memory network (Bi-LSTM), a deep recurrent neural network (DRNN), or a composite network formed by combining other neural networks.
6) The per-frame license plate recognition results (license plate number and confidence) in the video stream obtained in steps 4 and 5 are judged by the confidence-weighted voting mechanism, and the optimal license plate recognition result is selected as the final recognition result.
The confidence-weighted voting mechanism judgment refers to voting weighting of the per-frame license plate recognition results obtained in steps 4 and 5:
V_i(c) = Σ_{j=1}^{N} s_i^j · 1[c_i^j = c],  ĉ_i = argmax_c V_i(c)
wherein: M is the length of the license plate number, N is the number of tracked frames, i and j index the i-th character in the j-th frame's recognition result, c_i^j is the i-th character of the license plate number of the j-th frame, and s_i^j is the confidence of the i-th character of the license plate number of the j-th frame.
In specific practical experiments on an NVIDIA TX2 8G development board, a 2.68 s H.264 video of 67 frames was selected, with frame 41 as the whistle trigger frame; orthogonal tests were designed to compare the experimental effects of single- versus multi-target tracking, single- versus multi-frame detection, and unidirectional versus bidirectional tracking, wherein:
trigger frame bidirectional-multiframe detection-multi-target tracking: consuming 3.825s, tracking 45 frames, and tracking 9 targets;
initial frame one-way-multi-frame detection-multi-target tracking: consuming 5.385s, tracking 67 frames, and tracking 9 targets;
trigger frame bidirectional-single frame detection-multi-target tracking: consuming 2.648s, tracking 45 frames and tracking 9 targets;
trigger frame bidirectional-single frame detection-single target tracking: consuming 2.113s, tracking 45 frames, and tracking 1 target.
The experiments show that the scheme of trigger-frame bidirectional tracking, single-frame detection, and single-target tracking is the most advantageous: with the starting frame as the key frame, all 67 frames must be tracked, whereas with the trigger frame as the key frame, only the 45 video frames in which the target appears need to be tracked; moreover, with the starting frame as the key frame it is not known which targets need to be tracked, so all must be tracked, whereas with the trigger frame as the key frame, the target vehicle is already determined at trigger time and only the single target needs to be tracked.
Based on the method, the trigger tracking license plate recognition algorithm was deployed at a whistle test point in Huaibei, Anhui. Under test conditions from 15 May to 31 May 2020, 170 vehicles were tracked and recognized in total. Comparing the video stream tracking recognition results with the single-frame trigger recognition results, the test found 12 vehicles for which trigger recognition was wrong but tracking recognition was correct; of these, 9 were trigger recognition errors caused by occlusion and 3 were errors caused by overexposure.
In conclusion, the invention takes the trigger frame as the reference key frame and performs forward and backward tracking simultaneously, minimizing the number of tracked video frames and the length of the tracked video stream: only the video frames in which the target appears need to be tracked, consuming fewer resources. The trigger position quality judgment combines the spatial position of the vehicle and the license plate quality to comprehensively judge whether the license plate reaches the standard; if it does, the license plate can be output directly without tracking. During tracking, the trigger frame serves as the key frame and the target vehicle determined from it is tracked as a single target, giving high tracking efficiency with no concern about ID switching. Finally, the optimal result is selected from the series of tracked license plates through a license plate number voting mechanism weighted by confidence.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (9)

1. A license plate tracking and identification method based on event triggering, characterized in that vehicle detection and license plate extraction are performed on the trigger frame at the trigger moment to obtain vehicle coordinates or a license plate region; target vehicle determination is performed; video stream tracking is then started conditionally, depending on whether the license plate quality evaluation of the target vehicle exceeds a specified threshold; license plate region determination is performed on the target vehicle region in the tracked video stream; and the final recognized license plate is obtained through identification voting;
the license plate quality evaluation comprises: vehicle trigger position evaluation and license plate quality evaluation, wherein: the quality of the vehicle's trigger position is determined by the distance from the trigger location to the camera and by the coordinate positions of the trigger-frame vehicle relative to the other vehicles, and the quality of the license plate is determined by its aspect ratio, size, sharpness, and illumination, specifically:
Q_place = p_dist·q_dist + p_iou·q_iou,  Q_plate = p_ratio·q_ratio + p_size·q_size + p_grad·q_grad + p_gray·q_gray
wherein: Q_place and Q_plate are respectively the trigger position evaluation and the license plate quality evaluation of the target vehicle; q_dist is the trigger-point-to-camera distance evaluation; q_iou is the intersection-over-union evaluation of the target vehicle against the remaining vehicles; q_ratio is the license plate aspect-ratio evaluation; q_size is the license plate size evaluation; q_grad is the license plate sharpness evaluation; q_gray is the license plate illuminance evaluation; and p_dist, p_iou, p_ratio, p_size, p_grad, p_gray are the corresponding evaluation weights obtained from practical experimental experience.
2. The method for tracking and identifying the license plate based on the event trigger as claimed in claim 1, wherein the vehicle detection and the license plate extraction are as follows: and performing vehicle region detection and license plate region detection on the image through a convolutional neural network, wherein the detection result is expressed as vehicle coordinates or coordinates of the upper left corner point of the license plate region on the image and the length and the width of the region.
3. The event-triggered license plate tracking and identification method according to claim 1, characterized in that the target vehicle determination means: when the event is triggered, in a whistle snapshot the vehicle coordinates closest to the whistle sound source coordinates located by the microphone array are selected; for red-light running, the triggering vehicle coordinates are determined by a pressure sensor while the light is red; for interval speeding, the speeding vehicle coordinates are determined by radar speed measurement.
4. The event-triggered license plate tracking and identification method according to claim 1, characterized in that the conditional start means: when the license plate quality evaluation is higher than the specified threshold, the recognition results of vehicle detection and license plate extraction are output directly, with the license plate number and its confidence recognized by a recurrent neural network; otherwise, video stream tracking is started.
5. The event-triggered license plate tracking and identifying method according to claim 1, wherein video stream tracking means: taking the trigger frame as the reference and the target vehicle as the tracking target, taking several frames both before and after the trigger frame in the video stream and decoding and tracking them frame by frame until the tracking target disappears, thereby obtaining the target vehicle region throughout the video stream.
6. The event-triggered license plate tracking and identifying method according to claim 5, wherein frame-by-frame decoding and tracking means: extracting features of the target vehicle in the trigger frame; in the next tracking frame, setting a search area centered on the target vehicle region of the previous frame and searching for the best matching position; when the maximum matching degree is greater than a specified threshold, the tracking succeeds and continues with the next frame, wherein: the next frame refers to the forward or backward next frame relative to the trigger frame, and the previous frame refers to the forward or backward previous frame relative to the trigger frame.
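The matching step could be sketched as a brute-force search over the search area; the patent does not specify the feature or matching measure, so zero-mean normalised correlation and the 0.6 threshold below are assumptions for illustration:

```python
import numpy as np

def best_match(template, search_area, thresh=0.6):
    """Slide the template over the search area and return
    (score, (row, col)) of the best normalised-correlation match,
    or None when the maximum matching degree falls below the
    threshold, i.e. the tracking fails (illustrative sketch)."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best_score, best_pos = -1.0, None
    H, W = search_area.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            patch = search_area[r:r + th, c:c + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = float((t * p).mean())  # zero-mean normalised correlation
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return (best_score, best_pos) if best_score > thresh else None
```

In practice the search would be restricted to a window around the previous frame's vehicle region, as the claim describes, rather than the whole frame.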
7. The event-triggered license plate tracking and identifying method according to claim 1, wherein the judgment of the license plate region means: performing license plate region detection on the target vehicle region in the video stream through a convolutional neural network to obtain the coordinates of the upper-left corner point of the license plate position and the width and height of the license plate region.
8. The event-triggered license plate tracking and identifying method according to claim 1, wherein identification voting means: recognizing the series of license plate regions in the video stream obtained by the judgment of the license plate region with a recurrent neural network to obtain a corresponding series of license plate numbers and their confidences, and obtaining the finally identified license plate through multi-frame license plate recognition voting and confidence weighting, which specifically comprises: according to the recognition result of the license plate number in each frame

R^j = (c_1^j, c_2^j, ..., c_M^j),

counting, for each character position i, the probability of each candidate character x over all tracking frames

P_i(x) = (1/N) · Σ_{j=1}^{N} v_i^j(x),

wherein: N is the number of tracked frames, and the voting result of the j-th frame for character x at the i-th position of the license plate number is

v_i^j(x) = 1 if c_i^j = x, otherwise 0;

Conf_i^j, the confidence of the i-th character of the license plate number in the j-th frame, is then accumulated to obtain the total voting score of the license plate number

S = Σ_{i=1}^{M} Σ_{j=1}^{N} v_i^j(c_i) · Conf_i^j,

wherein M is the length of the license plate number.
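The multi-frame voting with confidence weighting can be sketched as below; the input format (a plate string plus per-character confidences for each tracked frame) and the function name are assumptions for illustration:

```python
from collections import defaultdict

def vote_plate(recognitions):
    """recognitions: list of (plate_string, per_char_confidences) over the
    N tracked frames; all plates are assumed to have the same length M.
    For each character position, each frame's vote is weighted by its
    confidence, and the highest-scoring character per position is kept."""
    M = len(recognitions[0][0])
    result = []
    for i in range(M):
        scores = defaultdict(float)
        for plate, confs in recognitions:
            scores[plate[i]] += confs[i]   # accumulate Conf_i^j per candidate
        result.append(max(scores, key=scores.get))
    return "".join(result)
```

For example, three frames reading "AB1", "AB7" and "A81" would be reconciled position by position, so a single misread character in one frame is outvoted by the others.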
9. A system for implementing the method of any preceding claim, comprising: a trigger judgment unit, a quality evaluation unit, a bidirectional tracking unit and an optimal selection unit, wherein: the trigger judgment unit and the bidirectional tracking unit are each connected with the camera; the trigger judgment unit judges the target vehicle in the trigger frame through the trigger mechanism and outputs it to the quality evaluation unit; the quality evaluation unit extracts and evaluates the license plate of the target vehicle and, when the quality evaluation reaches the standard, outputs the license plate number obtained by detection, otherwise control passes to the bidirectional tracking unit; the bidirectional tracking unit tracks the target vehicle in the video stream in both the forward and backward directions with the trigger frame as the reference, obtains the target vehicle region in the video stream and outputs it to the optimal selection unit; and the optimal selection unit performs license plate region detection and license plate number recognition on each tracked vehicle region and selects the optimal recognized license plate through a license plate number and confidence voting weighting mechanism.
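The control flow of the claimed system might be sketched as follows; the callables and the 0.8 threshold are hypothetical stand-ins for the claimed units, not an implementation from the patent:

```python
def process_trigger(trigger_vehicle, quality, recognize_one, track_and_vote,
                    q_thresh=0.8):
    """Illustrative control flow: output directly when the trigger-frame
    plate quality passes the threshold (condition start), otherwise fall
    back to bidirectional tracking plus multi-frame voting."""
    if quality >= q_thresh:
        return recognize_one(trigger_vehicle)   # quality evaluation unit path
    return track_and_vote(trigger_vehicle)      # bidirectional tracking path
```

The branch mirrors the claim: the expensive bidirectional tracking is started only when the single trigger frame does not yield a sufficiently good plate.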
CN202010563999.6A 2020-06-19 2020-06-19 Event trigger-based license plate tracking and identifying method Active CN111582253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010563999.6A CN111582253B (en) 2020-06-19 2020-06-19 Event trigger-based license plate tracking and identifying method

Publications (2)

Publication Number Publication Date
CN111582253A CN111582253A (en) 2020-08-25
CN111582253B true CN111582253B (en) 2022-09-06

Family

ID=72125730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010563999.6A Active CN111582253B (en) 2020-06-19 2020-06-19 Event trigger-based license plate tracking and identifying method

Country Status (1)

Country Link
CN (1) CN111582253B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112071083B (en) * 2020-09-15 2022-03-01 深圳市领航城市科技有限公司 Motor vehicle license plate relay identification system and license plate relay identification method
CN113030506B (en) * 2021-03-25 2022-07-12 上海其高电子科技有限公司 Micro-area speed measurement method and system based on multi-license plate calibration library
CN113591725B (en) * 2021-08-03 2023-08-22 世邦通信股份有限公司 Method, device, equipment and medium for extracting whistle vehicle
CN115098731B (en) * 2022-07-14 2022-11-22 浙江大华技术股份有限公司 Target association method, device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699905A (en) * 2013-12-27 2014-04-02 深圳市捷顺科技实业股份有限公司 Method and device for positioning license plate
CN103824066A (en) * 2014-03-18 2014-05-28 厦门翼歌软件科技有限公司 Video stream-based license plate recognition method
CN107705574A (en) * 2017-10-09 2018-02-16 荆门程远电子科技有限公司 A kind of precisely full-automatic capturing system of quick road violation parking
CN108846854A (en) * 2018-05-07 2018-11-20 中国科学院声学研究所 A kind of wireless vehicle tracking based on motion prediction and multiple features fusion
CN110136449A (en) * 2019-06-17 2019-08-16 珠海华园信息技术有限公司 Traffic video frequency vehicle based on deep learning disobeys the method for stopping automatic identification candid photograph
CN110178167A (en) * 2018-06-27 2019-08-27 潍坊学院 Crossing video frequency identifying method violating the regulations based on video camera collaboration relay

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778786B (en) * 2013-12-17 2016-04-27 东莞中国科学院云计算产业技术创新与育成中心 A kind of break in traffic rules and regulations detection method based on remarkable vehicle part model
US20180268238A1 (en) * 2017-03-14 2018-09-20 Mohammad Ayub Khan System and methods for enhancing license plate and vehicle recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic Detection of Parking Violation and Capture of License Plate; Zhemin Liu; 2019 IEEE 10th Annual Information Technology, Electronics and Mobile Communication Conference; 2019-12-19; entire document *
Research on a license plate recognition system based on video detection; Bai Xuesong; China Masters' Theses Full-text Database, Information Science and Technology; 2010-09-15; pages 2-6 and Fig. 1.3 *

Similar Documents

Publication Publication Date Title
CN111582253B (en) Event trigger-based license plate tracking and identifying method
CN111260693B (en) High-altitude parabolic detection method
CN108021848A (en) Passenger flow volume statistical method and device
CN111144247A (en) Escalator passenger reverse-running detection method based on deep learning
CN114299417A (en) Multi-target tracking method based on radar-vision fusion
CN111781600B (en) Vehicle queuing length detection method suitable for signalized intersection scene
CN112991391A (en) Vehicle detection and tracking method based on radar signal and vision fusion
CN112257569B (en) Target detection and identification method based on real-time video stream
CN1448886A (en) Apparatus and method for measuring vehicle queue length
WO2021139049A1 (en) Detection method, detection apparatus, monitoring device, and computer readable storage medium
CN103049909B (en) A kind of be focus with car plate exposure method
CN110674672B (en) Multi-scene people counting method based on tof camera
CN110633643A (en) Abnormal behavior detection method and system for smart community
CN110781785A (en) Traffic scene pedestrian detection method improved based on fast RCNN algorithm
CN114898326A (en) Method, system and equipment for detecting reverse running of one-way vehicle based on deep learning
CN109254271B (en) Static target suppression method for ground monitoring radar system
CN109100697B (en) Target condensation method based on ground monitoring radar system
CN116434159A (en) Traffic flow statistics method based on improved YOLO V7 and Deep-Sort
CN108983194B (en) Target extraction and condensation method based on ground monitoring radar system
CN114067282A (en) End-to-end vehicle pose detection method and device
WO2022048053A1 (en) Target tracking method, apparatus, and device, and computer-readable storage medium
CN117037085A (en) Vehicle identification and quantity statistics monitoring method based on improved YOLOv5
KR102120812B1 (en) Target recognition and classification system based on probability fusion of camera-radar and method thereof
CN116311166A (en) Traffic obstacle recognition method and device and electronic equipment
CN113253262B (en) One-dimensional range profile recording-based background contrast target detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant