CN114677500B - Weakly supervised video license plate recognition method based on eye tracker point annotation information

Weakly supervised video license plate recognition method based on eye tracker point annotation information

Info

Publication number
CN114677500B
Authority
CN
China
Prior art keywords
license plate
viewpoint
frame
video
peripheral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210571171.4A
Other languages
Chinese (zh)
Other versions
CN114677500A (en)
Inventor
刘寒松
王永
王国强
刘瑞
翟贵乾
李贤超
焦安健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonli Holdings Group Co Ltd
Original Assignee
Sonli Holdings Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sonli Holdings Group Co Ltd filed Critical Sonli Holdings Group Co Ltd
Priority to CN202210571171.4A priority Critical patent/CN114677500B/en
Publication of CN114677500A publication Critical patent/CN114677500A/en
Application granted granted Critical
Publication of CN114677500B publication Critical patent/CN114677500B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of license plate detection and relates to a weakly supervised video license plate recognition method based on eye tracker point annotation information. The method first generates viewpoint position coordinates with an eye tracker and converts them into viewpoint area information by viewpoint smoothing; a selective search algorithm then produces a large number of candidate peripheral bounding boxes, the consistency between the viewpoint area and each candidate box is calculated to determine an initial peripheral bounding box, and the quality of the initial bounding box is improved through inter-frame consistency. Based on the viewpoint area information, a region growing algorithm generates a high-quality license plate peripheral bounding box, which is used to train a detection network; the detection network then produces the license plate detection result, so that inclined and distorted license plates are handled. The method can be used for viewpoint-area-based weakly supervised video license plate detection as well as for other inclined-target detection tasks such as scene text detection and supermarket commodity detection.

Description

Weakly supervised video license plate recognition method based on eye tracker point annotation information
Technical Field
The invention belongs to the technical field of license plate detection, and relates to a weakly supervised video license plate recognition method based on eye tracker point annotation information.
Background
In the big data era, artificial intelligence technology is of great significance to the construction of smart cities. License plate recognition, as a key link of intelligent transportation, has strong practical significance and application value: it is used to monitor illegal vehicles in traffic management and to ease parking management in residential communities, shopping malls, scenic spots and the like. With the development of machine learning, deep learning has become the mainstream of the license plate recognition field. Compared with traditional license plate recognition systems, deep learning builds a neural network model from a data set, shows superior performance in image recognition, and learns the various feature information of license plates through training, markedly improving the speed and accuracy of license plate recognition.
The license plate recognition process mainly consists of three stages: license plate localization, license plate character segmentation and license plate character recognition. Existing license plate recognition borrows many algorithms from early object recognition, such as Fast RCNN and YOLO V3. However, strongly supervised deep learning algorithms usually need to fit massive data to reach high accuracy, and this data-driven mode has a major drawback: only scenes present in the data set can be recognized, and once noise information is introduced (untrained difficult scenes, such as license plate detection under extreme conditions), the accuracy of the model drops off a cliff and generalization is poor. In addition, the images in current data sets are mainly single frames captured by snapshot or extracted from video streams; although this increases the data volume and greatly improves license plate recognition efficiency, it has a major shortcoming: the intrinsic motion information in the video is discarded and cannot be fully exploited. For license plate recognition, coarse localization of the license plate from motion information can narrow the problem domain of fine-grained license plate recognition.
At present, license plate images are mainly annotated manually: a peripheral bounding box is drawn at the license plate position to generate license plate coordinates. However, a video often contains hundreds of frames and frame-by-frame annotation is extremely tedious, and license plates that appear distorted during recognition further increase the complexity of the annotation work. On the other hand, video information is highly repetitive, especially in fixed-view license plate recognition videos, so frame-by-frame annotation leads to repeated labor and consumes a large amount of manpower, material and financial resources.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a weakly supervised video license plate recognition method based on eye tracker point annotation information.
In order to achieve the purpose, the specific process for realizing license plate recognition comprises the following steps:
(1) collecting the license plate position information of the viewpoint of the eye tracker;
(2) generating viewpoint area information by using the license plate position coordinates of the viewpoint of the eye tracker in a viewpoint smoothing mode;
(3) generating recommended peripheral bounding boxes frame by frame for the collected video through a selective search algorithm, calculating the consistency between the viewpoint area and the recommended peripheral bounding boxes, determining an initial peripheral bounding box, and improving the quality of the initial peripheral bounding box through inter-frame consistency;
(4) based on the initial peripheral bounding box, firstly smoothing the coordinates of the inter-frame initial peripheral bounding box, and then generating a refined license plate peripheral bounding box based on the weak supervision of viewpoint information by adopting a region growing algorithm;
(5) training a license plate detection network with the high-quality refined license plate peripheral bounding boxes (pseudo annotation information) obtained in step (4), wherein the license plate detection network comprises a category branch and a regression branch, the category branch determines whether a license plate is present, the regression branch predicts the coordinates of the four vertices of the refined license plate peripheral bounding box, and the license plate position coordinates are output directly by the license plate detection network, thereby completing license plate recognition and handling inclined and distorted license plates.
Specifically, the specific process of collecting the license plate position information of the eye tracker viewpoint in step (1) is as follows:
(1-1) constructing a data set: collecting videos containing conventional, inclined and distorted license plates in scenes such as traffic monitoring, expressway intersections and large parking lots, constructing a data set of not less than 1000 license plate video segments, dividing the data set into a training set, a validation set and a test set according to the ratio 6:2:2, trimming the video length by deleting 'idle periods' in which no vehicle information appears for a long time, and constructing the original data set N from the trimmed videos;
(1-2) debugging the eye tracker: initializing the video size and numerical range in the original data set N to fit the computer screen, installing the eye tracker at the bottom of the computer monitor, starting the calibration program supplied with the eye tracker, and, with the worker's head fixed, having the worker gaze in turn at the four corners and the center of the screen so that the eye tracker captures the eye positions; after calibration, the worker keeps the head as still as possible and fixes the gaze on the license plate position;
(1-3) collecting the data set: playing the video for which viewpoint data are to be collected, starting the viewpoint position capture program of the eye tracker, and outputting the viewpoint position coordinates to the corresponding folder to obtain the license plate position information P(x, y) of the eye tracker viewpoint; the video pauses for 1 second at the initial frame to ensure that the worker can lock the viewpoint onto the license plate position and annotate the license plate position information with high quality; after the video finishes playing, the worker rests for 10 seconds to prevent fatigue, and if the worker becomes fatigued during annotation, collection stops immediately and resumes only after the worker has fully rested.
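For illustration only, the following sketch shows one way the timestamped viewpoint records produced in step (1-3) could be aggregated into a single viewpoint P(x, y) per video frame; it assumes records of the form (x, y, time) as described in the embodiment below, a known frame rate, and a simple per-frame average, none of which are prescribed by the patent itself.

```python
import csv
from collections import defaultdict

def viewpoints_per_frame(csv_path, fps=25.0):
    """Average all eye-tracker samples whose timestamps fall inside each frame interval."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])            # frame index -> [sum_x, sum_y, count]
    with open(csv_path, newline="") as f:
        for x, y, t in csv.reader(f):                     # assumed record format: x, y, time
            frame = int(float(t) * fps)                   # sample time (seconds) -> frame index
            s = sums[frame]
            s[0] += float(x); s[1] += float(y); s[2] += 1
    return {k: (s[0] / s[2], s[1] / s[2]) for k, s in sums.items()}
```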
Specifically, the specific process of generating the viewpoint area information in step (2) is as follows:
(2-1) viewpoint data post-processing: according to the license plate position information P(x, y) of the eye tracker viewpoint obtained in step (1), smoothing the original license plate position coordinates and expanding them into a viewpoint area A(x, y), wherein the values of the eight coordinate points around the viewpoint (upper, lower, left, right, upper left, upper right, lower left and lower right) are set to 1;
(2-2) batch-wise inter-frame consistency weighting, obtaining A'(x, y) from the viewpoint area A(x, y): the data obtained in step (2-1) are divided into batches, i.e. a sliding window is applied between frames to group the video into segments (every 5 frames form 1 group, comprising the current frame t, the two preceding frames t-1 and t-2, and the two following frames t+1 and t+2), and the coordinates within each group are smoothed by weighting, with smaller weights for frames farther from the current frame; meanwhile, speed weighting is introduced, and for vehicles moving too fast the weights are updated according to the movement speed between two frames, so that the higher the speed, the smaller the weights of the adjacent frames.
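The following is a minimal sketch of steps (2-1) and (2-2), not taken from the patent text: it assumes per-frame viewpoint coordinates from step (1), a known image size, the 30%/60%/100%/60%/30% window weights quoted in the embodiment, and an illustrative velocity damping factor v_scale.

```python
import numpy as np

def viewpoint_area(p, h, w):
    """Step (2-1): expand a viewpoint (x, y) into its 8-neighbourhood mask A(x, y)."""
    a = np.zeros((h, w), dtype=np.uint8)
    x, y = int(round(p[0])), int(round(p[1]))
    a[max(y - 1, 0):min(y + 2, h), max(x - 1, 0):min(x + 2, w)] = 1
    return a

def smooth_viewpoints(points, base_w=(0.3, 0.6, 1.0, 0.6, 0.3), v_scale=50.0):
    """Step (2-2): 5-frame sliding-window weighted smoothing of viewpoint coordinates."""
    pts = np.asarray(points, dtype=float)                 # shape (T, 2): one (x, y) per frame
    out = pts.copy()
    for t in range(len(pts)):
        idx = [min(max(t + d, 0), len(pts) - 1) for d in (-2, -1, 0, 1, 2)]
        w = np.array(base_w, dtype=float)
        # velocity weighting: large inter-frame motion -> smaller neighbour weights
        v = np.linalg.norm(pts[min(t + 1, len(pts) - 1)] - pts[max(t - 1, 0)])
        w[[0, 1, 3, 4]] /= (1.0 + v / v_scale)
        out[t] = (w[:, None] * pts[idx]).sum(0) / w.sum()
    return out
```

The windowed weighting keeps the viewpoint trajectory continuous between frames while the velocity term limits how much fast-moving plates are pulled toward neighbouring-frame positions.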
Specifically, the specific process of calculating the consistency between the viewpoint area and the peripheral bounding box in step (3) is as follows: using the viewpoint area A'(x, y) obtained in step (2-2) and a selective search algorithm, recommended peripheral bounding boxes (proposals) are generated frame by frame for the collected video and their consistency with the viewpoint area is calculated; a prior constraint is introduced, namely the license plate aspect ratio (width : height = 3.5), and any box that does not conform to the actual aspect ratio of the license plate is deleted, finally yielding the initial peripheral bounding box Box_1 (Stage 1).
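A minimal sketch of this consistency test follows; it assumes candidate boxes (x0, y0, x1, y1) coming from any selective-search style proposal generator, measures consistency as the fraction of viewpoint-area pixels covered by the box, and uses an illustrative tolerance around the 3.5 aspect-ratio prior that is not specified in the patent.

```python
import numpy as np

def box_consistency(area_mask, box):
    """Fraction of the viewpoint-area pixels that fall inside the candidate box."""
    x0, y0, x1, y1 = box
    inside = area_mask[y0:y1, x0:x1].sum()
    total = area_mask.sum()
    return inside / total if total else 0.0

def pick_initial_box(area_mask, boxes, ratio=3.5, tol=1.0):
    """Keep boxes whose aspect ratio is close to the plate prior, then take the
    proposal most consistent with the viewpoint area as Box_1 (Stage 1)."""
    best, best_score = None, 0.0
    for b in boxes:
        w, h = b[2] - b[0], b[3] - b[1]
        if h == 0 or abs(w / h - ratio) > tol:            # prior constraint: width ≈ 3.5 × height
            continue
        score = box_consistency(area_mask, b)
        if score > best_score:
            best, best_score = b, score
    return best
```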
Specifically, the specific process of peripheral bounding box refinement in step (4) is as follows:
(4-1) primary inter-frame smoothing of the peripheral bounding box: the coordinate information of the initial peripheral bounding box Box_1 is smoothed across frames, and the four vertices (8 coordinates) of the initial peripheral bounding box are fused by weighting to obtain Box_2; the weights are computed such that frames farther from the current frame receive smaller weights, and for vehicles moving too fast the weights are updated according to the movement speed between two frames, so that the higher the speed, the smaller the weights applied to adjacent frames;
(4-2) refinement of the peripheral bounding box Box_2 based on a region growing algorithm: to address distorted and inclined license plates inside the detection box, a region growing algorithm is applied within the license plate peripheral bounding box Box_2: pixels sharing the same characteristics as the viewpoint area A'(x, y) obtained in step (2-2) are continuously merged into the current pixel-level license plate segmentation mask, refining the license plate detection region; license plate regions conforming to the license plate characteristics are merged continuously, thereby generating a refined segmentation mask Seg_1 of the license plate region, after which a secondary inter-frame smoothing is performed (Stage 2, smoothed in the same way as Stage 1) and the high-quality refined license plate peripheral bounding box Box_3 is obtained from the pixel-level segmentation mask.
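The sketch below illustrates one possible form of step (4-2): growing a pixel-level plate mask from the viewpoint area inside Box_2 and taking its tight bounding box as Box_3. A single-channel image, a simple intensity-similarity criterion, the threshold value and a non-empty seed region are all illustrative assumptions; the secondary inter-frame smoothing reuses the weighting of the earlier sketch and is omitted here.

```python
from collections import deque
import numpy as np

def grow_plate_mask(gray, seed_mask, box, thresh=20):
    """Region growing from the viewpoint-area seed inside Box_2; returns the
    refined mask and its tight bounding box (Box_3)."""
    x0, y0, x1, y1 = box
    mask = np.zeros_like(gray, dtype=bool)
    mask[seed_mask > 0] = True                            # seed: viewpoint-area pixels
    ref = gray[seed_mask > 0].mean()                      # reference plate intensity
    q = deque((int(r), int(c)) for r, c in np.argwhere(seed_mask > 0))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if y0 <= rr < y1 and x0 <= cc < x1 and not mask[rr, cc] \
                    and abs(float(gray[rr, cc]) - ref) < thresh:
                mask[rr, cc] = True                       # merge pixel with plate-like appearance
                q.append((rr, cc))
    ys, xs = np.nonzero(mask)
    return mask, (xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)
```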
Compared with the prior art, the invention has the following advantages:
(1) the eye tracker is used for providing the position coordinates of the license plate in the video, so that the problem of complexity of video license plate labeling is solved, and the information labeling speed is increased;
(2) weak supervision based on viewpoint information is used to generate license plate position information, and the license plate detection network is trained with the generated license plate information; compared with the traditional license plate localization approach based on strongly supervised deep learning, the method can be applied to large-scale weakly annotated video license plate recognition, avoids the poor generalization and heavy data dependence of strongly supervised methods, and greatly saves manpower, material and financial resources;
(3) adaptive inter-frame consistency is used to align inter-frame features with license plate features, improving video license plate recognition performance; meanwhile, a region growing algorithm is introduced to handle inclined and distorted license plates, so the method can be used for weakly supervised video license plate detection based on the viewpoint area, as well as for various inclined-target detection tasks such as scene text detection and supermarket commodity detection.
Drawings
FIG. 1 is a flow chart of the inter-frame consistency weighted smoothing according to the present invention.
Fig. 2 is a diagram of the inter-frame consistency network architecture according to the present invention.
Fig. 3 is a work flow diagram of the license plate recognition method of the present invention.
Detailed Description
The invention is further described below by way of examples and with reference to the accompanying drawings, without limiting the scope of the invention in any way.
Example (b):
In this method, an eye tracker is used to annotate the license plate point region; the license plate region is fully detected by exploiting the inter-frame consistency contained in the video, and the distortion problem of the license plate is addressed by a region growing algorithm based on the license plate point-region annotation, which resolves the cost problem of frame-by-frame annotation and the generalization problem of deep learning algorithms. The specific implementation comprises the following steps:
(1) data set construction: collecting videos containing conventional, inclined and distorted license plates from scenes such as traffic monitoring, expressway intersections and large parking lots; constructing a data set of 1000 license plate video segments and dividing it into a training set, a validation set and a test set of 600, 200 and 200 videos respectively; trimming the video length, for example removing idle periods in which no vehicle information appears for a long time, and taking the trimmed video segments as the original data set N;
(2) equipment debugging: first initializing the video size and numerical range to fit the computer screen, installing the eye tracker at the bottom of the computer monitor and starting the calibration program supplied with the eye tracker; the worker, with the head kept in a fixed position, gazes in turn at the four corners and the center of the screen so that the eye tracker captures the eye positions; after calibration, the worker keeps the head as still as possible and concentrates the gaze on the license plate position;
(3) collecting the data set: starting to play a video containing license plates for which viewpoint data are to be collected, starting the eye tracker position capture program, and outputting the position coordinates (x, y, time) to a folder, where x, y and time denote the horizontal coordinate, the vertical coordinate and the viewpoint collection time respectively; the video stays on the initial frame for 1 second to ensure that the license plate position information can be annotated with high quality, i.e. so that the worker can lock the viewpoint onto the license plate position; after the video finishes playing, the worker rests for 10 seconds to prevent fatigue;
(4) license plate position information post-processing: based on the viewpoint license plate position information P(x, y) obtained in step (3), smoothing is performed and the viewpoint coordinates are expanded into a viewpoint area A(x, y) using an 8-neighborhood method, wherein the values of the coordinate points around the viewpoint (upper, lower, left, right, upper left, upper right, lower left and lower right) are set to 1;
(5) inter-frame smoothing of the viewpoint area A(x, y) to obtain A'(x, y): to form a license plate motion trajectory in the viewpoint area, the inter-frame license plate positions need to be connected by their coordinates to ensure continuity of the license plate coordinate positions, and inter-frame smoothing also filters out viewpoint loss or drift caused by eye fatigue of the worker; however, directly averaging the viewpoint-area coordinates of adjacent frames would introduce noise and even accumulate coordinate position errors, so to prevent error accumulation a batch division strategy is adopted, i.e. a sliding window groups the video into segments and weighted (w1) smoothing is applied, with smaller weights for frames farther from the current frame (1 batch is 5 frames, with per-frame weights of 30%, 60%, 100%, 60% and 30%); meanwhile, speed weighting (w2) is introduced, and for a vehicle moving too fast the weights are updated according to the movement speed (v2 - v1) between two frames, so that the higher the speed, the smaller the weights of the adjacent frames; this ensures inter-frame continuity while filtering out noise information;
(6) generating peripheral box recommendations: applying a selective search algorithm frame by frame to the collected video (N) to generate recommended license plate peripheral bounding boxes (proposals);
(7) consistency calculation between the viewpoint area A'(x, y) and the peripheral bounding boxes (proposals): the consistency between the viewpoint area A'(x, y) and each peripheral bounding box is calculated as an occupancy ratio (IoU); if consistency were calculated directly as the viewpoint-area size divided by the bounding-box area, boxes that cover the license plate incompletely would dominate; to address this, a prior constraint is introduced, namely the license plate aspect ratio (width : height = 3.5), and any box that does not conform to the actual aspect ratio of the license plate is deleted, thereby generating the initial peripheral bounding box Box_1(x, y) (Stage 1);
(8) primary smoothing of the inter-frame peripheral bounding box: based on the initial peripheral bounding box Box_1(x, y) (Stage 1), inter-frame consistency smoothing of the initial peripheral bounding box is applied to obtain Box_2: the four vertices (8 coordinates) are fused by weighting the peripheral bounding box coordinates across frames, with smaller weights for frames farther from the current frame (per-frame weights of 30%, 60%, 100%, 60% and 30%); for vehicles moving too fast, the weights are updated according to the movement speed (v2 - v1) between two frames, so that the higher the speed, the smaller the weights of the adjacent frames;
(9) refinement of the peripheral bounding box Box_2 based on a region growing algorithm: to address distorted and inclined license plates inside the detection box, a region growing algorithm based on the viewpoint area A'(x, y) is applied within the license plate peripheral bounding box obtained from the primary inter-frame smoothing; license plate pixels with the same characteristics as the viewpoint area are continuously merged into the current license plate region to refine the license plate detection region, producing a refined pixel-level license plate segmentation mask (Stage 2); a secondary smoothing of the inter-frame license plate position coordinates is then performed (in the same way as Stage 1), yielding the peripheral bounding box Box_3(x, y) of the refined pixel-level license plate segmentation mask;
(10) training the license plate detection network with the refined peripheral bounding boxes: the peripheral bounding boxes Box_3(x, y) of the refined pixel-level license plate segmentation masks are used as weak supervision information to train the license plate detection network; the whole network is divided into two branches, a classification branch (whether a license plate is present) and a regression branch (position coordinate points), and the trained license plate detection network directly outputs the license plate position coordinates to complete license plate recognition; in this embodiment an accuracy of 93% is achieved without the assistance of any other information; during testing, video sequence data are input, and each video frame is resized to 512 x 512 and pre-processed by mean subtraction and normalization before being fed into the network.
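For illustration, a minimal sketch of the two-branch detection head and the test-time preprocessing of step (10) is given below. The backbone layers, channel widths and normalization statistics are assumptions made only for this sketch, not details from the patent; as described above, the regression branch outputs the 8 coordinates of the four plate vertices and the classification branch scores plate presence.

```python
import torch
import torch.nn as nn

class PlateDetector(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(                    # placeholder feature extractor
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls_branch = nn.Linear(feat_dim, 1)          # classification: plate / no plate
        self.reg_branch = nn.Linear(feat_dim, 8)          # regression: (x, y) of four vertices

    def forward(self, x):
        f = self.backbone(x)
        return torch.sigmoid(self.cls_branch(f)), self.reg_branch(f)

def preprocess(frame):
    """frame: 512x512x3 uint8 array (already resized); mean removal and normalization."""
    x = torch.from_numpy(frame).float().permute(2, 0, 1) / 255.0
    mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)   # illustrative statistics
    std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
    return ((x - mean) / std).unsqueeze(0)                # 1 x 3 x 512 x 512 network input
```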
In this weakly supervised video license plate recognition algorithm based on eye tracker point annotation information, high-quality candidate boxes are generated from the eye tracker point annotations, and inter-frame features are aligned by exploiting the inter-frame consistency of the video, which removes the tedium of annotating license plates in videos with the traditional manual annotation method; the problem of misaligned deep features of inclined or distorted license plates is solved with a local region growing algorithm, so that license plate detection and correction can be realized efficiently.
Algorithms, software, etc., not disclosed herein are prior art.
It is noted that the present embodiment is intended to aid in further understanding of the present invention, but those skilled in the art will understand that: various substitutions and modifications are possible without departing from the spirit and scope of the invention and appended claims. Therefore, the invention should not be limited to the embodiments disclosed, but the scope of the invention is defined by the appended claims.

Claims (3)

1. A weakly supervised video license plate recognition method based on eye tracker point annotation information, characterized in that the specific process of license plate recognition is as follows:
(1) collecting the license plate position information of the viewpoint of the eye tracker;
(2) generating viewpoint area information by using the license plate position coordinates of the viewpoint of the eye tracker in a viewpoint smoothing mode, which specifically comprises the following steps:
(2-1) viewpoint data post-processing: smoothing the original license plate position coordinates according to the license plate position information P(x, y) of the eye tracker viewpoint obtained in step (1), and expanding them into a viewpoint area A(x, y), wherein the values of the eight coordinate points around the viewpoint are set to 1;
(2-2) batch-wise inter-frame consistency weighting, obtaining A'(x, y) from the viewpoint area A(x, y): dividing the data obtained in step (2-1) into batches, i.e. applying a sliding window between frames to group the video into segments of 5 frames each, and smoothing the coordinates within each group by weighting, with smaller weights for frames farther from the current frame; meanwhile, speed weighting is introduced, and for vehicles moving too fast the weights are updated according to the movement speed between two frames, so that the higher the speed, the smaller the weights of the adjacent frames, which ensures inter-frame continuity while filtering out noise information;
(3) generating recommended peripheral bounding boxes frame by frame for the collected video through a selective search algorithm, calculating the consistency between the viewpoint area and the recommended peripheral bounding boxes, determining an initial peripheral bounding box, and improving the quality of the initial peripheral bounding box through inter-frame consistency;
(4) based on the initial peripheral bounding box, firstly, smoothing the coordinates of the initial peripheral bounding box between frames, and then generating a refined license plate peripheral bounding box based on weak supervision of viewpoint information by adopting a region growing algorithm, wherein the method specifically comprises the following steps:
(4-1) primary inter-frame smoothing of the peripheral bounding box: the coordinate information of the initial peripheral bounding box Box_1 is smoothed across frames, and the four vertices of the initial peripheral bounding box are fused by weighting to obtain Box_2; the weights are computed such that frames farther from the current frame receive smaller weights, and for vehicles moving too fast the weights are updated according to the movement speed between two frames, so that the higher the speed, the smaller the weights applied to adjacent frames;
(4-2) refinement of the peripheral bounding box Box_2 based on a region growing algorithm: to address distorted and inclined license plates inside the detection box, pixels sharing the same characteristics as the viewpoint area A'(x, y) obtained in step (2-2) are continuously merged, within the license plate peripheral bounding box Box_2, into the current pixel-level license plate segmentation mask so as to refine the license plate detection region; license plate regions conforming to the license plate characteristics are merged continuously to generate a refined segmentation mask of the license plate region, a secondary inter-frame smoothing is performed, and the high-quality refined license plate peripheral bounding box Box_3 is obtained from the pixel-level segmentation mask;
(5) training a license plate detection network with the high-quality refined license plate peripheral bounding boxes obtained in step (4), wherein the license plate detection network comprises a category branch and a regression branch, the category branch determines whether a license plate is present, the regression branch predicts the coordinates of the four vertices of the refined license plate peripheral bounding box, and the license plate position coordinates are output directly by the license plate detection network to complete license plate recognition.
2. The weakly supervised video license plate recognition method based on eye tracker point annotation information according to claim 1, characterized in that the specific process of collecting the license plate position information of the eye tracker viewpoint in step (1) is as follows:
(1-1) constructing a data set: collecting videos containing conventional, inclined and distorted license plates from traffic monitoring, expressway intersections and large parking lots, constructing a data set of not less than 1000 license plate video segments, dividing the data set into a training set, a validation set and a test set according to the ratio 6:2:2, trimming the video length by deleting 'idle periods' in which no vehicle information appears for a long time, and constructing the original data set N from the trimmed videos;
(1-2) debugging the eye tracker: initializing the video size and numerical range in the original data set N to fit the computer screen, installing the eye tracker at the bottom of the computer monitor, starting the calibration program supplied with the eye tracker, and, with the worker's head fixed, having the worker gaze in turn at the four corners and the center of the screen so that the eye tracker captures the eye positions; after calibration, the worker keeps the head as still as possible and concentrates the gaze on the license plate position;
(1-3) collecting the data set: playing the video for which viewpoint data are to be collected, starting the viewpoint position capture program of the eye tracker, and outputting the viewpoint position coordinates to the corresponding folder to obtain the license plate position information P(x, y) of the eye tracker viewpoint; the video pauses for 1 second at the initial frame to ensure that the worker can lock the viewpoint onto the license plate position and annotate the license plate position information with high quality; after the video finishes playing, the worker rests for 10 seconds to prevent fatigue, and if the worker becomes fatigued during annotation, collection stops immediately and resumes only after the worker has fully rested.
3. The weakly supervised video license plate recognition method based on eye tracker point annotation information according to claim 2, characterized in that the specific process of calculating the consistency between the viewpoint area and the peripheral bounding box in step (3) is as follows: using the viewpoint area A'(x, y) obtained in step (2-2) and a selective search algorithm, recommended peripheral bounding boxes are generated frame by frame for the collected video and their consistency with the viewpoint area is calculated; a prior constraint is introduced, namely the aspect ratio of the license plate is calculated, and if it does not conform to the actual aspect ratio of the license plate the current box is deleted, finally obtaining the initial peripheral bounding box Box_1.
CN202210571171.4A 2022-05-25 2022-05-25 Weak surveillance video license plate recognition method based on eye tracker point annotation information Active CN114677500B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210571171.4A CN114677500B (en) 2022-05-25 2022-05-25 Weak surveillance video license plate recognition method based on eye tracker point annotation information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210571171.4A CN114677500B (en) 2022-05-25 2022-05-25 Weak surveillance video license plate recognition method based on eye tracker point annotation information

Publications (2)

Publication Number Publication Date
CN114677500A CN114677500A (en) 2022-06-28
CN114677500B true CN114677500B (en) 2022-08-23

Family

ID=82080718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210571171.4A Active CN114677500B (en) 2022-05-25 2022-05-25 Weak surveillance video license plate recognition method based on eye tracker point annotation information

Country Status (1)

Country Link
CN (1) CN114677500B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273894A (en) * 2017-06-15 2017-10-20 珠海习悦信息技术有限公司 Recognition methods, device, storage medium and the processor of car plate
US10497258B1 (en) * 2018-09-10 2019-12-03 Sony Corporation Vehicle tracking and license plate recognition based on group of pictures (GOP) structure
US20220063488A1 (en) * 2020-08-31 2022-03-03 David Mays Dynamic license plate for displaying/outputting license plate information/data

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107038442A (en) * 2017-03-27 2017-08-11 新智认知数据服务有限公司 A kind of car plate detection and global recognition method based on deep learning
CN108509907A (en) * 2018-03-30 2018-09-07 北京市商汤科技开发有限公司 Vehicle lamp detection method, method, apparatus, medium and the equipment for realizing intelligent driving
CN109816013A (en) * 2019-01-17 2019-05-28 陆宇佳 It is tracked based on eye movement and carries out image pattern quick obtaining device and method
CN111368830A (en) * 2020-03-03 2020-07-03 西北工业大学 License plate detection and identification method based on multi-video frame information and nuclear phase light filtering algorithm
CN111626155A (en) * 2020-05-14 2020-09-04 新华智云科技有限公司 Basketball position point generation method and equipment
CN114037923A (en) * 2021-10-15 2022-02-11 上海洛塔信息技术有限公司 Target activity hotspot graph drawing method, system, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
License plate detection and localization in complex scenes based on deep learning; Tian Ying et al.; 2018 Chinese Control And Decision Conference (CCDC); 2018-07-09; pp. 6569-6571 *

Also Published As

Publication number Publication date
CN114677500A (en) 2022-06-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant