CN113888740A - Method and device for determining binding relationship between target license plate frame and target vehicle frame - Google Patents
Method and device for determining binding relationship between target license plate frame and target vehicle frame
- Publication number
- CN113888740A (application number CN202110970069.7A)
- Authority
- CN
- China
- Prior art keywords
- license plate
- target
- frame
- vehicle
- vehicle frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The embodiments of the application disclose a method and a device for determining a binding relationship between a target license plate frame and a target vehicle frame, which are used for improving the accuracy with which a high-mounted video pile binds a vehicle to its license plate. The method in the embodiments of the application comprises the following steps: acquiring a target image; extracting a target vehicle frame and a target license plate frame from the target image through a target detection algorithm; performing a regression operation on the key points in the target vehicle frame and the key points in the target license plate frame according to a regression model to generate a first operation result and a second operation result; judging whether a first movement trend correlation parameter between the first operation result and the second operation result is larger than a first threshold; if the first movement trend correlation parameter is larger than the first threshold, judging whether a second movement trend correlation parameter between the target vehicle frame and the target license plate frame is larger than a second threshold; and if the second movement trend correlation parameter is larger than the second threshold, binding the target vehicle frame and the target license plate frame.
Description
Technical Field
The embodiment of the application relates to the field of data processing, in particular to a method and a device for determining a binding relationship between a target license plate frame and a target vehicle frame.
Background
With the improvement of national living standards and the rapid development of the automobile industry, the number of vehicles in the country grows day by day, while the number of urban parking spaces is limited. On-street parking is therefore developing rapidly, and making full use of on-street resources is the clear trend.
In the prior art, capturing vehicles and license plates through a camera can generate wrong binding information when a plurality of vehicles are too close together or are moving. For example, for vehicles traveling on the road surface, because the vehicle detection frame and the license plate detection frame are inaccurate, when two or more cars come close together, one license plate may be bound by two or even more vehicle frames. The license plate is then bound to the wrong vehicle, and the parking fee cannot be collected normally.
Disclosure of Invention
The embodiment of the application provides a method and a device for determining a binding relationship between a target license plate frame and a target vehicle frame, which are used for improving the accuracy of binding a vehicle and a license plate.
The application provides a method for determining a binding relationship between a target license plate frame and a target vehicle frame in a first aspect, which comprises the following steps:
acquiring a target image;
extracting a target vehicle frame and a target license plate frame in the target image through a target detection algorithm;
performing regression operation on the key points in the target vehicle frame and the key points in the target license plate frame according to a regression model to generate a first operation result and a second operation result;
judging whether a first movement trend correlation parameter between the first operation result and the second operation result is larger than a first threshold value or not;
if the first movement trend correlation parameter is larger than the first threshold value, judging whether a second movement trend correlation parameter between the target vehicle frame and the target license plate frame is larger than a second threshold value;
and if the second movement trend correlation parameter is larger than the second threshold value, binding the target vehicle frame and the target license plate frame.
Optionally, the extracting, by using a target detection algorithm, a target vehicle frame and a target license plate frame in the target image includes:
extracting a vehicle frame and a license plate frame in the target image through a target detection algorithm based on deep learning;
calculating the overlapping degree of the vehicle frame and the license plate frame;
judging whether the overlapping degree is larger than a preset overlapping degree or not;
and if the overlapping degree is greater than the preset overlapping degree, determining that the vehicle frame is the target vehicle frame and that the license plate frame is the target license plate frame.
Optionally, the calculating the overlapping degree of the vehicle frame and the license plate frame includes:
respectively acquiring a first vehicle frame and a first license plate frame from all the detected vehicle frames and all the detected license plate frames;
calculating an intersection-over-union (IoU) ratio of the first vehicle frame and the first license plate frame;
respectively acquiring a second vehicle frame and a second license plate frame from all the vehicle frames and all the license plate frames;
calculating the intersection-over-union ratio of the second vehicle frame and the second license plate frame, until the intersection-over-union calculation for all the vehicle frames and all the license plate frames is completed;
and determining the overlapping degree according to the intersection ratio.
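The overlap computation described in these steps can be sketched as follows, assuming axis-aligned boxes in (x1, y1, x2, y2) form; the function name and box format are illustrative, not taken from the patent.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Because a license plate frame is much smaller than its vehicle frame, the raw IoU between a correct pair is small but nonzero, which is why the patent compares it against a preset overlapping degree rather than a fixed high threshold.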
Optionally, the performing regression operation on the key points in the target vehicle frame and the key points in the target license plate frame according to the regression model to generate a first operation result and a second operation result includes:
performing regression operation on key points of the target vehicle frame according to a vehicle regression model, wherein the vehicle key points comprise coordinates of four corner points of the target vehicle frame and coordinates of middle points of four edges of the target vehicle frame;
shifting the vehicle key point relative to the central point of the target vehicle frame to generate a vehicle perspective transformation matrix of the target vehicle frame, and taking the vehicle perspective transformation matrix as the first operation result;
performing regression operation on key points of the target license plate frame according to a license plate regression model, wherein the coordinates of the key points of the license plate comprise coordinates of four corner points of the target license plate frame;
and offsetting the license plate key point relative to the central point of the target license plate frame to generate a license plate perspective transformation matrix of the target license plate frame, and taking the license plate perspective transformation matrix as the second operation result.
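The keypoint-to-matrix step above can be sketched as follows for the license plate case. This is a minimal illustration: the corner offsets regressed relative to the box center are turned into a perspective transformation onto a canonical front-on rectangle via a direct linear transform (DLT). The 94×24 canonical size and all names are assumptions; the patent does not specify the canonical size or the solver.

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve the 3x3 homography mapping four src points onto four dst points (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the homography is the null vector of the 8x9 system
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def plate_matrix(center, offsets, plate_w=94, plate_h=24):
    """Corner offsets regressed relative to the box center -> perspective
    transformation matrix onto a canonical plate rectangle."""
    cx, cy = center
    corners = [(cx + dx, cy + dy) for dx, dy in offsets]
    canonical = [(0, 0), (plate_w, 0), (plate_w, plate_h), (0, plate_h)]
    return perspective_matrix(corners, canonical)
```

The vehicle case works the same way, except that eight key points (four corners plus four edge midpoints) are regressed and only the four corners are needed to fix the homography.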
Optionally, before the determining whether the first movement tendency correlation parameter is greater than the first threshold, the method further includes:
associating a target license plate in the target license plate frame with a target vehicle in the target vehicle frame to generate an association record, wherein the association record comprises key point information of the target vehicle frame and key point information of the target license plate frame;
adding the association record to a trace queue;
generating a key point coordinate record of the association record through the tracking queue;
and calculating a correlation ratio between the target vehicle and the target license plate in the tracking queue according to the key point coordinate records, and taking the correlation ratio as a second movement trend correlation parameter, wherein the correlation ratio is the ratio of the key point coordinate records with the movement trend correlation to the total number of the key point coordinate records.
Optionally, the generating, by the tracking queue, the key point coordinate record of the association record includes:
adding the key point coordinates of the current frame of the target vehicle frame and the key point coordinates of the current frame of the target license plate frame that have the association relation to a tracking queue;
and acquiring, from the tracking queue, the key point coordinate record of the target vehicle frame and the key point coordinate record of the target license plate frame that have the association relation, and generating a key point coordinate record.
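A minimal sketch of the tracking-queue correlation ratio described above: consecutive records in the queue are compared, and a transition counts as movement-trend correlated when the average displacements of the vehicle key points and the plate key points point in nearly the same direction. The cosine-similarity criterion and the 0.8 threshold are assumptions, not taken from the patent.

```python
import math

def _mean_shift(prev, curr):
    """Average displacement of a key-point set between two consecutive frames."""
    dx = sum(c[0] - p[0] for p, c in zip(prev, curr)) / len(prev)
    dy = sum(c[1] - p[1] for p, c in zip(prev, curr)) / len(prev)
    return dx, dy

def correlation_ratio(records, cos_threshold=0.8):
    """Second movement trend correlation parameter: the share of frame
    transitions in the tracking queue where the vehicle and the plate
    key points moved in (nearly) the same direction."""
    records = list(records)  # each record is (vehicle_kpts, plate_kpts)
    correlated, total = 0, 0
    for (v0, p0), (v1, p1) in zip(records, records[1:]):
        vx, vy = _mean_shift(v0, v1)
        px, py = _mean_shift(p0, p1)
        nv, npl = math.hypot(vx, vy), math.hypot(px, py)
        total += 1
        if nv == 0 and npl == 0:
            correlated += 1          # both stationary: trends agree
        elif nv > 0 and npl > 0 and (vx * px + vy * py) / (nv * npl) > cos_threshold:
            correlated += 1
    return correlated / total if total else 0.0
```

A plate riding on its own vehicle yields a ratio near 1 over the queue, while a plate mistakenly paired with a neighboring vehicle drifts relative to it and the ratio drops below the second threshold.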
Optionally, before the target image is obtained, the license plate regression model needs to be trained, and the license plate regression model is trained in the following process:
acquiring a license plate image sample set, wherein images contained in the license plate image sample set are color images, and the image content in the license plate image sample set only contains one license plate;
labeling license plate key points contained in all images in the license plate image sample set, wherein the license plate key points comprise the four corner points of the license plate frame, so as to obtain a license plate labeling data set;
intercepting a license plate image set from the license plate image sample set according to a license plate marking data set;
adjusting the license plate image set according to a preset size of a license plate to obtain a license plate regression model training image set;
inputting the license plate regression model training images in the license plate regression model training image set into a CNN convolution neural network regression network for training;
the regression network training process adjusts model parameters of an initial license plate regression recognition model according to loss values output by a CNN regression network until the loss values converge, and a first training result is generated;
and generating a license plate regression model according to the training result, wherein the license plate regression model is used for performing regression on the detected license plate frame.
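The training loop described above can be sketched as follows. As a self-contained toy stand-in, a linear model replaces the CNN regression network (whose architecture the patent does not specify); the train-until-the-loss-converges structure mirrors the described process, and all names and hyperparameters are illustrative.

```python
import numpy as np

def train_keypoint_regressor(images, keypoints, lr=0.1, max_iter=500, tol=1e-6):
    """Toy stand-in for the regression network: a linear model fitted by
    gradient descent on a squared-error loss until the loss converges."""
    X = images.reshape(len(images), -1)        # flattened, resized crops
    Y = keypoints.reshape(len(keypoints), -1)  # labeled corner coordinates
    W = np.zeros((X.shape[1], Y.shape[1]))
    prev = float("inf")
    for _ in range(max_iter):
        pred = X @ W
        loss = float(np.mean((pred - Y) ** 2))
        if abs(prev - loss) < tol:             # loss value has converged
            break
        prev = loss
        W -= lr * 2 * X.T @ (pred - Y) / len(X)  # gradient step on the error
    return W, loss
```

The vehicle regression model is trained the same way on vehicle crops, with eight labeled key points (four corners plus four edge midpoints) instead of four.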
Optionally, before the target image is obtained, the vehicle regression model needs to be trained, and a training process of the vehicle regression model is as follows:
acquiring a vehicle image sample set, wherein images contained in the vehicle image sample set are color images, and the image content in the vehicle image sample set only contains one vehicle;
marking vehicle key points contained in all images in the vehicle image sample set, wherein the vehicle key points comprise four corner points and middle points of four edges of the vehicle frame, and obtaining a vehicle marking data set;
intercepting a vehicle image set from the vehicle image sample set according to a vehicle marking data set;
adjusting the vehicle image set according to a preset vehicle size to obtain a vehicle regression model training image set;
inputting the vehicle regression model training images in the vehicle regression model training image set into a CNN convolutional neural network regression network for training;
in the regression network training process, model parameters of an initial vehicle regression recognition model are adjusted according to the loss values output by the CNN regression network until the loss values converge, and a second training result is generated;
and generating a vehicle regression model according to the training result, wherein the vehicle regression model is used for performing regression on the detected vehicle frame.
Optionally, before determining whether the first movement trend correlation parameter between the first operation result and the second operation result is greater than a first threshold, the method further includes:
calculating an imaging angle between the target license plate frame and the target vehicle frame according to the first operation result and the second operation result;
and determining the correlation of the first movement trend of the target license plate frame and the target vehicle frame according to the imaging angle.
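The patent does not give the formula for the imaging angle. One plausible reading, sketched below under that assumption, is to compare the orientation of the quadrilateral regressed for the vehicle with the orientation of the quadrilateral regressed for the plate: a plate mounted on its own vehicle is imaged at nearly the same angle. The corner ordering and the 15-degree tolerance are illustrative.

```python
import math

def quad_angle(corners):
    """Orientation of a quadrilateral: angle of its top edge, in degrees.
    Corners are assumed ordered top-left, top-right, bottom-right, bottom-left."""
    (x1, y1), (x2, y2) = corners[0], corners[1]
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

def imaging_angle_correlated(vehicle_corners, plate_corners, max_diff=15.0):
    """First movement trend check via imaging angle: the plate and the
    vehicle should be imaged at nearly the same orientation."""
    return abs(quad_angle(vehicle_corners) - quad_angle(plate_corners)) <= max_diff
```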
A second aspect of the present application provides an apparatus for determining a binding relationship between a target license plate frame and a target vehicle frame, comprising:
a first acquisition unit configured to acquire a target image;
the extraction unit is used for extracting a target vehicle frame and a target license plate frame in the target image through a target detection algorithm;
the regression unit is used for performing regression operation on the key points in the target vehicle frame and the key points in the target license plate frame according to a regression model to generate a first operation result and a second operation result;
a first judging unit, configured to judge whether a first movement trend correlation parameter between the first operation result and the second operation result is greater than a first threshold;
the second judging unit is used for judging whether a second movement trend correlation parameter between the target vehicle frame and the target license plate frame is greater than a second threshold value or not when the first judging unit judges that the first movement trend correlation parameter is greater than the first threshold value;
and the binding unit is used for binding the target vehicle frame and the target license plate frame when the second movement trend correlation parameter is larger than the second threshold value.
Optionally, the extraction unit includes:
the extraction module is used for extracting a vehicle frame and a license plate frame in the target image through a target detection algorithm based on deep learning;
the calculating module is used for calculating the overlapping degree of the vehicle frame and the license plate frame;
the judging module is used for judging whether the overlapping degree is larger than a preset overlapping degree or not;
and the determining module is used for determining the vehicle frame as the target vehicle frame and the license plate frame as the target license plate frame when the judging module judges that the overlapping degree is greater than the preset overlapping degree.
Optionally, the computing module is further configured to:
respectively acquiring a first vehicle frame and a first license plate frame from all the detected vehicle frames and all the detected license plate frames;
calculating an intersection-over-union (IoU) ratio of the first vehicle frame and the first license plate frame;
respectively acquiring a second vehicle frame and a second license plate frame from all the vehicle frames and all the license plate frames;
calculating the intersection-over-union ratio of the second vehicle frame and the second license plate frame, until the intersection-over-union calculation for all the vehicle frames and all the license plate frames is completed;
and determining the overlapping degree according to the intersection ratio.
Optionally, the regression unit includes:
the first regression module is used for performing regression operation on key points of the target vehicle frame according to a vehicle regression model, wherein the vehicle key points comprise the coordinates of the four corner points of the target vehicle frame and the coordinates of the middle points of its four edges;
the first offset module is used for offsetting the vehicle key point relative to the central point of the target vehicle frame, generating a vehicle perspective transformation matrix of the target vehicle frame, and taking the vehicle perspective transformation matrix as the first operation result;
the second regression module is used for performing regression operation on key points of the target license plate frame according to a license plate regression model, and the license plate key point coordinates comprise coordinates of four corner points of the target license plate frame;
and the second offset module is used for offsetting the license plate key point relative to the central point of the target license plate frame to generate a license plate perspective transformation matrix of the target license plate frame, and taking the license plate perspective transformation matrix as the second operation result.
Optionally, the apparatus further comprises:
the association unit is used for associating a target license plate in the target license plate frame with a target vehicle in the target vehicle frame to generate an association record, and the association record comprises key point information of the target vehicle frame and key point information of the target license plate frame;
an adding unit, configured to add the association record to a trace queue;
the first generation unit is used for generating a key point coordinate record of the association record through the tracking queue;
and the first calculation unit is used for calculating a correlation ratio between the target vehicle and the target license plate in the tracking queue according to the key point coordinate records, and taking the correlation ratio as the second movement trend correlation parameter, wherein the correlation ratio is the ratio of the key point coordinate records with the movement trend correlation to the total number of the key point coordinate records.
Optionally, the first generating unit includes:
the adding module is used for adding the key point coordinates of the current frame of the target vehicle frame and the key point coordinates of the current frame of the target license plate frame that have the association relation to a tracking queue;
and the generating module is used for acquiring, from the tracking queue, the key point coordinate record of the target vehicle frame and the key point coordinate record of the target license plate frame that have the association relation, and generating the key point coordinate record.
Optionally, the apparatus further comprises:
the second acquisition unit is used for acquiring a license plate image sample set, wherein images contained in the license plate image sample set are color images, and the image content in the license plate image sample set only contains one license plate;
the first labeling unit is used for labeling license plate key points contained in all images in the license plate image sample set, wherein the license plate key points comprise the four corner points of the license plate frame, so as to obtain a license plate labeling data set;
the first intercepting unit is used for intercepting a license plate image set from the license plate image sample set according to a license plate marking data set;
the first adjusting unit is used for adjusting the license plate image set according to the preset size of the license plate to obtain a license plate regression model training image set;
the first input unit is used for inputting the license plate regression model training images in the license plate regression model training image set into a CNN (convolutional neural network) regression network for training;
the second adjusting unit is used for adjusting model parameters of the initial license plate regression recognition model according to loss values output by the CNN regression network in the regression network training process until the loss values are converged to generate a first training result;
and the second generation unit is used for generating a license plate regression model according to the training result, and the license plate regression model is used for regressing the detected license plate frame.
Optionally, the apparatus further comprises:
the third acquisition unit is used for acquiring a vehicle image sample set, images contained in the vehicle image sample set are color images, and the image content in the vehicle image sample set only contains one vehicle;
the second labeling unit is used for labeling vehicle key points contained in all the images in the vehicle image sample set, wherein the vehicle key points comprise four corner points and midpoints of four edges of the vehicle frame, and a vehicle labeling data set is obtained;
the second intercepting unit is used for intercepting a vehicle image set from the vehicle image sample set according to a vehicle marking data set;
the third adjusting unit is used for adjusting the vehicle image set according to a preset vehicle size to obtain a vehicle regression model training image set;
the second input unit is used for inputting the vehicle regression model training images in the vehicle regression model training image set into a CNN (convolutional neural network) regression network for training;
the fourth adjusting unit is used for adjusting the model parameters of the initial vehicle regression recognition model according to the loss value output by the CNN regression network in the regression network training process until the loss value is converged to generate a second training result;
and the third generating unit is used for generating a vehicle regression model according to the training result, and the vehicle regression model is used for performing regression on the detected vehicle frame.
Optionally, the apparatus further comprises:
a second calculation unit for calculating an imaging angle of the target license plate frame and the target vehicle frame according to the first operation result and the second operation result;
the determining unit is used for determining the correlation of the first movement trend of the target license plate frame and the target vehicle frame according to the imaging angle.
A third aspect of the embodiments of the present application provides a device for determining a binding relationship between a target license plate frame and a target vehicle frame, including:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the processor specifically performs the same operations as in the foregoing first aspect.
According to the technical scheme, after the target license plate frame and the target vehicle frame are obtained, two movement trend correlation parameter checks are performed on them, which improves the accuracy of binding the vehicle and the license plate.
Drawings
FIG. 1 is a schematic flowchart illustrating an embodiment of a method for determining a binding relationship between a target license plate frame and a target vehicle frame in an embodiment of the present application;
FIG. 2 is a schematic flowchart of another embodiment of a method for determining a binding relationship between a target license plate frame and a target vehicle frame in the embodiment of the present application;
FIG. 3 is a schematic flow chart of an embodiment of training a license plate regression model in the method for determining the binding relationship between a target license plate frame and a target vehicle frame in the embodiment of the present application;
FIG. 4 is a schematic flowchart of an embodiment of training a regression model of a vehicle in the method for determining a binding relationship between a target license plate frame and a target vehicle frame in the embodiment of the present application;
FIG. 5 is a structural diagram illustrating an embodiment of an apparatus for determining a binding relationship between a target license plate frame and a target vehicle frame in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of another embodiment of an apparatus for determining a binding relationship between a target license plate frame and a target vehicle frame in the embodiment of the present application;
fig. 7 is a structural schematic diagram of another embodiment of the device for determining the binding relationship between the target license plate frame and the target vehicle frame in the embodiment of the application.
Detailed Description
The embodiment of the application provides a method and a device for determining a binding relationship between a target license plate frame and a target vehicle frame, which are used for improving the accuracy of binding a vehicle and a license plate.
In the following, the technical solutions in the embodiments of the present application will be clearly and completely described with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The execution subject of the embodiments of the application may be a server, a controller, or a terminal, all of which are devices with logic analysis and computing capability. The embodiments of the application will be described with a terminal as the execution subject.
Referring to fig. 1, an embodiment of the present application provides an embodiment of a method for determining a vehicle-license plate association, including:
101. acquiring a target image;
In an actual situation, the target image covers roadside parking spaces and the stretch of road close to them, and is captured while vehicles are moving. The target image is acquired when at least one vehicle and at least one license plate are present in it, which ensures that the acquired target image contains the license plate information of at least one vehicle with one license plate. An unlicensed vehicle has no vehicle-to-license-plate binding problem and is handled separately after being detected.
The target image is captured by a high-mounted video pile, which sends the image to the terminal after acquisition, so that the terminal obtains the target image and carries out the data processing of the subsequent steps on it.
102. Extracting a target vehicle frame and a target license plate frame in the target image through a target detection algorithm;
Specifically, after the terminal acquires the target image, it detects the license plate and the vehicle in the target image with a target detection algorithm and generates detection boxes for the detected vehicle and license plate, namely a vehicle frame and a license plate frame respectively.
103. Performing regression operation on the key points in the target vehicle frame and the key points in the target license plate frame according to a regression model to generate a first operation result and a second operation result;
specifically, the regression models are a vehicle regression model and a license plate regression model respectively, and are obtained by terminal training before a target image is obtained, the vehicle regression model is used for determining key points of a target vehicle frame, and the license plate regression model is used for determining key points of the target license plate frame. And respectively generating the moving trend of the key points of the vehicle frame and the moving trend of the key points of the license plate frame according to the key points.
104. Judging whether a first movement trend correlation parameter between the first operation result and the second operation result is larger than a first threshold value or not;
specifically, after the terminal determines the movement trends of the key points of the vehicle frame and the license plate frame, the terminal calculates a first movement trend correlation parameter of the two key points, compares the first movement trend correlation parameter with a first threshold of the movement trend correlation parameter, and if the comparison result shows that the first movement trend correlation parameter is greater than the first threshold, executes step 105.
105. If the first movement trend correlation parameter is larger than the first threshold value, judging whether a second movement trend correlation parameter between the target vehicle frame and the target license plate frame is larger than a second threshold value;
specifically, after the terminal determines that the first movement trend correlation parameter is greater than the first threshold, the terminal calculates a second movement trend correlation parameter between the target vehicle frame and the target license plate frame according to the multi-frame images, and determines whether the second movement trend correlation parameter is greater than the second threshold, and if the second movement trend correlation parameter is greater than the second threshold, step 106 is executed.
106. And if the second movement trend correlation parameter is larger than the second threshold value, binding the target vehicle frame and the target license plate frame.
Specifically, if the terminal determines that the movement trend correlation parameter satisfies the threshold condition in both checks, the target vehicle frame and the target license plate frame belong to the same target vehicle with very high probability, and the target vehicle frame and the target license plate frame are therefore bound.
According to the technical scheme, after the target license plate frame and the target vehicle frame are obtained, two movement trend correlation parameter checks are performed on them, which improves the accuracy of binding the vehicle and the license plate.
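The two-stage check of steps 104 to 106 can be sketched as follows; passing the second parameter as a callable makes explicit that it is only computed after the first check passes. All names and thresholds are illustrative.

```python
def bind(first_param, second_param_fn, t1, t2):
    """Two-stage binding decision: the second (multi-frame) movement trend
    correlation parameter is computed only when the first check passes."""
    if first_param <= t1:
        return False                  # step 104 fails: no binding
    return second_param_fn() > t2     # steps 105-106
```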
Referring to fig. 2, another embodiment of a method for determining a vehicle-license plate association according to an embodiment of the present application includes:
201. acquiring a target image;
step 201 in this embodiment is similar to step 101 in the previous embodiment, and is not described herein again.
202. Extracting a vehicle frame and a license plate frame in the target image through a target detection algorithm based on deep learning;
the deep-learning-based target detection algorithm includes, but is not limited to, YOLOv3, Detectron2, CenterNet, and the like; any of these may be used to identify the desired data in the target image, and no limitation is imposed here.
203. Respectively acquiring a first vehicle frame and a first license plate frame from all the detected vehicle frames and all the detected license plate frames;
specifically, the terminal obtains a vehicle frame and a license plate frame from all vehicle frames and all license plate frames of the target image, wherein the vehicle frame is a first vehicle frame, and the license plate frame is a first license plate frame.
204. Calculating a cross-over ratio of the first vehicle frame and the first license plate frame;
specifically, after the terminal acquires the first license plate frame and the first vehicle frame, the terminal performs an intersection ratio calculation between the first license plate frame and the first vehicle frame.
205. Respectively acquiring a second vehicle frame and a second license plate frame from all the vehicle frames and all the license plate frames;
specifically, this step is similar to step 203. The second license plate frame may be the same license plate frame as the first license plate frame, and the second vehicle frame may be the same vehicle frame as the first vehicle frame, but the two cannot simultaneously repeat the previous pairing of vehicle frame and license plate frame.
206. Calculating the intersection ratio of the second vehicle frame and the second license plate frame until the intersection ratio calculation of all the vehicle frames and all the license plate frames is completed;
specifically, the second license plate frame and the second vehicle frame refer to a license plate frame and a vehicle frame other than the first license plate frame and the first vehicle frame. The terminal pairs all the acquired vehicle frames and license plate frames in the target image one by one and performs an intersection ratio calculation for each pair, until the intersection ratio calculation for all pairing modes in the target image is completed.
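As one possible realization of the pairwise intersection ratio calculation in steps 203 to 206, the following sketch assumes axis-aligned boxes in (x1, y1, x2, y2) format; the box format is an assumption, since the patent does not fix one:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def pairwise_iou(vehicle_boxes, plate_boxes):
    """Steps 203-206: intersection ratio for every (vehicle, plate) pairing."""
    return {(i, j): iou(v, p)
            for i, v in enumerate(vehicle_boxes)
            for j, p in enumerate(plate_boxes)}
```

The overlap degree of step 207 is then derived from these values; since a plate box lies inside its vehicle box, a practical variant might use intersection over the plate area instead of the full union.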
207. And determining the overlapping degree according to the intersection ratio.
Specifically, after the overlap degrees of all the pairing modes are obtained, the terminal obtains a preset overlap degree threshold and compares each calculated overlap degree with the threshold. If the overlap degree is greater than the overlap degree threshold, step 208 is executed; if the overlap degree is less than the overlap degree threshold, it indicates that the license plate frame in that pairing mode does not belong to the vehicle frame.
208. Judging whether the overlapping degree is larger than a preset overlapping degree or not;
specifically, after the overlap degrees of all the vehicle frames and all the license plate frames in the target image are calculated, the terminal obtains, according to its preset overlap degree threshold, all the vehicle frames and license plate frames whose overlap degree is greater than the threshold from the calculated overlap degree data. The overlap degree threshold is used to judge whether a combination of a license plate frame and a vehicle frame is possibly associated: when the overlap degree of a combination is smaller than the preset value, the probability that the license plate belongs to the vehicle is very low, and no further processing is performed. If the overlap degree exceeds the preset value, there may be a relationship between the license plate frame and the vehicle frame, and step 209 is executed. Note that more than one vehicle frame and more than one license plate frame may appear in the target image, and the license plate in a license plate frame may be a candidate for one or more vehicles.
209. And if the overlap degree is greater than the preset overlap degree, determining that the vehicle frame is the target vehicle frame, and determining that the license plate frame is the target license plate frame.
Specifically, the terminal records all combinations whose overlap degree is greater than the overlap degree threshold; that is, every combination of a license plate frame and a vehicle frame with an overlap degree greater than the preset overlap degree is determined as a target license plate frame and a target vehicle frame.
210. Performing regression operation on key points of the target vehicle frame according to a vehicle regression model, wherein the vehicle key points comprise coordinates of four corner points of the target vehicle frame and coordinates of middle points of four edges of the target vehicle frame;
after the terminal determines the target license plate frame and the target vehicle frame, corresponding logic processing is respectively carried out on the target license plate frame and the target vehicle frame.
Specifically, the terminal obtains the key points of the target vehicle frame through a vehicle regression network model. The vehicle regression model is a model for obtaining vehicle key points; it is obtained by performing multi-sample training after corresponding parameter design on a CNN convolutional neural network. After the target image is input into the vehicle regression model, the coordinates of eight key points of the target vehicle frame are output, namely the coordinates of the four corner points of the target vehicle frame and the coordinates of the middle points of its four sides.
211. Shifting the vehicle key point relative to the central point of the target vehicle frame to generate a vehicle perspective transformation matrix of the target vehicle frame, and taking the vehicle perspective transformation matrix as the first operation result;
specifically, the central point is generated when the deep-learning-based target detection algorithm obtains the vehicle frame. After the terminal obtains the coordinate points of the target vehicle frame, it shifts them with the central point as a reference, converts the vehicle in the target vehicle frame into a horizontal image through the shifting, and calculates the coordinate data before and after shifting to generate the vehicle perspective transformation matrix of the target vehicle frame.
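A minimal sketch of the key point shifting and perspective matrix computation described above, assuming 2-D keypoints and using a plain least-squares homography solve as a stand-in for whatever solver the terminal actually uses:

```python
import numpy as np

def center_shift(points, center):
    """Steps 211/213: offset keypoints relative to the box center point."""
    c = np.asarray(center, dtype=float)
    return np.asarray(points, dtype=float) - c

def perspective_matrix(src, dst):
    """Solve the 3x3 perspective (homography) matrix H mapping four src
    points to four dst points via the direct linear transform, with the
    usual normalization h33 = 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)
```

In use, the shifted corner points would serve as `src` and the corners of a canonical horizontal rectangle as `dst`; the resulting matrix plays the role of the vehicle (or license plate) perspective transformation matrix.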
212. Performing regression operation on key points of the target license plate frame according to a license plate regression model, wherein the coordinates of the key points of the license plate comprise coordinates of four corner points of the target license plate frame;
specifically, similar to step 210, the terminal performs key point regression on the target license plate frame through the license plate regression model, where the license plate key points only include the four corner points of the target license plate frame.
213. And offsetting the license plate key point relative to the central point of the target license plate frame to generate a license plate perspective transformation matrix of the target license plate frame, and taking the license plate perspective transformation matrix as the second operation result.
Specifically, similar to step 211, the central point is generated when the deep-learning-based target detection algorithm obtains the license plate frame. After the terminal obtains the coordinate points of the target license plate frame, it shifts them with the central point as a reference, converts the license plate in the target license plate frame into a horizontal image through the shifting, and calculates the coordinate data before and after shifting to generate the license plate perspective transformation matrix of the target license plate frame.
214. Calculating an imaging angle between the target license plate frame and the target vehicle frame according to a first operation result and the second operation result;
specifically, the license plate perspective transformation matrix and the vehicle perspective transformation matrix are obtained after the terminal offsets the key points of the target license plate frame and the target vehicle frame with respect to the corresponding center points, and the terminal calculates the offset conditions in the two matrices, thereby obtaining the imaging angle of the target license plate frame and the target vehicle frame in the target image.
215. And determining the correlation of the first movement trend of the target license plate frame and the target vehicle frame according to the imaging angle.
Specifically, after the terminal calculates the imaging angle of the target license plate frame and the target vehicle frame in the target image, it determines the movement trends of the target license plate frame and the target vehicle frame respectively according to the imaging angle and the data in the license plate and vehicle perspective transformation matrices, so as to obtain the correlation of the movement trends of the two.
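The patent does not give a formula for the first movement trend correlation parameter; one hypothetical choice consistent with the description is the cosine similarity between the displacement (or angle) vectors derived for the two frames:

```python
import math

def movement_correlation(vehicle_vec, plate_vec):
    """Hypothetical movement trend correlation: cosine similarity of the
    vehicle-frame and plate-frame 2-D movement vectors (1.0 = same
    direction, 0.0 = orthogonal or degenerate)."""
    dot = sum(a * b for a, b in zip(vehicle_vec, plate_vec))
    na = math.hypot(*vehicle_vec)
    nb = math.hypot(*plate_vec)
    return dot / (na * nb) if na and nb else 0.0
```

The resulting value would then be compared against the first threshold in step 216.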
216. Judging whether a first movement trend correlation parameter between the first operation result and the second operation result is larger than a first threshold value or not;
step 216 in this embodiment is similar to step 104 in the previous embodiment, and is not described herein again.
217. If the first movement trend correlation parameter is larger than the first threshold value, associating a target license plate in the target license plate frame with a target vehicle in the target vehicle frame to generate an association record, wherein the association record comprises key point information of the target vehicle frame and key point information of the target license plate frame;
when the terminal determines through step 216 that the target license plate frame and the target vehicle frame have a movement trend correlation, the terminal associates the target vehicle in the target vehicle frame with the target license plate in the target license plate frame according to their information and positions, and generates an association record.
Specifically, because the license plates and vehicles acquired in the target image may yield a plurality of recognition results at the same time, neither the target license plate frame nor the target vehicle frame is necessarily unique; therefore, after analyzing the same target image, the terminal may generate more than one association record.
218. Adding the association record to a trace queue;
specifically, the tracking queue tracks key points of a target license plate and a target vehicle in the association record, and after the association record is determined, the tracking of the association record continuously acquires the coordinates of the key points with a fixed preset number of frames.
219. Adding the key point coordinates of the current frame of the target vehicle frame and the key point coordinates of the current frame of the target license plate frame which have the incidence relation to a tracking queue;
specifically, for each new frame, the terminal adds the current-frame key point coordinates of the target vehicle frame and the target license plate frame that have the association relationship to the tracking queue, until the coordinates for the preset number of frames have been acquired.
220. And acquiring the key point coordinate record of the target vehicle frame and the key point coordinate record of the target license plate frame which have the incidence relation from the tracking queue, and generating a key point coordinate record.
Specifically, the terminal records the key point information acquired by the tracking queue, and determines the moving trend of the target vehicle and the target license plate by recording the coordinates of the key points.
221. And calculating a correlation ratio between the target vehicle and the target license plate in the tracking queue according to the key point coordinate records, and taking the correlation ratio as a second movement trend correlation parameter, wherein the correlation ratio is the ratio of the key point coordinate records with the movement trend correlation to the total number of the key point coordinate records.
Specifically, after the tracking queue has acquired the coordinate changes over the preset number of frames, the terminal calculates the movement trend correlation from the coordinates of each pair of target vehicle and target license plate, obtains a calculation result over all acquired key points, and compares the number of results showing movement trend correlation with the total number of coordinate acquisitions. The resulting ratio of correlated records to the total number of acquired frames is the correlation ratio.
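The tracking queue of steps 218 to 221 and the correlation ratio can be sketched as follows; TRACK_LEN, the record layout, and the per-frame correlation test are all assumptions made for illustration:

```python
from collections import deque

TRACK_LEN = 10  # preset number of tracked frames (assumed value)

def correlation_ratio(records, is_correlated):
    """Ratio of key point coordinate records showing movement trend
    correlation to the total number of records (step 221)."""
    if not records:
        return 0.0
    return sum(1 for r in records if is_correlated(r)) / len(records)

def same_displacement(record):
    # Stand-in per-frame correlation test: vehicle and plate moved identically
    vehicle_delta, plate_delta = record
    return vehicle_delta == plate_delta

# Toy per-frame records: (vehicle displacement, plate displacement)
queue = deque(maxlen=TRACK_LEN)
for step in range(12):            # 12 frames arrive; the deque keeps the last 10
    queue.append(((step, 0), (step, 0)))

ratio = correlation_ratio(list(queue), same_displacement)  # second parameter
bound = ratio > 0.8               # assumed second threshold (step 222)
```

A bounded `deque` naturally keeps only the most recent preset number of frames, matching the fixed-length tracking described above.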
222. Judging whether a second movement trend correlation parameter between the target vehicle frame and the target license plate frame is larger than a second threshold value or not;
223. and if the second movement trend correlation parameter is larger than the second threshold value, binding the target vehicle frame and the target license plate frame.
In the embodiment of the application, the terminal tracks the target vehicle frame and the target license plate frame through the tracking queue, so as to obtain the data of the second movement trend correlation parameter, thereby improving the accuracy of the binding result of the target vehicle frame and the target license plate frame.
In the embodiment of the present application, the vehicle regression model mentioned in steps 103 and 210 is trained as follows; please refer to fig. 3. Before the target image is obtained, the vehicle regression model needs to be trained:
301. acquiring a vehicle image sample set, wherein images contained in the vehicle image sample set are color images, and the image content in the vehicle image sample set only contains one vehicle;
specifically, the vehicle image sample set is a sample used for training the vehicle regression model, and all vehicle images in the vehicle image sample set have one and only one vehicle.
302. Marking vehicle key points contained in all images in the vehicle image sample set, wherein the vehicle key points comprise four corner points and middle points of four edges of the vehicle frame, and obtaining a vehicle marking data set;
specifically, after the vehicle image sample set is obtained, the key points in the vehicle images are accurately marked manually, so that the key point coordinates later obtained for a target image through the vehicle regression network are more accurate.
303. Intercepting a vehicle image set from the vehicle image sample set according to a vehicle marking data set;
specifically, the vehicle image set is intercepted from the vehicle image sample set according to the labeling data of the vehicle labeling data set by using a program. The vehicle image set contains only one vehicle.
304. Adjusting the vehicle image set according to a preset vehicle size to obtain a vehicle regression model training image set;
specifically, all images in the vehicle image set are adjusted to a preset size, so that the input samples of the network have a consistent size and the dimension of the output result remains fixed.
305. Inputting the vehicle regression model training images in the vehicle regression model training image set into a CNN convolutional neural network regression network for training;
specifically, after the vehicle image sample set is subjected to size adjustment and labeling of the labeling points, the processed vehicle image set is input into a CNN (Convolutional Neural Networks) regression network for training.
306. In the regression network training process, model parameters of the initial vehicle regression model are adjusted according to the loss value output by the CNN regression network until the loss value converges, and a second training result is generated;
specifically, the structure of the regression network is as follows:
the input image size is 128 × 128 × 3.
First convolutional layer (conv1): the convolution kernel size is 3 × 3, padding is 1, the number of output feature maps is 3, and stride is 1;
first excitation layer (prelu 1): for introducing non-linear factors;
first pooling layer (Pool 1): the pooling mode is maximum pooling (Max), and stride is 2;
second convolution layer (conv 2): the convolution kernel size is 3 × 3, the number of output feature maps is 6, and stride is 1;
second excitation layer (prelu 2): for introducing non-linear factors;
second pooling layer (Pool 2): the pooling mode is maximum pooling (Max), and stride is 2;
third convolution layer (conv 3): the convolution kernel size is 3 × 3, the number of output feature maps is 24, and stride is 1;
third excitation layer (prelu 3): for introducing non-linear factors;
third pooling layer (Pool 3): the pooling mode is maximum pooling (Max), and stride is 2;
fourth convolution layer (conv 4): the convolution kernel size is 3 × 3, the number of output feature maps is 40, and stride is 1;
fourth excitation layer (prelu 4): for introducing non-linear factors;
fourth pooling layer (Pool 4): the pooling mode is maximum pooling (Max), and stride is 2;
fifth convolutional layer (conv 5): the convolution kernel size is 3 × 3, the number of output feature maps is 64, and stride is 1;
fifth excitation layer (prelu 5): for introducing non-linear factors;
sixth convolutional layer (conv 6): the convolution kernel size is 3 × 3, the number of output feature maps is 512, and stride is 1;
sixth excitation layer (prelu 6): for introducing non-linear factors;
fully connected layer (InnerProduct): and carrying out full connection processing on the data output by the sixth excitation layer and outputting 16 output dimensions for representing the coordinates of the key points of the vehicle frame.
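The layer listing above can be reconstructed as the following PyTorch sketch. Padding is stated only for conv1; assuming padding 0 for the remaining convolutions (an interpretation, not something the listing states) makes the dimensions work out to a 512 × 2 × 2 feature map before the fully connected layer and the stated 16 output dimensions:

```python
import torch
import torch.nn as nn

class VehicleKeypointNet(nn.Module):
    """Hedged reconstruction of the vehicle keypoint regression network."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 3, 3, stride=1, padding=1),   # conv1: 128 -> 128
            nn.PReLU(),                                # prelu1
            nn.MaxPool2d(2, stride=2),                 # pool1: 128 -> 64
            nn.Conv2d(3, 6, 3, stride=1),              # conv2: 64 -> 62
            nn.PReLU(),                                # prelu2
            nn.MaxPool2d(2, stride=2),                 # pool2: 62 -> 31
            nn.Conv2d(6, 24, 3, stride=1),             # conv3: 31 -> 29
            nn.PReLU(),                                # prelu3
            nn.MaxPool2d(2, stride=2),                 # pool3: 29 -> 14
            nn.Conv2d(24, 40, 3, stride=1),            # conv4: 14 -> 12
            nn.PReLU(),                                # prelu4
            nn.MaxPool2d(2, stride=2),                 # pool4: 12 -> 6
            nn.Conv2d(40, 64, 3, stride=1),            # conv5: 6 -> 4
            nn.PReLU(),                                # prelu5
            nn.Conv2d(64, 512, 3, stride=1),           # conv6: 4 -> 2
            nn.PReLU(),                                # prelu6
        )
        # 16 outputs = (x, y) for 4 corner points + 4 edge midpoints
        self.fc = nn.Linear(512 * 2 * 2, 16)           # InnerProduct

    def forward(self, x):
        x = self.features(x)
        return self.fc(x.flatten(1))
```

The license plate network of fig. 4 follows the same pattern with a smaller input, fewer layers, and 8 output dimensions for the four corner points.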
307. And generating a vehicle regression model according to the training result, wherein the vehicle regression model is used for performing regression on the detected vehicle frame.
Specifically, after the convergence of the initial vehicle regression model is determined, a vehicle regression model is generated, and the vehicle regression model is used for performing regression on the vehicle frame key points obtained by the vehicle detection model.
The training process of the license plate regression model mentioned in steps 103 and 212 in the embodiment of the present application is as follows; please refer to fig. 4. Before the target image is obtained, the license plate regression model needs to be trained:
401. acquiring a license plate image sample set, wherein images contained in the license plate image sample set are color images, and the image content in the license plate image sample set only contains one license plate;
specifically, the license plate image sample set is a sample used for training a license plate regression model, and all license plate images in the license plate image sample set have one and only one license plate.
402. Labeling license plate key points contained in all images in the license plate image sample set, wherein the license plate key points comprise four angular points of the license plate frame to obtain a license plate labeling data set;
specifically, after the license plate image sample set is obtained, the key points in the license plate images are accurately marked manually, so that the key point coordinates later obtained for a target image through the license plate regression network are more accurate.
403. Intercepting a license plate image set from the license plate image sample set according to a license plate marking data set;
specifically, a license plate image set is intercepted from the license plate image sample set by using a program according to the labeling data of the license plate labeling data set. The license plate image set contains only one license plate.
404. Adjusting the license plate image set according to a preset size of a license plate to obtain a license plate regression model training image set;
specifically, all images in the license plate image set are adjusted to a preset size, so that the input samples of the network have a consistent size and the dimension of the output result remains fixed.
405. Inputting the license plate regression model training images in the license plate regression model training image set into a CNN convolution neural network regression network for training;
specifically, after the license plate image sample set is subjected to size adjustment and marking of marking points, the processed license plate image set is input into a CNN (Convolutional Neural Networks) regression network for training.
406. In the regression network training process, model parameters of an initial license plate regression recognition model are adjusted according to loss values output by a CNN regression network until the loss values are converged, and a first training result is generated;
specifically, the structure of the regression network is as follows:
the input image size is 64 × 64 × 3.
First convolutional layer (conv1): the convolution kernel size is 3 × 3, the number of output feature maps is 3, and stride is 1;
first excitation layer (prelu 1): for introducing non-linear factors;
first pooling layer (Pool 1): the pooling mode is maximum pooling (Max), and stride is 2;
second convolution layer (conv 2): the convolution kernel size is 3 × 3, the number of output feature maps is 12, and stride is 1;
second excitation layer (prelu 2): for introducing non-linear factors;
second pooling layer (Pool 2): the pooling mode is maximum pooling (Max), and stride is 2;
third convolution layer (conv 3): the size of the convolution kernel is 3 × 3, the number of output feature maps is 20, and stride is 1;
third excitation layer (prelu 3): for introducing non-linear factors;
third pooling layer (Pool 3): the pooling mode is maximum pooling (Max), and stride is 2;
fourth convolution layer (conv 4): the convolution kernel size is 3 × 3, the number of output feature maps is 32, and stride is 1;
fourth excitation layer (prelu 4): for introducing non-linear factors;
fifth convolutional layer (conv 5): the size of the convolution kernel is 3 × 3, the number of output feature maps is 256, and stride is 1;
fifth excitation layer (prelu 5): for introducing non-linear factors;
fully connected layer (InnerProduct): and carrying out full connection processing on the data output by the fifth excitation layer and outputting 8 output dimensions for representing the coordinates of the key points of the license plate frame.
407. And generating a license plate regression model according to the training result, wherein the license plate regression model is used for performing regression on the detected license plate frame.
Specifically, after the convergence of an initial license plate regression model is determined, a license plate regression model is generated, and the license plate regression model is used for performing regression on license plate frame key points obtained by a license plate detection model.
The method for determining the binding relationship between the target license plate frame and the target vehicle frame in the embodiment of the present application has been described above; the related apparatus in the embodiment of the present application is described below:
referring to fig. 5, an embodiment of the present application provides an apparatus for determining a binding relationship between a target license plate frame and a target vehicle frame, including:
a first acquisition unit 501 for acquiring a target image;
an extracting unit 502, configured to extract a target vehicle frame and a target license plate frame in the target image through a target detection algorithm;
the regression unit 503 is configured to perform regression operation on the key points in the target vehicle frame and the key points in the target license plate frame according to a regression model to generate a first operation result and a second operation result;
a first determining unit 504, configured to determine whether a first movement trend correlation parameter between the first operation result and the second operation result is greater than a first threshold;
a second judging unit 505, configured to judge whether a second movement tendency correlation parameter between the target vehicle frame and the target license plate frame is greater than a second threshold value when the first judging unit judges that the first movement tendency correlation parameter is greater than the first threshold value;
a binding unit 506, configured to bind the target vehicle frame and the target license plate frame when the second movement tendency correlation parameter is greater than the second threshold value.
In this embodiment, the functions of the units correspond to the steps in the embodiment shown in fig. 1, and are not described herein again.
Referring to fig. 6, another embodiment of the apparatus for determining a binding relationship between a target license plate frame and a target vehicle frame according to the present application includes:
a first acquisition unit 601 configured to acquire a target image;
an extracting unit 602, configured to extract a target vehicle frame and a target license plate frame in the target image through a target detection algorithm;
the regression unit 603 is configured to perform regression operation on the key points in the target vehicle frame and the key points in the target license plate frame according to a regression model to generate a first operation result and a second operation result;
a second calculating unit 604 for calculating an imaging angle of the target license plate frame and the target vehicle frame according to the first operation result and the second operation result;
a determining unit 605, configured to determine a first movement trend correlation between the target license plate frame and the target vehicle frame according to the imaging angle.
A first determining unit 606, configured to determine whether a first movement trend correlation parameter between the first operation result and the second operation result is greater than a first threshold;
the association unit 607 is configured to associate a target license plate in the target license plate frame with a target vehicle in the target vehicle frame to generate an association record when the first determination unit determines that the first movement trend correlation parameter is greater than the first threshold, where the association record includes key point information of the target vehicle frame and key point information of the target license plate frame;
an adding unit 608, configured to add the association record to a trace queue;
a first generating unit 609, configured to generate, through the tracking queue, a key point coordinate record of the association record;
a first calculating unit 610, configured to calculate, according to the key point coordinate records, a correlation ratio between the target vehicle and the target license plate in the tracking queue, and use the correlation ratio as the second movement trend correlation parameter, where the correlation ratio is a ratio of the key point coordinate records having the movement trend correlation to a total number of the key point coordinate records.
A second judging unit 611 that judges whether a second movement tendency correlation parameter between the target vehicle frame and the target license plate frame is greater than a second threshold value;
a binding unit 612, configured to bind the target vehicle frame and the target license plate frame when the second movement tendency correlation parameter is greater than the second threshold.
Optionally, the extracting unit 602 includes:
an extraction module 6021, configured to extract a vehicle frame and a license plate frame in the target image through a target detection algorithm based on deep learning;
a calculating module 6022 for calculating the overlapping degree of the vehicle frame and the license plate frame;
a judging module 6023, configured to judge whether the overlapping degree is greater than a preset overlapping degree;
a determining module 6024, configured to determine that the vehicle frame is the target vehicle frame and determine that the license plate frame is the target license plate frame when the determination result of the determining module is greater than the preset overlap degree.
Optionally, the calculating module 6022 is further configured to:
respectively acquiring a first vehicle frame and a first license plate frame from all the detected vehicle frames and all the detected license plate frames;
calculating a cross-over ratio of the first vehicle frame and the first license plate frame;
respectively acquiring a second vehicle frame and a second license plate frame from all the vehicle frames and all the license plate frames;
calculating the intersection ratio of the second vehicle frame and the second license plate frame until the intersection ratio calculation of all the vehicle frames and all the license plate frames is completed;
and determining the overlapping degree according to the intersection ratio.
Optionally, the regression unit 603 includes:
a first regression module 6031, configured to perform regression operation on key points of the target vehicle frame according to a vehicle regression model, where the vehicle key points include coordinates of four corner points of the target vehicle frame and coordinates of middle points of four sides of the target vehicle frame;
a first shifting module 6032, configured to shift the vehicle key point relative to a central point of the target vehicle frame, generate a vehicle perspective transformation matrix of the target vehicle frame, and use the vehicle perspective transformation matrix as the first operation result;
a second regression module 6033, configured to perform regression operation on key points of the target license plate frame according to a license plate regression model, where coordinates of the license plate key points include coordinates of four corner points of the target license plate frame;
A second offset module 6034, configured to offset the license plate key point with respect to the central point of the target license plate frame, generate a license plate perspective transformation matrix of the target license plate frame, and use the license plate perspective transformation matrix as the second operation result.
Optionally, the first generating unit 609 includes:
an adding module 6091, configured to add the key point coordinates of the current frame of the target vehicle frame and the key point coordinates of the current frame of the target license plate frame, which have the association relationship, to a tracking queue;
a generating module 6092, configured to obtain, from the tracking queue, a key point coordinate record of the target vehicle frame and a key point coordinate record of the target license plate frame that have the association relationship, and generate a key point coordinate record.
Optionally, the apparatus further comprises:
a second obtaining unit 613, configured to obtain a license plate image sample set, where images included in the license plate image sample set are color images, and image contents in the license plate image sample set only include one license plate;
a first labeling unit 614, configured to label license plate key points included in all images in the license plate image sample set, where the license plate key points include four corner points of the license plate frame, so as to obtain a license plate labeling data set;
the first intercepting unit 615 is used for intercepting a license plate image set from the license plate image sample set according to a license plate marking data set;
a first adjusting unit 616, configured to adjust the license plate image set according to a preset size of a license plate, so as to obtain a license plate regression model training image set;
a first input unit 617, configured to input license plate regression model training images in the license plate regression model training image set into a convolutional neural network (CNN) regression network for training;
a second adjusting unit 618, configured to adjust, in the regression network training process, the model parameters of the initial license plate regression recognition model according to the loss value output by the CNN regression network until the loss value converges, so as to generate a first training result;
and a second generating unit 619, configured to generate the license plate regression model according to the first training result, where the license plate regression model is used to perform regression on the detected license plate frame.
Optionally, the apparatus further comprises:
a third obtaining unit 620, configured to obtain a vehicle image sample set, where images included in the vehicle image sample set are color images, and image contents in the vehicle image sample set include only one vehicle;
a second labeling unit 621, configured to label vehicle key points included in all images in the vehicle image sample set, where the vehicle key points include four corner points and midpoints of four edges of the vehicle frame, so as to obtain a vehicle labeling data set;
a second intercepting unit 622, configured to intercept the vehicle image set from the vehicle image sample set according to the vehicle labeling data set;
a third adjusting unit 623, configured to adjust the vehicle image set according to a preset vehicle size to obtain a vehicle regression model training image set;
a second input unit 624, configured to input the vehicle regression model training images in the vehicle regression model training image set into a convolutional neural network (CNN) regression network for training;
a fourth adjusting unit 625, configured to adjust, in the regression network training process, the model parameters of the initial vehicle regression recognition model according to the loss value output by the CNN regression network until the loss value converges, so as to generate a second training result;
a third generating unit 626, configured to generate the vehicle regression model according to the second training result, where the vehicle regression model is used to perform regression on the detected vehicle frame.
In this embodiment, the functions of each unit correspond to the steps in the embodiments shown in fig. 2, fig. 3, and fig. 4, and are not described herein again.
Referring to fig. 7, an embodiment of the present application provides an apparatus for determining a binding relationship between a target license plate frame and a target vehicle frame, including:
a processor 701, a memory 702, an input/output unit 703, a bus 704;
the processor 701 is connected to the memory 702, the input/output unit 703 and the bus 704;
the processor 701 specifically executes the operations corresponding to the method steps in fig. 1 to fig. 4.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Claims (10)
1. A method for determining a binding relationship between a target license plate frame and a target vehicle frame is characterized by comprising the following steps:
acquiring a target image;
extracting a target vehicle frame and a target license plate frame in the target image through a target detection algorithm;
performing regression operation on the key points in the target vehicle frame and the key points in the target license plate frame according to a regression model to generate a first operation result and a second operation result;
judging whether a first movement trend correlation parameter between the first operation result and the second operation result is larger than a first threshold value or not;
if the first movement trend correlation parameter is larger than the first threshold value, judging whether a second movement trend correlation parameter between the target vehicle frame and the target license plate frame is larger than a second threshold value;
and if the second movement trend correlation parameter is larger than the second threshold value, binding the target vehicle frame and the target license plate frame.
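Expressed procedurally, the two-threshold check of claim 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the helper names, the callable correlation measures, and the default threshold values are all assumed.

```python
def bind_if_correlated(first_result, second_result,
                       first_corr, second_corr,
                       first_threshold=0.5, second_threshold=0.5):
    """Illustrative two-stage binding decision (claim 1).

    first_corr / second_corr are caller-supplied functions returning the
    first and second movement trend correlation parameters; the threshold
    defaults are assumed values, not taken from the patent.
    """
    # Stage 1: correlation between the two regression operation results.
    if first_corr(first_result, second_result) <= first_threshold:
        return False
    # Stage 2: movement trend correlation between the two frames over time.
    if second_corr(first_result, second_result) <= second_threshold:
        return False
    # Both checks passed: bind the target vehicle frame and license plate frame.
    return True
```

The point of the sketch is the short-circuit structure: the cheaper first-stage check gates the second-stage temporal check, so unrelated frame pairs are rejected early.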
2. The method of claim 1, wherein the extracting a target vehicle frame and a target license plate frame in the target image by a target detection algorithm comprises:
extracting a vehicle frame and a license plate frame in the target image through a target detection algorithm based on deep learning;
calculating the overlapping degree of the vehicle frame and the license plate frame;
judging whether the overlapping degree is larger than a preset overlapping degree or not;
and if the overlap degree is greater than the preset overlap degree, determining that the vehicle frame is the target vehicle frame, and determining that the license plate frame is the target license plate frame.
3. The method of claim 2, wherein calculating the degree of overlap of the vehicle frame and the license plate frame comprises:
respectively acquiring a first vehicle frame and a first license plate frame from all the detected vehicle frames and all the detected license plate frames;
calculating an intersection ratio of the first vehicle frame and the first license plate frame;
respectively acquiring a second vehicle frame and a second license plate frame from all the vehicle frames and all the license plate frames;
calculating the intersection ratio of the second vehicle frame and the second license plate frame until the intersection ratio calculation of all the vehicle frames and all the license plate frames is completed;
and determining the overlapping degree according to the intersection ratio.
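The pairwise overlap computation of claim 3 can be sketched as below, under the assumption that "intersection ratio" means intersection-over-union of axis-aligned boxes; the function names and box format `(x1, y1, x2, y2)` are illustrative choices, not from the patent.

```python
def intersection_ratio(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    Assumed reading of the claim's 'intersection ratio'.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def pairwise_overlap(vehicle_boxes, plate_boxes):
    """Intersection ratio of every (vehicle frame, license plate frame) pair,
    mirroring the claim's 'until all pairs are computed' loop."""
    return [[intersection_ratio(v, p) for p in plate_boxes]
            for v in vehicle_boxes]
```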
4. The method of claim 1, wherein the generating a first operation result and a second operation result by performing a regression operation on the key points in the target vehicle frame and the key points in the target license plate frame according to a regression model comprises:
performing regression operation on key points of the target vehicle frame according to a vehicle regression model, wherein the vehicle key points comprise coordinates of four corner points of the target vehicle frame and coordinates of middle points of four edges of the target vehicle frame;
shifting the vehicle key point relative to the central point of the target vehicle frame to generate a vehicle perspective transformation matrix of the target vehicle frame, and taking the vehicle perspective transformation matrix as the first operation result;
performing regression operation on key points of the target license plate frame according to a license plate regression model, wherein the coordinates of the key points of the license plate comprise coordinates of four corner points of the target license plate frame;
and offsetting the license plate key point relative to the central point of the target license plate frame to generate a license plate perspective transformation matrix of the target license plate frame, and taking the license plate perspective transformation matrix as the second operation result.
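The offset-and-transform step of claim 4 can be illustrated as follows. This is a sketch, not the patent's method: it assumes the key points are offset by subtracting the frame's centre, and it derives a 3x3 perspective (homography) matrix from four point correspondences by direct linear transformation, analogous to OpenCV's `getPerspectiveTransform`; all names are illustrative.

```python
import numpy as np

def offset_keypoints(keypoints, box):
    """Express regressed key points relative to the frame's centre point
    (box given as x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    centre = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
    return np.asarray(keypoints, dtype=float) - centre

def perspective_matrix(src_corners, dst_corners):
    """Solve the 3x3 perspective matrix mapping 4 source corners onto
    4 destination corners via direct linear transformation."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_corners, dst_corners):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)  # h33 fixed to 1
```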
5. The method of claim 1, wherein prior to determining whether the second movement trend correlation parameter between the target vehicle frame and the target license plate frame is greater than a second threshold, the method further comprises:
associating a target license plate in the target license plate frame with a target vehicle in the target vehicle frame to generate an association record, wherein the association record comprises key point information of the target vehicle frame and key point information of the target license plate frame;
adding the association record to a trace queue;
generating a key point coordinate record of the association record through the tracking queue;
and calculating a correlation ratio between the target vehicle and the target license plate in the tracking queue according to the key point coordinate records, and taking the correlation ratio as a second movement trend correlation parameter, wherein the correlation ratio is the ratio of the key point coordinate records with the movement trend correlation to the total number of the key point coordinate records.
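The correlation ratio of claim 5 — the fraction of key point coordinate records showing a movement trend correlation — can be sketched as below. The use of per-frame displacement vectors and a cosine-similarity threshold is an assumed interpretation of "movement trend correlation", not stated in the patent.

```python
import math

def movement_trend_correlation(vehicle_deltas, plate_deltas, cos_threshold=0.9):
    """Ratio of records whose vehicle and plate displacement vectors point
    the same way (assumed cosine-similarity reading of claim 5)."""
    if not vehicle_deltas:
        return 0.0
    correlated = 0
    for (vx, vy), (px, py) in zip(vehicle_deltas, plate_deltas):
        nv, npl = math.hypot(vx, vy), math.hypot(px, py)
        if nv == 0 or npl == 0:
            continue  # a stationary record exhibits no movement trend
        if (vx * px + vy * py) / (nv * npl) > cos_threshold:
            correlated += 1
    # Ratio of correlated records to the total number of records.
    return correlated / len(vehicle_deltas)
```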
6. The method of claim 5, wherein generating, by the tracking queue, a keypoint coordinate record of the association record comprises:
adding the key point coordinates of the current frame of the target vehicle frame and the key point coordinates of the current frame of the target license plate frame, which have the association relationship, to a tracking queue;
and acquiring, from the tracking queue, the key point coordinate record of the target vehicle frame and the key point coordinate record of the target license plate frame that have the association relationship, and generating the key point coordinate record.
7. The method according to any one of claims 1 to 6, wherein before the target image is acquired, the license plate regression model needs to be trained, and the training process of the license plate regression model is as follows:
acquiring a license plate image sample set, wherein images contained in the license plate image sample set are color images, and the image content in the license plate image sample set only contains one license plate;
labeling license plate key points contained in all images in the license plate image sample set, wherein the license plate key points comprise four angular points of the license plate frame to obtain a license plate labeling data set;
intercepting a license plate image set from the license plate image sample set according to a license plate marking data set;
adjusting the license plate image set according to a preset size of a license plate to obtain a license plate regression model training image set;
inputting the license plate regression model training images in the license plate regression model training image set into a convolutional neural network (CNN) regression network for training;
in the regression network training process, adjusting model parameters of an initial license plate regression recognition model according to the loss value output by the CNN regression network until the loss value converges, to generate a first training result;
and generating the license plate regression model according to the first training result, wherein the license plate regression model is used for performing regression on the detected license plate frame.
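The adjust-parameters-until-the-loss-converges loop of claim 7 can be mirrored with a toy stand-in. This is emphatically not the patent's CNN: a single linear layer trained by gradient descent on mean-squared error stands in for the regression network, only to make the convergence criterion concrete; all names and hyperparameters are assumed.

```python
import numpy as np

def train_keypoint_regressor(images, targets, lr=0.01, tol=1e-6, max_epochs=1000):
    """Toy stand-in for the key-point regression training loop:
    a linear model trained with MSE loss until the loss converges."""
    x = np.asarray(images, float)    # (N, D) flattened inputs
    y = np.asarray(targets, float)   # (N, K) key point coordinates
    w = np.zeros((x.shape[1], y.shape[1]))
    prev_loss = float("inf")
    for _ in range(max_epochs):
        pred = x @ w
        loss = float(np.mean((pred - y) ** 2))
        if abs(prev_loss - loss) < tol:          # loss value has converged
            break
        w -= lr * 2 * x.T @ (pred - y) / len(x)  # adjust model parameters
        prev_loss = loss
    return w, loss
```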
8. The method according to any one of claims 1 to 6, wherein before the obtaining of the target image, the vehicle regression model needs to be trained, and the training process of the vehicle regression model is as follows:
acquiring a vehicle image sample set, wherein images contained in the vehicle image sample set are color images, and the image content in the vehicle image sample set only contains one vehicle;
marking vehicle key points contained in all images in the vehicle image sample set, wherein the vehicle key points comprise four corner points and middle points of four edges of the vehicle frame, and obtaining a vehicle marking data set;
intercepting a vehicle image set from the vehicle image sample set according to a vehicle marking data set;
adjusting the vehicle image set according to a preset vehicle size to obtain a vehicle regression model training image set;
inputting the vehicle regression model training images in the vehicle regression model training image set into a convolutional neural network (CNN) regression network for training;
in the regression network training process, adjusting model parameters of an initial vehicle regression recognition model according to the loss value output by the CNN regression network until the loss value converges, to generate a second training result;
and generating the vehicle regression model according to the second training result, wherein the vehicle regression model is used for performing regression on the detected vehicle frame.
9. The method according to any one of claims 1 to 6, wherein before determining whether the first movement tendency correlation parameter between the first operation result and the second operation result is greater than a first threshold, the method further comprises:
calculating an imaging angle between the target license plate frame and the target vehicle frame according to the first operation result and the second operation result;
and determining the correlation of the first movement trend of the target license plate frame and the target vehicle frame according to the imaging angle.
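One simplified reading of the imaging-angle test in claim 9 is sketched below. The patent derives the angle from the two perspective transformation matrices; this sketch instead compares the orientation of the bottom edge of each frame's corner points, which is an assumption, as are the corner ordering (TL, TR, BR, BL) and the tolerance value.

```python
import math

def edge_angle(p1, p2):
    """Orientation (radians) of the directed edge from p1 to p2."""
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0])

def imaging_angle(plate_corners, vehicle_corners):
    """Assumed imaging angle: difference between the bottom-edge
    orientations of the plate and vehicle frames (corners ordered
    TL, TR, BR, BL)."""
    a = edge_angle(plate_corners[3], plate_corners[2])
    b = edge_angle(vehicle_corners[3], vehicle_corners[2])
    d = abs(a - b)
    return min(d, 2 * math.pi - d)  # wrap into [0, pi]

def first_movement_trend_correlation(plate_corners, vehicle_corners,
                                     angle_tolerance=math.radians(10)):
    """Correlated when the two orientations agree within an assumed tolerance."""
    return imaging_angle(plate_corners, vehicle_corners) <= angle_tolerance
```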
10. An apparatus for determining a binding relationship between a target license plate frame and a target vehicle frame, comprising:
a first acquisition unit configured to acquire a target image;
the first extraction unit is used for extracting a target vehicle frame and a target license plate frame in the target image through a target detection algorithm;
the first regression unit is used for performing regression operation on the key points in the target vehicle frame and the key points in the target license plate frame according to a regression model to generate a first operation result and a second operation result;
a first judging unit, configured to judge whether a first movement trend correlation parameter between the first operation result and the second operation result is greater than a first threshold;
the second judging unit is used for judging whether a second movement trend correlation parameter between the target vehicle frame and the target license plate frame is greater than a second threshold value or not when the first judging unit judges that the first movement trend correlation parameter is greater than the first threshold value;
and the binding unit is used for binding the target vehicle frame and the target license plate frame when the second movement trend correlation parameter is larger than the second threshold value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110970069.7A CN113888740A (en) | 2021-08-23 | 2021-08-23 | Method and device for determining binding relationship between target license plate frame and target vehicle frame |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113888740A true CN113888740A (en) | 2022-01-04 |
Family
ID=79011184
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110970069.7A Pending CN113888740A (en) | 2021-08-23 | 2021-08-23 | Method and device for determining binding relationship between target license plate frame and target vehicle frame |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113888740A (en) |
- 2021-08-23: CN application CN202110970069.7A filed; published as CN113888740A (status: active, Pending)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114677774A (en) * | 2022-03-30 | 2022-06-28 | 深圳市捷顺科技实业股份有限公司 | Barrier gate control method and related equipment |
CN114677774B (en) * | 2022-03-30 | 2023-10-17 | 深圳市捷顺科技实业股份有限公司 | Barrier gate control method and related equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112528878B (en) | Method and device for detecting lane line, terminal equipment and readable storage medium | |
US20230154210A1 (en) | Method for recognizing license plate characters, electronic device and storage medium | |
CN110020592A (en) | Object detection model training method, device, computer equipment and storage medium | |
US11790499B2 (en) | Certificate image extraction method and terminal device | |
CN109087510A (en) | traffic monitoring method and device | |
CN104867225A (en) | Banknote face orientation identification method and apparatus | |
CN106780727B (en) | Vehicle head detection model reconstruction method and device | |
CN111091023A (en) | Vehicle detection method and device and electronic equipment | |
CN111126393A (en) | Vehicle appearance refitting judgment method and device, computer equipment and storage medium | |
CN115953744A (en) | Vehicle identification tracking method based on deep learning | |
CN112052807A (en) | Vehicle position detection method, device, electronic equipment and storage medium | |
CN113888740A (en) | Method and device for determining binding relationship between target license plate frame and target vehicle frame | |
CN117831002A (en) | Obstacle key point detection method and device, electronic equipment and storage medium | |
CN117184075A (en) | Vehicle lane change detection method and device and computer readable storage medium | |
CN113159158A (en) | License plate correction and reconstruction method and system based on generation countermeasure network | |
CN109977937B (en) | Image processing method, device and equipment | |
CN112634141A (en) | License plate correction method, device, equipment and medium | |
CN109543610B (en) | Vehicle detection tracking method, device, equipment and storage medium | |
CN114897987B (en) | Method, device, equipment and medium for determining vehicle ground projection | |
CN115115530B (en) | Image deblurring method, device, terminal equipment and medium | |
CN116468931A (en) | Vehicle part detection method, device, terminal and storage medium | |
CN108629786B (en) | Image edge detection method and device | |
CN112613402B (en) | Text region detection method, device, computer equipment and storage medium | |
CN111461128A (en) | License plate recognition method and device | |
CN112348044A (en) | License plate detection method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||