CN113610770A - License plate recognition method, device and equipment - Google Patents

License plate recognition method, device and equipment

Info

Publication number
CN113610770A
Authority
CN
China
Prior art keywords
license plate
image
image frames
matrix
marking area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110799265.2A
Other languages
Chinese (zh)
Inventor
吕翠文
邵明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110799265.2A priority Critical patent/CN113610770A/en
Publication of CN113610770A publication Critical patent/CN113610770A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a license plate recognition method, device, and equipment. The method includes: acquiring a plurality of original image frames captured after a vehicle reaches a preset acquisition area; performing vehicle detection and license plate detection on the original image frames to obtain a plurality of first image frames, each first image frame containing a vehicle marking area for the vehicle and a first license plate marking area for the license plate; expanding the vehicle marking areas contained in the first image frames, and cropping the corresponding first image frames according to the expanded vehicle marking areas to obtain a plurality of second image frames; performing license plate detection on the second image frames to obtain a plurality of third image frames, each third image frame containing a second license plate marking area for the license plate; and determining the license plate content based on the first and second license plate marking areas. The method reduces missed detections of license plates.

Description

License plate recognition method, device and equipment
Technical Field
The invention relates to the technical field of intelligent traffic, and in particular to a license plate recognition method, device, and equipment.
Background
License plate recognition is a technology that detects vehicles on a monitored road surface, automatically extracts their license plate information, and performs related processing on that information. Its applications are very wide: license plate recognition is needed in scenes such as highway toll management, automatic photographing of speeding violations, parking lot management, residential-area vehicle entry and exit management, and traffic data acquisition. With the development of artificial intelligence technologies such as deep learning, the intelligent traffic field is advancing rapidly and plays an increasingly important role in people's lives, which places higher requirements on the license plate recognition effect.
In prior-art license plate recognition schemes, the license plate region is located in the captured original image, and the license plate content is obtained by performing character recognition on that region.
Disclosure of Invention
The invention provides a license plate recognition method, device, and equipment, which address the high missed-detection rate of license plates in prior-art recognition schemes.
In a first aspect, the present invention provides a license plate recognition method, including:
acquiring a plurality of original image frames acquired after a vehicle reaches a preset acquisition area;
performing vehicle detection and license plate detection on the plurality of original image frames respectively to obtain a plurality of first image frames, each first image frame containing a vehicle marking area for the vehicle and a first license plate marking area for the license plate;
expanding the vehicle marking areas contained in the plurality of first image frames, and cropping the corresponding first image frames according to the expanded vehicle marking areas to obtain a plurality of second image frames;
performing license plate detection on the plurality of second image frames respectively to obtain a plurality of third image frames, each third image frame containing a second license plate marking area for the license plate;
determining license plate content of the license plate based on the first license plate marking area and the second license plate marking area.
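By way of a non-limiting illustration (not part of the claimed method), the expand-and-recrop step above can be sketched as follows; the expansion ratio, the box format, and the list-of-rows image representation are assumptions for the sketch, not values from the patent.

```python
def expand_box(box, img_w, img_h, ratio=0.1):
    """Enlarge a (x1, y1, x2, y2) vehicle marking area by `ratio` of its
    width/height on each side, clamped to the image bounds (ratio assumed)."""
    x1, y1, x2, y2 = box
    dw, dh = (x2 - x1) * ratio, (y2 - y1) * ratio
    return (max(0, int(x1 - dw)), max(0, int(y1 - dh)),
            min(img_w, int(x2 + dw)), min(img_h, int(y2 + dh)))

def crop(frame, box):
    """Crop a frame (list of pixel rows) to the given box to form a
    second image frame on which license plate detection is rerun."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in frame[y1:y2]]
```

Rerunning plate detection on the enlarged, vehicle-centered crop is what gives the second chance to find plates missed in the full frame.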
Optionally, after obtaining the plurality of third image frames, the method further comprises:
judging whether a corresponding second license plate marking area exists in each of the plurality of third image frames;
screening a fourth image frame with the license plate quality meeting the preset requirement from the plurality of third image frames when the corresponding second license plate marking area is determined to exist;
when the corresponding second license plate marking area does not exist, judging whether the corresponding first license plate marking area exists or not for each first image frame in the plurality of first image frames; and screening and obtaining a fourth image frame with the license plate quality meeting the preset requirement from the plurality of first image frames when the corresponding first license plate marking area is determined to exist.
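The fallback from second to first license plate marking areas described above can be sketched as follows; representing each frame as a dict with `second_area`/`first_area` keys is a hypothetical encoding, not from the patent.

```python
def candidate_frames(third_frames, first_frames):
    """Prefer frames whose re-detection produced a second license plate
    marking area; otherwise fall back to frames carrying only a first
    license plate marking area."""
    with_second = [f for f in third_frames if f.get("second_area") is not None]
    if with_second:
        return with_second
    return [f for f in first_frames if f.get("first_area") is not None]
```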
Optionally, determining the license plate content of the license plate based on the first license plate marking area and the second license plate marking area includes:
inputting the fourth image frame into a pre-trained license plate recognition model;
performing feature extraction on the input fourth image frame based on a first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing the dependency among the feature vectors of the feature matrix H1 based on an incidence matrix calculation layer in the license plate recognition model to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on an RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
predicting the character and probability corresponding to each time step t based on a first classifier in the license plate recognition model, sorting the characters with the maximum probability in time-step order to obtain and output the license plate content, and thereby obtaining n output license plate contents for the license plate.
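Taking the most probable character at each time step and ordering them by t is, in effect, greedy CTC-style decoding. The sketch below illustrates this; collapsing repeated symbols and dropping the blank is the standard CTC convention and is an assumption about the patent's decoder.

```python
def greedy_decode(probs, charset, blank=0):
    """probs: per-time-step probability vectors (one entry per character class).
    Pick the argmax class at each time step, collapse consecutive repeats,
    and drop the blank symbol to obtain the license plate string."""
    best = [max(range(len(p)), key=p.__getitem__) for p in probs]
    out, prev = [], None
    for k in best:
        if k != prev and k != blank:
            out.append(charset[k])
        prev = k
    return "".join(out)
```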
Optionally, after obtaining the output n license plate contents of the license plate, the method further includes:
and when it is determined that n is not smaller than a preset value, or that n is smaller than the preset value but not equal to 0 and the n license plate contents conform to a preset license plate coding rule, determining the license plate content of the license plate according to the n license plate contents.
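One plausible reading of "determining the license plate content according to the n license plate contents" is a majority vote over the n recognition results; the vote, the preset value, and the length-based coding rule below are all assumptions for illustration.

```python
from collections import Counter

def fuse_results(contents, preset=3, rule=lambda s: len(s) >= 7):
    """contents: the n recognized license plate strings.
    Accept when n >= preset, or when 0 < n < preset and every result
    satisfies the (assumed) coding rule; then return the majority string."""
    n = len(contents)
    if n >= preset or (0 < n < preset and all(rule(s) for s in contents)):
        return Counter(contents).most_common(1)[0][0]
    return None  # no reliable result; caller falls through to the state classifier
```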
Optionally, the pre-training of the license plate recognition model includes:
inputting image frames in pre-collected sample data into a license plate recognition model;
adjusting parameters of the license plate recognition model according to the Connectionist Temporal Classification (CTC) loss and the cumulative cross entropy (CCE) loss, so that the license plate recognition model outputs the license plate content corresponding to the image frame;
the sample data comprises an image frame containing a license plate region and license plate content corresponding to the image frame; the CTC loss is obtained by calculation according to the arrangement sequence of characters in the output license plate content and the arrangement sequence of characters in the license plate content corresponding to the image frame; and the CCE loss is calculated according to the occurrence frequency of the same character in the output license plate content and the occurrence frequency of the same character in the license plate content corresponding to the image frame.
Optionally, adjusting the parameters of the license plate recognition model according to the Connectionist Temporal Classification (CTC) loss includes:
performing feature extraction on the image frame in the sample data based on a first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing the dependency among the feature vectors of the feature matrix H1 based on an incidence matrix calculation layer in the license plate recognition model to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on an RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
predicting the character and probability corresponding to each time step t based on a first classifier in the license plate recognition model, and sorting the characters with the maximum probability in time-step order to obtain and output first license plate content;
calculating the CTC loss according to the order of the characters in the first license plate content and the order of the characters in the license plate content corresponding to the image frame in the sample data;
and adjusting the parameters of the license plate recognition model according to the CTC loss.
Optionally, the license plate recognition model further includes a second CNN feature extraction layer and a second classifier, and adjusting the parameters of the license plate recognition model according to the cumulative cross entropy (CCE) loss includes:
performing feature extraction on the feature matrix H1 based on the second CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H3;
predicting the character and probability corresponding to each time step t based on the second classifier in the license plate recognition model, and sorting the characters with the maximum probability in time-step order to obtain and output second license plate content;
calculating the CCE loss according to the number of occurrences of the same character in the second license plate content and in the license plate content corresponding to the image frame in the sample data;
and adjusting the parameters of the license plate recognition model according to the CCE loss.
Optionally, enhancing the dependency among the feature vectors of the feature matrix H1 to obtain a dependency matrix O includes:
calculating a distance matrix characterizing the distances between different feature vectors in the feature matrix H1;
calculating an incidence matrix characterizing the incidence relations between different feature vectors in the feature matrix H1;
calculating the product of the distance matrix and the incidence matrix to obtain a correlation matrix;
and calculating the product of the correlation matrix, the feature matrix H1, and a preset weight matrix to obtain the dependency matrix O.
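A minimal numpy sketch of these four steps, reading H1 as T feature vectors of dimension d. The concrete choices here (Euclidean distances, cosine similarity as the incidence relation, and the matrix-product order) are assumptions; the patent does not fix them.

```python
import numpy as np

def dependency_matrix(H1, W):
    """H1: (T, d) feature matrix; W: (d, d) preset weight matrix."""
    # Distance matrix: pairwise Euclidean distances between feature vectors.
    diff = H1[:, None, :] - H1[None, :, :]
    D = np.linalg.norm(diff, axis=-1)
    # Incidence matrix: cosine similarity between feature vectors (assumed measure).
    Hn = H1 / (np.linalg.norm(H1, axis=1, keepdims=True) + 1e-8)
    A = Hn @ Hn.T
    # Correlation matrix: product of the distance and incidence matrices.
    R = D @ A
    # Dependency matrix O: correlation matrix x H1 x preset weight matrix.
    return R @ H1 @ W
```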
Optionally, calculating the cumulative cross entropy (CCE) loss according to the number of occurrences of the same character in the second license plate content and in the license plate content corresponding to the image frame in the sample data includes:
calculating, for each character c, the cumulative probability of its occurrence over all positions in the feature matrix H3:

o_c = Σ_{i=1}^{T} y_i^c,

where y_i^c is the probability of the character c at position i of the feature matrix H3, i = 1, 2, …, T, T is the total number of positions in H3, c ∈ C ∪ {blank}, C is the character set of the license plate, and blank denotes the space (blank) character;
normalizing the cumulative probability o_c to obtain the normalized cumulative probability ō_c = o_c / T;
calculating the number of occurrences L_c of the character c in the license plate content corresponding to the image frame in the sample data, and normalizing L_c to obtain l_c = L_c / T;
computing the cumulative cross entropy CCE loss:

CCE(I, S) = − Σ_{c ∈ C ∪ {blank}} l_c · ln(ō_c),

where I is the input license plate image and S is the second license plate content.
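The original equations here are image placeholders; the formulas above are reconstructed from the surrounding text, and the sketch below mirrors that reconstruction (an aggregation-style cross entropy over character counts). Summing only over characters that actually occur in the label is a simplifying assumption of the sketch.

```python
import numpy as np

def cce_loss(y, label_ids, num_classes):
    """y: (T, K) per-position class probabilities from the classifier on H3.
    label_ids: ground-truth character ids in the license plate content."""
    T = y.shape[0]
    o = y.sum(axis=0)            # o_c: cumulative probability over all T positions
    o_bar = o / T                # normalized cumulative probability
    L = np.bincount(label_ids, minlength=num_classes).astype(float)
    l = L / T                    # normalized ground-truth occurrence counts
    mask = l > 0                 # only characters that occur in the label
    return float(-(l[mask] * np.log(o_bar[mask] + 1e-12)).sum())
```

Because it compares unordered character counts rather than sequences, this loss complements the order-sensitive CTC loss during training.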
Optionally, after obtaining the output n license plate contents of the license plate, the method further includes:
when it is determined that n is not smaller than a preset value, or that n is smaller than the preset value but not equal to 0 and the n license plate contents conform to a preset license plate coding rule, determining a license plate state label identifying the license plate state as normal;
and when it is determined that n is smaller than the preset value and not equal to 0 and the n license plate contents do not conform to the preset license plate coding rule, or that n is equal to 0, inputting the plurality of second image frames into a pre-trained license plate state classification model, obtaining output license plate state labels identifying the license plate state as normal or abnormal, and taking the label output by the license plate state classification model as the license plate state label.
Optionally, screening from the plurality of third image frames/first image frames to obtain a fourth image frame whose license plate quality meets a preset requirement includes:
when the second/first license plate marking area is determined to be a license plate image, calculating the sharpness of the license plate image, judging from the license plate image whether the license plate is abnormal, and calculating the inclination angle of the license plate;
and screening from the plurality of third image frames/first image frames a fourth image frame whose license plate quality meets the preset requirement, the preset requirement being that the sharpness is not less than a first preset threshold, the inclination angle is not greater than a second preset threshold, and the license plate is not abnormal.
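The combined quality gate reduces to a simple predicate; the threshold values below are assumed placeholders, as the patent only names "first" and "second" preset thresholds.

```python
def passes_quality(sharpness, tilt_deg, abnormal,
                   min_sharpness=0.5, max_tilt=15.0):
    """Preset requirement from the text: sharpness not below the first
    threshold, inclination not above the second threshold, and the
    license plate not abnormal (threshold values are assumptions)."""
    return sharpness >= min_sharpness and tilt_deg <= max_tilt and not abnormal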
Optionally, determining that the second license plate marking area is a license plate image includes:
when it is determined that the width-to-height ratio of the second license plate marking area is within a preset threshold range and its width is larger than a third preset threshold, expanding the second license plate marking area;
inputting the expanded second license plate marking area into a pre-trained classification model to judge whether it is a license plate image;
and when the output judgment result is that it is a license plate image, determining that the second license plate marking area is a license plate image.
Determining that the first license plate marking area is a license plate image includes:
when it is determined that the width-to-height ratio of the first license plate marking area is within the preset threshold range and its width is larger than the third preset threshold, expanding the first license plate marking area;
inputting the expanded first license plate marking area into the pre-trained classification model to judge whether it is a license plate image;
and when the output judgment result is that it is a license plate image, determining that the first license plate marking area is a license plate image.
Optionally, calculating the inclination angle of the license plate includes:
performing graying and threshold segmentation on the expanded second/first license plate marking area to obtain a binary image;
performing connected-domain segmentation and connected-domain marking on the binary image, determining each character in the binary image, and determining the highest point and the lowest point of each character;
performing straight-line fitting on the highest points and the lowest points respectively, determining a first straight line through the highest points and a second straight line through the lowest points, and calculating the average slope k of the slopes of the first and second straight lines;
and calculating the inclination angle θ of the license plate region from the average slope, where θ = atan(k).
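The line-fitting step can be sketched with a degree-1 least-squares fit; the sign of the resulting angle depends on the image coordinate convention, which the patent does not specify.

```python
import math
import numpy as np

def plate_tilt(tops, bottoms):
    """tops/bottoms: lists of (x, y) highest/lowest points, one per character.
    Fit a straight line through each set, average the two slopes to get k,
    and return theta = atan(k) in degrees."""
    k_top = np.polyfit([p[0] for p in tops], [p[1] for p in tops], 1)[0]
    k_bot = np.polyfit([p[0] for p in bottoms], [p[1] for p in bottoms], 1)[0]
    k = (k_top + k_bot) / 2.0
    return math.degrees(math.atan(k))
```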
Optionally, after determining whether the license plate is abnormal according to the license plate image, the method further includes:
calculating the total number of characters in the binary image corresponding to each license plate determined not to be abnormal;
and screening out the fourth image frames corresponding to license plates whose total number of characters is smaller than a fourth preset threshold.
Optionally, cropping the corresponding first image frame according to the expanded vehicle marking area includes:
cutting off the upper part of the corresponding first image frame according to the width of the expanded vehicle marking area and a preset aspect ratio.
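One way to read "cut off the upper part according to the width and a preset aspect ratio" is to keep a bottom region of height width / aspect_ratio; keeping the bottom (where the plate usually sits) and the ratio value are assumptions of this sketch.

```python
def crop_lower(frame, box_width, aspect_ratio=2.0):
    """frame: list of pixel rows of the first image frame.
    Keep the bottom box_width / aspect_ratio rows and discard the rest
    (the aspect ratio value and crop direction are assumptions)."""
    keep = min(len(frame), max(1, int(round(box_width / aspect_ratio))))
    return frame[len(frame) - keep:]
```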
In a second aspect, the present invention provides a license plate recognition device, comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor is used for reading the program in the memory and executing the following steps:
acquiring a plurality of original image frames acquired after a vehicle reaches a preset acquisition area;
performing vehicle detection and license plate detection on the plurality of original image frames respectively to obtain a plurality of first image frames, each first image frame containing a vehicle marking area for the vehicle and a first license plate marking area for the license plate;
expanding the vehicle marking areas contained in the plurality of first image frames, and cropping the corresponding first image frames according to the expanded vehicle marking areas to obtain a plurality of second image frames;
performing license plate detection on the plurality of second image frames respectively to obtain a plurality of third image frames, each third image frame containing a second license plate marking area for the license plate;
determining license plate content of the license plate based on the first license plate marking area and the second license plate marking area.
Optionally, after obtaining the plurality of third image frames, the processor is further configured to:
judging whether a corresponding second license plate marking area exists in each of the plurality of third image frames;
screening a fourth image frame with the license plate quality meeting the preset requirement from the plurality of third image frames when the corresponding second license plate marking area is determined to exist;
when the corresponding second license plate marking area does not exist, judging whether the corresponding first license plate marking area exists or not for each first image frame in the plurality of first image frames; and screening and obtaining a fourth image frame with the license plate quality meeting the preset requirement from the plurality of first image frames when the corresponding first license plate marking area is determined to exist.
Optionally, the processor determining the license plate content of the license plate based on the first license plate marking area and the second license plate marking area includes:
inputting the fourth image frame into a pre-trained license plate recognition model;
performing feature extraction on the input fourth image frame based on a first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing the dependency among the feature vectors of the feature matrix H1 based on an incidence matrix calculation layer in the license plate recognition model to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on an RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
predicting the character and probability corresponding to each time step t based on a first classifier in the license plate recognition model, sorting the characters with the maximum probability in time-step order to obtain and output the license plate content, and thereby obtaining n output license plate contents for the license plate.
Optionally, after obtaining the output n license plate contents of the license plate, the processor is further configured to:
and when it is determined that n is not smaller than a preset value, or that n is smaller than the preset value but not equal to 0 and the n license plate contents conform to a preset license plate coding rule, determining the license plate content of the license plate according to the n license plate contents.
Optionally, the processor trains the license plate recognition model in advance, including:
inputting image frames in pre-collected sample data into a license plate recognition model;
adjusting parameters of the license plate recognition model according to the Connectionist Temporal Classification (CTC) loss and the cumulative cross entropy (CCE) loss, so that the license plate recognition model outputs the license plate content corresponding to the image frame;
the sample data comprises an image frame containing a license plate region and license plate content corresponding to the image frame; the CTC loss is obtained by calculation according to the arrangement sequence of characters in the output license plate content and the arrangement sequence of characters in the license plate content corresponding to the image frame; and the CCE loss is calculated according to the occurrence frequency of the same character in the output license plate content and the occurrence frequency of the same character in the license plate content corresponding to the image frame.
Optionally, the processor adjusting the parameters of the license plate recognition model according to the Connectionist Temporal Classification (CTC) loss includes:
performing feature extraction on the image frame in the sample data based on a first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing the dependency among the feature vectors of the feature matrix H1 based on an incidence matrix calculation layer in the license plate recognition model to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on an RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
predicting the character and probability corresponding to each time step t based on a first classifier in the license plate recognition model, and sorting the characters with the maximum probability in time-step order to obtain and output first license plate content;
calculating the CTC loss according to the order of the characters in the first license plate content and the order of the characters in the license plate content corresponding to the image frame in the sample data;
and adjusting the parameters of the license plate recognition model according to the CTC loss.
Optionally, the license plate recognition model further includes a second CNN feature extraction layer and a second classifier, and the processor adjusting the parameters of the license plate recognition model according to the cumulative cross entropy (CCE) loss includes:
performing feature extraction on the feature matrix H1 based on the second CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H3;
predicting the character and probability corresponding to each time step t based on the second classifier in the license plate recognition model, and sorting the characters with the maximum probability in time-step order to obtain and output second license plate content;
calculating the CCE loss according to the number of occurrences of the same character in the second license plate content and in the license plate content corresponding to the image frame in the sample data;
and adjusting the parameters of the license plate recognition model according to the CCE loss.
Optionally, the processor enhancing the dependency among the feature vectors of the feature matrix H1 to obtain a dependency matrix O includes:
calculating a distance matrix characterizing the distances between different feature vectors in the feature matrix H1;
calculating an incidence matrix characterizing the incidence relations between different feature vectors in the feature matrix H1;
calculating the product of the distance matrix and the incidence matrix to obtain a correlation matrix;
and calculating the product of the correlation matrix, the feature matrix H1, and a preset weight matrix to obtain the dependency matrix O.
Optionally, the processor calculating the cumulative cross entropy (CCE) loss according to the number of occurrences of the same character in the second license plate content and in the license plate content corresponding to the image frame in the sample data includes:
calculating, for each character c, the cumulative probability of its occurrence over all positions in the feature matrix H3:

o_c = Σ_{i=1}^{T} y_i^c,

where y_i^c is the probability of the character c at position i of the feature matrix H3, i = 1, 2, …, T, T is the total number of positions in H3, c ∈ C ∪ {blank}, C is the character set of the license plate, and blank denotes the space (blank) character;
normalizing the cumulative probability o_c to obtain the normalized cumulative probability ō_c = o_c / T;
calculating the number of occurrences L_c of the character c in the license plate content corresponding to the image frame in the sample data, and normalizing L_c to obtain l_c = L_c / T;
computing the cumulative cross entropy CCE loss:

CCE(I, S) = − Σ_{c ∈ C ∪ {blank}} l_c · ln(ō_c),

where I is the input license plate image and S is the second license plate content.
Optionally, after obtaining the output n license plate contents of the license plate, the processor is further configured to:
when it is determined that n is not smaller than a preset value, or that n is smaller than the preset value but not equal to 0 and the n license plate contents conform to a preset license plate coding rule, determining a license plate state label identifying the license plate state as normal;
and when it is determined that n is smaller than the preset value and not equal to 0 and the n license plate contents do not conform to the preset license plate coding rule, or that n is equal to 0, inputting the plurality of second image frames into a pre-trained license plate state classification model, obtaining output license plate state labels identifying the license plate state as normal or abnormal, and taking the label output by the license plate state classification model as the license plate state label.
Optionally, the processor screening from the plurality of third image frames/first image frames to obtain a fourth image frame whose license plate quality meets a preset requirement includes:
when the second/first license plate marking area is determined to be a license plate image, calculating the sharpness of the license plate image, judging from the license plate image whether the license plate is abnormal, and calculating the inclination angle of the license plate;
and screening from the plurality of third image frames/first image frames a fourth image frame whose license plate quality meets the preset requirement, the preset requirement being that the sharpness is not less than a first preset threshold, the inclination angle is not greater than a second preset threshold, and the license plate is not abnormal.
Optionally, the processor determines that the second license plate indicia region is a license plate image, including:
when the width-to-height ratio of the second license plate marking area is determined to be within a preset threshold range and the width of the second license plate marking area is larger than a third preset threshold, expanding the second license plate marking area;
inputting the expanded second license plate marking area into a pre-trained classification model, and judging whether the second license plate marking area is a license plate image;
and when the output judgment result indicates a license plate image, determining that the second license plate marking area is a license plate image;
the processor determining that the first license plate indicia area is a license plate image, comprising:
when the width-to-height ratio of the first license plate marking area is determined to be within a preset threshold range and the width of the first license plate marking area is larger than a third preset threshold, expanding the first license plate marking area;
inputting the expanded first license plate marking area into a pre-trained classification model, and judging whether the first license plate marking area is a license plate image;
and when the judgment result that the license plate image is output is obtained, determining that the first license plate marking area is the license plate image.
Optionally, the processor calculates a tilt angle of the license plate, including:
graying and threshold segmentation are carried out on the expanded second license plate mark area/first license plate mark area to obtain a binary image;
carrying out connected domain segmentation and connected domain marking on the binary image, determining each character in the binary image, and determining the highest point and the lowest point of each character;
respectively performing straight line fitting on the highest points and the lowest points, determining a first straight line on which the highest points lie and a second straight line on which the lowest points lie, and calculating the average slope k of the slope of the first straight line and the slope of the second straight line;
and calculating the inclination angle θ of the license plate region according to the average slope, wherein θ = atan(k).
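The line-fitting steps above can be sketched in Python. The sketch assumes the connected-domain stage has already produced the highest and lowest point of each character; the least-squares fit is one plausible choice of "straight line fitting", which the patent does not specify:

```python
import math

def fit_slope(points):
    """Least-squares slope of a line through (x, y) points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

def plate_tilt_angle(top_points, bottom_points):
    """Fit a line through the characters' highest points and another
    through their lowest points, average the two slopes to get k, and
    return the tilt angle theta = atan(k), in radians."""
    k = (fit_slope(top_points) + fit_slope(bottom_points)) / 2.0
    return math.atan(k)
```

For a plate whose character tops and bottoms both rise with slope 0.1, the function returns atan(0.1), matching the formula θ = atan(k).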
Optionally, after determining whether the license plate is abnormal according to the license plate image, the processor is further configured to:
calculating the total number of characters in the binary image corresponding to the determined non-abnormal license plate;
and screening out fourth image frames corresponding to the license plates with the total number smaller than a fourth preset threshold value.
Optionally, the processor crops the corresponding first image frame according to the enlarged vehicle marking region, including:
and cutting off the upper half part of the corresponding first image frame according to the width of the enlarged vehicle mark area and the preset aspect ratio.
In a third aspect, the present invention provides a license plate recognition apparatus, including:
the image frame acquisition unit is used for acquiring a plurality of original image frames acquired after the vehicle reaches a preset acquisition area;
the first detection unit is used for respectively carrying out vehicle detection and license plate detection on the multiple original image frames to obtain multiple first image frames, and the first image frames comprise a vehicle mark area aiming at a vehicle and a first license plate mark area aiming at a license plate;
the cutting unit is used for expanding vehicle marking areas contained in the first image frames and cutting the corresponding first image frames according to the expanded vehicle marking areas to obtain second image frames;
the second detection unit is used for respectively carrying out license plate detection on the plurality of second image frames to obtain a plurality of third image frames, and the third image frames comprise second license plate marking areas aiming at license plates;
a content determination unit configured to determine license plate content of the license plate based on the first license plate marking area and the second license plate marking area.
Optionally, after obtaining a plurality of third image frames, the content determining unit is further configured to:
judging whether a corresponding second license plate marking area exists in each of the plurality of third image frames;
screening a fourth image frame with the license plate quality meeting the preset requirement from the plurality of third image frames when the corresponding second license plate marking area is determined to exist;
when the corresponding second license plate marking area does not exist, judging whether the corresponding first license plate marking area exists or not for each first image frame in the plurality of first image frames; and screening and obtaining a fourth image frame with the license plate quality meeting the preset requirement from the plurality of first image frames when the corresponding first license plate marking area is determined to exist.
Optionally, the content determining unit determines the license plate content of the license plate based on the first license plate marking region and the second license plate marking region, including:
inputting the fourth image frame into a pre-trained license plate recognition model;
performing feature extraction on the input fourth image frame based on a first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing the dependency among the feature vectors in the feature matrix H1 based on an incidence matrix calculation layer in the license plate recognition model to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on an RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
and predicting the character and probability corresponding to each time step t based on a first classifier in the license plate recognition model, sorting the characters with the maximum probability according to the time steps to obtain the license plate content, and outputting it, so as to obtain the n output license plate contents of the license plate.
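The per-time-step prediction and ordering step can be sketched as greedy decoding in Python. The collapse of repeated characters and removal of blanks is the standard CTC greedy-decoding convention, assumed here; the patent only states that the maximum-probability characters are ordered by time step:

```python
BLANK = "-"  # assumed blank symbol

def greedy_decode(timestep_probs, alphabet):
    """timestep_probs: T x |alphabet| per-time-step probabilities from
    the classifier. Take the argmax character at each time step, then
    collapse consecutive repeats and drop blanks (CTC-style)."""
    raw = []
    for probs in timestep_probs:
        best = max(range(len(alphabet)), key=lambda j: probs[j])
        raw.append(alphabet[best])
    out, prev = [], None
    for ch in raw:
        if ch != prev and ch != BLANK:
            out.append(ch)
        prev = ch
    return "".join(out)
```

For example, the per-step argmax sequence "AA-B88" collapses to the plate content "AB8".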
Optionally, after obtaining the output n license plate contents of the license plate, the content determining unit is further configured to:
and when the n is determined to be not smaller than a preset value, or the n is determined to be smaller than the preset value and not equal to 0, and the n license plate contents accord with a preset license plate coding rule, determining the license plate contents of the license plate according to the n license plate contents.
Optionally, the pre-training of the license plate recognition model by the content determination unit includes:
inputting image frames in pre-collected sample data into a license plate recognition model;
adjusting parameters of the license plate recognition model according to the connectionist temporal classification (CTC) loss and the cumulative cross entropy (CCE) loss, so that the license plate recognition model outputs the license plate content corresponding to the image frame;
the sample data comprises image frames containing a license plate region and the license plate content corresponding to each image frame; the CTC loss is calculated according to the arrangement sequence of the characters in the output license plate content and the arrangement sequence of the characters in the license plate content corresponding to the image frame; and the CCE loss is calculated according to the number of occurrences of the same character in the output license plate content and the number of occurrences of the same character in the license plate content corresponding to the image frame.
Optionally, the adjusting, by the content determination unit, the parameters of the license plate recognition model according to the connectionist temporal classification (CTC) loss includes:
performing feature extraction on the image frames in the sample data based on a first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing the dependency among the feature vectors in the feature matrix H1 based on an incidence matrix calculation layer in the license plate recognition model to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on an RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
predicting the character and probability corresponding to each time step t based on a first classifier in the license plate recognition model, and sorting the characters with the maximum probability according to the time steps to obtain and output the first license plate content;
calculating the CTC loss according to the arrangement sequence of the characters in the first license plate content and the arrangement sequence of the characters in the license plate content corresponding to the image frame in the sample data;
and adjusting parameters of the license plate recognition model according to the CTC loss.
Optionally, the license plate recognition model further includes a second CNN feature extraction layer and a second classifier, and the adjusting, by the content determining unit, parameters of the license plate recognition model according to the cumulative cross-entropy CCE loss includes:
performing feature extraction on the feature matrix H1 based on a second CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H3;
predicting the character and probability corresponding to each time step t based on a second classifier in the license plate recognition model, and sorting the characters with the maximum probability according to the time steps to obtain and output the second license plate content;
calculating the cumulative cross entropy CCE loss according to the occurrence times of the same characters in the second license plate content and the occurrence times of the same characters in the license plate content corresponding to the image frame in the sample data;
and adjusting parameters of the license plate recognition model according to the CCE loss.
Optionally, the enhancing, by the content determination unit, the dependency among the feature vectors in the feature matrix H1 to obtain a dependency matrix O includes:
calculating a distance matrix characterizing the distances between different feature vectors in the feature matrix H1;
calculating an incidence matrix characterizing the incidence relations between different feature vectors in the feature matrix H1;
calculating the product of the distance matrix and the incidence matrix to obtain a correlation matrix;
and calculating the product of the correlation matrix, the feature matrix H1 and a preset weight matrix to obtain the dependency matrix O.
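A minimal Python sketch of these four steps follows. The patent fixes neither the distance metric, the incidence formula, nor whether the distance/incidence "product" is elementwise; the choices below (Euclidean distances, row-softmax of pairwise dot products as the incidence matrix, elementwise product) are assumptions for illustration only:

```python
import math

def matmul(A, B):
    """Plain matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def dependency_matrix(H1, W):
    """H1: T x d feature matrix (T feature vectors); W: d x d preset
    weight matrix. Returns the T x d dependency matrix O."""
    T = len(H1)
    # distance matrix: Euclidean distances between feature vectors
    D = [[math.dist(H1[i], H1[j]) for j in range(T)] for i in range(T)]
    # incidence matrix: row-softmax over pairwise dot products (assumed)
    dots = matmul(H1, [list(r) for r in zip(*H1)])
    A = []
    for row in dots:
        m = max(row)
        e = [math.exp(v - m) for v in row]
        s = sum(e)
        A.append([v / s for v in e])
    # correlation matrix: elementwise product of D and A (assumed)
    corr = [[D[i][j] * A[i][j] for j in range(T)] for i in range(T)]
    # O = corr @ H1 @ W
    return matmul(matmul(corr, H1), W)
```

With T feature vectors of dimension d, the result keeps the T x d shape of H1, so it can feed the RNN feature extraction layer unchanged.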
Optionally, the calculating, by the content determining unit, cumulative cross entropy CCE loss according to the number of occurrences of the same character in the second license plate content and the number of occurrences of the same character in the license plate content corresponding to the image frame in the sample data includes:
calculating, for each character c, the cumulative probability o_c of the character c occurring over all positions in the feature matrix H3:
o_c = Σ_{i=1}^{T} y_c^i, c ∈ C ∪ {ε},
wherein y_c^i is the probability that the character c occurs at position i of the feature matrix H3, i = 1, 2, …, T, T is the total number of positions in H3, C is the character set of the license plate, and ε is a space (blank);
normalizing the cumulative probability o_c to obtain the normalized cumulative probability ô_c = o_c / Σ_{c′∈C∪{ε}} o_{c′};
calculating the number of occurrences L_c of the character c in the license plate content corresponding to the image frame in the sample data, and normalizing L_c to obtain l_c = L_c / Σ_{c′∈C∪{ε}} L_{c′};
calculating the cumulative cross entropy CCE loss L_CCE(I, S) = −Σ_{c∈C∪{ε}} l_c · log(ô_c);
wherein I is the input license plate image and S is the second license plate content.
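The CCE computation above can be sketched in Python. The convention that blanks account for the T − len(S) positions not covered by ground-truth characters is an assumption made to normalize the counts, and the loss is summed only over characters with nonzero count to avoid log(0):

```python
import math

def cce_loss(timestep_probs, target, alphabet, blank="-"):
    """Cumulative cross-entropy loss.

    timestep_probs: T x |alphabet| per-position probabilities y_c^i
        taken from the feature matrix H3.
    target: ground-truth plate content S for the input image I.
    """
    T = len(timestep_probs)
    # cumulative probability o_c = sum_i y_c^i for each character c
    o = {c: sum(timestep_probs[i][j] for i in range(T))
         for j, c in enumerate(alphabet)}
    total_o = sum(o.values())
    o_hat = {c: v / total_o for c, v in o.items()}
    # normalized ground-truth counts l_c; blanks pad S to length T (assumed)
    counts = {c: target.count(c) for c in alphabet}
    counts[blank] = T - len(target)
    total_l = sum(counts.values())
    return -sum((counts[c] / total_l) * math.log(o_hat[c])
                for c in alphabet if counts[c] > 0)
```

When the predicted cumulative distribution matches the normalized counts, the loss reduces to the entropy of the counts; predictions that over-count one character raise it, which is what lets this loss penalize repeated-character errors that CTC alone misses.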
Optionally, after obtaining the output n license plate contents of the license plate, the content determining unit is further configured to:
when the n is determined to be not smaller than a preset value, or when the n is determined to be smaller than the preset value and not equal to 0 and the n license plate contents meet a preset license plate coding rule, determining a license plate state label for identifying the license plate state to be normal;
and when n is determined to be smaller than the preset value and not equal to 0 and the n license plate contents do not conform to the preset license plate coding rule, or when n is determined to be equal to 0, inputting the plurality of second image frames respectively into a pre-trained license plate state classification model, and determining the license plate state label as the label output by the license plate state classification model for identifying the license plate state as normal/abnormal.
Optionally, the screening, by the content determining unit, a fourth image frame with a license plate quality meeting a preset requirement from the plurality of third image frames/first image frames includes:
when the second license plate marking area/the first license plate marking area is determined to be a license plate image, calculating the definition of the license plate image, judging whether the license plate is abnormal according to the license plate image, and calculating the inclination angle of the license plate;
and screening a fourth image frame with the license plate quality meeting a preset requirement from the plurality of third image frames/first image frames, wherein the preset requirement is that the definition is not less than a first preset threshold, the inclination angle is not greater than a second preset threshold, and the license plate is not abnormal.
Optionally, the content determining unit determines that the second license plate indicia region is a license plate image, including:
when the width-to-height ratio of the second license plate marking area is determined to be within a preset threshold range and the width of the second license plate marking area is larger than a third preset threshold, expanding the second license plate marking area;
inputting the expanded second license plate marking area into a pre-trained classification model, and judging whether the second license plate marking area is a license plate image;
and when the output judgment result indicates a license plate image, determining that the second license plate marking area is a license plate image;
the content determination unit determines that the first license plate indicia area is a license plate image, including:
when the width-to-height ratio of the first license plate marking area is determined to be within a preset threshold range and the width of the first license plate marking area is larger than a third preset threshold, expanding the first license plate marking area;
inputting the expanded first license plate marking area into a pre-trained classification model, and judging whether the first license plate marking area is a license plate image;
and when the judgment result that the license plate image is output is obtained, determining that the first license plate marking area is the license plate image.
Optionally, the calculating the tilt angle of the license plate by the content determining unit includes:
graying and threshold segmentation are carried out on the expanded second license plate mark area/first license plate mark area to obtain a binary image;
carrying out connected domain segmentation and connected domain marking on the binary image, determining each character in the binary image, and determining the highest point and the lowest point of each character;
respectively performing straight line fitting on the highest points and the lowest points, determining a first straight line on which the highest points lie and a second straight line on which the lowest points lie, and calculating the average slope k of the slope of the first straight line and the slope of the second straight line;
and calculating the inclination angle θ of the license plate region according to the average slope, wherein θ = atan(k).
Optionally, after determining whether the license plate is abnormal according to the license plate image, the content determining unit is further configured to:
calculating the total number of characters in the binary image corresponding to the determined non-abnormal license plate;
and screening out fourth image frames corresponding to the license plates with the total number smaller than a fourth preset threshold value.
Optionally, the cropping unit crops the corresponding first image frame according to the enlarged vehicle marking region, including:
and cutting off the upper half part of the corresponding first image frame according to the width of the enlarged vehicle mark area and the preset aspect ratio.
In a fourth aspect, the present invention provides a computer program medium having a computer program stored thereon, which, when executed by a processor, implements the steps of the license plate recognition method provided in the first aspect above.
In a fifth aspect, the present invention provides a chip, which is coupled to a memory in a device, so that the chip, when running, calls the program instructions stored in the memory to implement the above aspects of the embodiments of the present application and any method that the aspects may involve.
In a sixth aspect, the present invention provides a computer program product, which, when run on an electronic device, causes the electronic device to perform a method for implementing the above aspects of the embodiments of the present application and any possible aspects related thereto.
The license plate recognition method, the license plate recognition device and the license plate recognition equipment have the following beneficial effects:
the problem of license plate missing report caused by cutting off the license plate due to inaccurate vehicle detection can be solved.
Drawings
Fig. 1 is a schematic view of an application scenario of a license plate recognition method according to an embodiment of the present invention;
fig. 2 is a flowchart of a license plate recognition method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a license plate quality evaluation model according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a license plate recognition model according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a license plate image frame including similar characters according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a license plate recognition model training method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an image frame of a passively shielded license plate according to an embodiment of the present invention;
fig. 8 is a flowchart of a specific implementation of a license plate recognition method according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a license plate recognition device according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a license plate recognition device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein.
The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In the description of the embodiments of the present application, "a plurality of" means two or more, and other similar terms should be understood similarly. The preferred embodiments described herein are only used for explaining the present application and are not intended to limit it, and features in the embodiments and examples of the present application may be combined with each other without conflict.
Hereinafter, some terms in the embodiments of the present invention are explained to facilitate understanding by those skilled in the art.
(1) The term "Convolutional Neural Networks (CNN)" in the embodiments of the present invention is a type of feed-forward Neural network that includes convolution calculation and has a deep structure, and is one of the representative algorithms for deep learning.
(2) In the embodiment of the present invention, the term "Recurrent Neural Network (RNN)" is a type of Recurrent Neural Network that takes sequence data as input, recurses in the evolution direction of the sequence, and all nodes are connected in a chain manner.
(3) The term "single-stage detection method" in the embodiment of the present invention refers to a target detection algorithm that combines extraction and detection into one without explicitly giving a process of extracting a candidate region, and directly obtains a final detection result, and the detection speed is often faster.
(4) In the embodiment of the present invention, the term "You Only Look Once (YOLO) algorithm" refers to an algorithm that treats the target detection problem as a regression problem and uses a convolutional neural network structure to directly predict the position and class probability of an object from an input image.
(5) The term "two-stage detection method" in the embodiments of the present invention refers to a target detection algorithm that generates a target candidate frame first and then classifies the candidate frame.
(6) The term AlexNet classification model in the embodiment of the invention is a classification model which deepens the structure of a network and learns richer and higher-dimensional image characteristics on the basis of the LeNet classification model.
(7) The term "Visual Geometry Group Net (VGGNet) classification model" in the embodiment of the present invention is a deep convolutional neural network developed by the Visual Geometry Group at the University of Oxford, which explores the relationship between the depth and the performance of a convolutional neural network; by repeatedly stacking small 3 x 3 convolution kernels and 2 x 2 max-pooling layers, convolutional neural networks of 16 to 19 layers in depth were successfully constructed.
(8) The term 'Deep residual network (ResNet) classification model' in the embodiment of the invention solves the problem that a Deep convolutional neural network model is difficult to train, and greatly improves the network depth.
(9) The term "Convolutional Recurrent Neural Network (CRNN)" in the embodiment of the present invention is mainly used to identify a text sequence of an indefinite length end to end, and the text identification is converted into a sequence learning problem depending on a time sequence without cutting a single character, so that the sequence identification based on an image can be implemented.
In view of the above problems in the license plate recognition scheme in the prior art, the present application provides a license plate recognition method, device and apparatus.
A license plate recognition method, a license plate recognition device and license plate recognition equipment in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an embodiment of the present invention provides a schematic diagram of an application scenario of a license plate recognition method, including:
the image acquisition device 101 is used for acquiring a plurality of original image frames after the vehicle reaches a preset acquisition area;
the license plate recognition device 102 is used for acquiring a plurality of original image frames acquired by the image acquisition device after the vehicle reaches a preset acquisition area; respectively carrying out vehicle detection and license plate detection on the multiple original image frames to obtain multiple first image frames, wherein the first image frames comprise a vehicle mark area aiming at a vehicle and a first license plate mark area aiming at a license plate; expanding vehicle marking areas contained in the plurality of first image frames, and cutting the corresponding first image frames according to the expanded vehicle marking areas to obtain a plurality of second image frames; respectively carrying out license plate detection on the plurality of second image frames to obtain a plurality of third image frames, wherein the third image frames comprise second license plate marking areas aiming at license plates; and determining the license plate content of the license plate based on the first license plate marking area and the second license plate marking area.
The image capturing device is installed near the preset capturing area.
The image acquisition equipment and the license plate recognition equipment are in communication connection through a network, and the network can be a local area network, a wide area network and the like.
After the image acquisition equipment acquires a plurality of original image frames, the original image frames are stored or sent to the license plate recognition equipment.
The embodiment of the present invention does not limit the specific type of the image capturing device, and any device that can implement the original image frame capturing function may be applied to the embodiment of the present invention.
The embodiment of the present invention likewise does not limit the specific type of the license plate recognition device, and any device that can implement the related functions of the license plate recognition device may be applied to the embodiment of the present invention.
It should be noted that the application scenario is only an example of an application scenario of the license plate recognition method provided in the embodiment of the present invention, and does not constitute a specific limitation to the embodiment of the present invention, and on the basis of the application scenario, some entities may be added or deleted, for example, the image capturing device 101 may be one or more, and all functions of the image capturing device and the license plate recognition device may be integrated on one device, and the like.
The embodiment of the invention provides a flow chart of a license plate identification method, as shown in fig. 2, comprising the following steps:
step S201, acquiring a plurality of original image frames acquired after a vehicle reaches a preset acquisition area;
the original image frame may be obtained in real time, or may be obtained by obtaining a pre-stored original image frame.
The number of the acquired original image frames can be specifically set according to specific implementation conditions, and as an optional implementation mode, the number of the acquired original image frames is greater than 3.
Step S202, respectively carrying out vehicle detection and license plate detection on the multiple original image frames to obtain multiple first image frames, wherein the first image frames comprise a vehicle mark area for a vehicle and a first license plate mark area for a license plate;
and respectively inputting the plurality of original image frames into a pre-trained target detection model, simultaneously detecting the vehicle and the license plate, and outputting a vehicle detection result and a license plate detection result.
The vehicle detection is used for detecting vehicle coordinates, and the vehicle detection result is the vehicle mark area for the vehicle; the license plate detection is used for detecting license plate coordinates, and the license plate detection result is the first license plate marking area aiming at the license plate.
As an alternative embodiment, the vehicle marking area is marked by a vehicle marking frame; the first license plate indicia area is marked by a first license plate indicia frame.
The target detection model may be implemented by using a single-stage detection method, such as the Single Shot MultiBox Detector (SSD) algorithm or the YOLO algorithm, or by a two-stage detection method, such as the Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm.
Step S203, expanding vehicle marking areas contained in the plurality of first image frames, and cutting the corresponding first image frames according to the expanded vehicle marking areas to obtain a plurality of second image frames;
the direction of the enlargement processing for the vehicle mark region may be any one of up, down, left, and right, or any arbitrary direction.
Since most motor vehicle license plates are mounted at a low position on the vehicle, inaccurate vehicle detection may cause the lower part of the license plate to fall outside the vehicle marking area when cropping.
Expanding the vehicle marking area downward therefore reduces the chance that the license plate is cut off due to inaccurate vehicle detection.
The degree of expansion of the vehicle marking region may be set according to the specific implementation: a preset ratio may be used, for example, an expansion of 10% of the height of the vehicle marking region when expanding downward; or a preset size may be used, for example, an expansion of 1 cm when expanding downward.
Because most motor vehicle license plates are mounted at a low position on the vehicle, the position of the license plate relative to the vehicle needs to be considered when cropping the corresponding first image frames. Cropping the corresponding first image frame according to the expanded vehicle marking area includes the following step:
and cutting off the upper half part of the corresponding first image frame according to the width of the enlarged vehicle mark area and the preset aspect ratio.
The cutting of the upper half part of the first image frame can enable the license plate to occupy a larger area in the second image frame after cutting, so that the license plate can be detected more easily subsequently, and the condition of missing detection of the license plate is reduced.
The aspect ratio of the cropping may be set according to the specific implementation, for example, 1:1; this is not limited in any way in the embodiments of the present invention.
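As a minimal sketch of the expansion and cropping steps (the function names, the 10% downward-expansion ratio, and the 1:1 aspect ratio are assumptions taken from the examples above, not a definitive implementation):

```python
def expand_vehicle_box(x, y, w, h, img_h, ratio=0.10):
    """Expand a vehicle marking frame downward by `ratio` of its height,
    clamped to the image boundary (10% follows the example above)."""
    new_h = min(img_h - y, int(h * (1 + ratio)))
    return (x, y, w, new_h)


def crop_upper_half(x, y, w, h, aspect=1.0):
    """Cut off the upper part of the expanded vehicle area, keeping a
    bottom strip whose height is width / aspect (1:1 per the example),
    so the license plate occupies a larger share of the second frame."""
    keep_h = min(h, int(w / aspect))
    return (x, y + h - keep_h, w, keep_h)


box = expand_vehicle_box(100, 50, 200, 300, img_h=720)   # expanded downward
second = crop_upper_half(*box)                           # bottom strip kept
```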
Step S204, license plate detection is respectively carried out on the second image frames to obtain third image frames, and the third image frames comprise second license plate marking areas aiming at license plates;
and respectively inputting the second image frames to a pre-trained license plate detection model, carrying out license plate detection and outputting license plate detection results.
The license plate detection is used for detecting license plate coordinates, and the license plate detection result is a second license plate marking area aiming at the license plate.
The license plate detection model can likewise be implemented using a single-stage detection method, such as the Single Shot MultiBox Detector (SSD) algorithm or the YOLO algorithm, or a two-stage detection method, such as Faster R-CNN.
As an optional implementation manner, the license plate detection model may further output a license plate type, confidence information of whether the license plate is a license plate, and the like.
The license plate types can be used for judging the types of the license plates, such as blue plates, yellow plates, warning plates, new energy plates and the like. The confidence information can be used for assisting in judging whether the license plate is detected.
Step S205, determining license plate content of the license plate based on the first license plate marking area and the second license plate marking area.
When the second license plate marking area exists, determining license plate content according to the second license plate marking area; and when the second license plate marking area does not exist, determining license plate content according to the first license plate marking area.
In order to improve the accuracy and precision of license plate recognition, an image frame whose quality meets a preset requirement must be determined, from the first image frames or the third image frames, for license plate content recognition. After obtaining the plurality of third image frames, the method therefore further includes:
judging whether a corresponding second license plate marking area exists in each of the plurality of third image frames;
screening a fourth image frame with the license plate quality meeting the preset requirement from the plurality of third image frames when the corresponding second license plate marking area is determined to exist;
when the corresponding second license plate marking area does not exist, judging whether the corresponding first license plate marking area exists or not for each first image frame in the plurality of first image frames; and screening and obtaining a fourth image frame with the license plate quality meeting the preset requirement from the plurality of first image frames when the corresponding first license plate marking area is determined to exist.
Judging whether the license plate detection model used in step S204 detects a second license plate marking area: if so, screening license plate quality in the third image frames in which the second license plate marking area is detected; otherwise, judging whether the target detection model used in step S202 detects a first license plate marking area; if so, screening license plate quality in the first image frames in which the first license plate marking area is detected; otherwise, determining that no license plate is detected.
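The fallback logic above can be sketched as follows (the function name, the region representation, and the returned status labels are assumptions for illustration):

```python
def select_plate_source(second_region, first_region):
    """Prefer the second license plate marking area (re-detected in step
    S204); fall back to the first (from step S202); report no detection
    when neither model found a plate."""
    if second_region is not None:
        return second_region, "second"
    if first_region is not None:
        return first_region, "first"
    return None, "no_plate_detected"
```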
License plate quality evaluation is performed on the second license plate marking area/the first license plate marking area, and a fourth image frame whose license plate quality meets the preset requirement is obtained by screening from the plurality of third image frames/the first image frames, as follows:
when the second license plate marking area/the first license plate marking area is determined to be a license plate image, calculating the definition of the license plate image, judging whether the license plate is abnormal according to the license plate image, and calculating the inclination angle of the license plate;
and screening a fourth image frame with the license plate quality meeting a preset requirement from the plurality of third image frames/the first image frames, wherein the preset requirement is that the definition is not less than a first preset threshold, the inclination angle is not more than a second preset threshold, and the license plate is not abnormal.
And inputting the second license plate marking area/the first license plate marking area into a pre-trained license plate quality evaluation model, carrying out quality evaluation on the license plate, and outputting a fourth image frame with license plate quality meeting preset requirements.
The license plate quality evaluation model has the following four functions: (1) judging whether the second license plate marking area/the first license plate marking area is a license plate image or not; (2) calculating the definition of the license plate image; (3) judging whether the license plate is abnormal or not according to the license plate image; (4) and calculating the inclination angle of the license plate. The four functions can be realized by an independent license plate quality evaluation model, or a plurality of models can be combined to form a license plate quality evaluation model, and the four functions are respectively realized by each sub-model.
As shown in fig. 3, an embodiment of the present invention provides a schematic diagram of a license plate quality evaluation model 300, including:
a license plate existence classification model 301, configured to determine whether a second license plate marking area/a first license plate marking area is a license plate image according to an input second license plate marking area/first license plate marking area;
a definition screening model 302, configured to calculate a definition of the license plate image, and screen an image frame with the definition not less than a first preset threshold;
an abnormal license plate screening model 303, configured to determine whether a license plate is abnormal according to the license plate image, and screen an image frame in which the license plate is not abnormal;
and the inclination angle screening model 304 is used for calculating the inclination angle of the license plate and screening the image frames with the inclination angles not greater than a second preset threshold value.
And determining a fourth image frame with the license plate quality meeting the preset requirement according to the screening results of the definition screening model, the abnormal license plate screening model and the inclination angle screening model.
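The combination of the four screening models can be sketched as a single predicate; the thresholds (definition not less than 50, inclination not more than 25 degrees) follow the examples given elsewhere in the text, and the function and field names are assumptions:

```python
def plate_quality_ok(is_plate, definition, tilt_deg, abnormal,
                     q_min=50, angle_max=25):
    """Combine the outputs of the existence, definition, abnormality,
    and inclination screening models into one pass/fail decision."""
    return (is_plate
            and definition >= q_min
            and abs(tilt_deg) <= angle_max
            and not abnormal)


frames = [
    {"is_plate": True, "definition": 80, "tilt_deg": 10, "abnormal": False},
    {"is_plate": True, "definition": 30, "tilt_deg": 10, "abnormal": False},
    {"is_plate": True, "definition": 90, "tilt_deg": 40, "abnormal": False},
]
fourth_frames = [f for f in frames if plate_quality_ok(**f)]
```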
It should be noted that the license plate existence classification model, the definition screening model, the abnormal license plate screening model and the inclination angle screening model are obtained by independent training.
For passing vehicles in the original image frames, the embodiment of the present invention provides a license plate quality evaluation model that screens image frames by whether they contain a license plate image, by definition, by angle, and by whether the license plate is abnormal. This improves license plate content recognition precision, reduces false alarms, and reduces the time spent performing license plate recognition on unnecessary image frames.
The license plate quality evaluation model or the license plate existence classification model adopts the following implementation mode to determine that the second license plate marking area is a license plate image:
when the width-to-height ratio of the second license plate marking area is determined to be within a preset threshold range and the width of the second license plate marking area is larger than a third preset threshold, expanding the second license plate marking area;
inputting the expanded second license plate marking area into a pre-trained classification model, and judging whether the second license plate marking area is a license plate image;
and when the output judgment result is a license plate image, determining that the second license plate marking area is a license plate image.
The license plate quality evaluation model or the license plate existence classification model adopts the following implementation mode to determine that the first license plate marking area is a license plate image:
when the width-to-height ratio of the first license plate marking area is determined to be within a preset threshold range and the width of the first license plate marking area is larger than a third preset threshold, expanding the first license plate marking area;
inputting the expanded first license plate marking area into a pre-trained classification model, and judging whether the first license plate marking area is a license plate image;
and when the judgment result that the license plate image is output is obtained, determining that the first license plate marking area is the license plate image.
First, the width-to-height ratio of the license plate marking region is determined: r = w / h, where w is the width of the license plate marking region and h is its height.
The license plate marking area comprises the first license plate marking area and the second license plate marking area, and details are not repeated.
The preset threshold range of the aspect ratio may be specifically set according to specific implementation, for example, the preset threshold range of the aspect ratio is set to [0.7, 7.5], which is not limited in this embodiment of the present invention.
If the width-height ratio of the license plate marking region does not accord with the preset threshold range, directly judging that the width-height ratio of the license plate marking region is abnormal, and not performing the next judgment; and if the width-height ratio of the license plate marking region meets the preset threshold range, judging the width of the license plate marking region.
The third preset threshold of the width may be specifically set according to a specific implementation condition, for example, the third preset threshold of the width of the single-layer license plate is set to be 50 according to actual experience, and the third preset threshold of the width of the double-layer license plate is set to be 40 according to actual experience, which is not limited in this embodiment of the present invention.
If the width of the license plate marking area is smaller than the third preset threshold, directly judging that the width of the license plate marking area is too small, and not performing the next judgment; and if the width of the license plate marking area is not smaller than the third preset threshold value, expanding the license plate marking area.
It should be noted that the expansion direction and degree for the license plate marking region may be set according to the specific embodiment: the expansion direction can be any of up, down, left, and right, or any combination of directions; the expansion degree may be a preset ratio or a preset size. For example, the license plate marking region may be extended upward and downward by 1/10 of its height and leftward and rightward by 1/10 of its width, to reduce license plate incompleteness caused by inaccurate license plate detection.
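A sketch of the gating and expansion under the example values above (aspect range [0.7, 7.5], minimum width 50, 1/10 padding; the function name and the clamping to the image boundary are assumptions):

```python
def gate_and_expand_plate(x, y, w, h, img_w, img_h,
                          r_range=(0.7, 7.5), min_w=50, pad=0.1):
    """Reject the plate marking region if its width-to-height ratio or
    width is out of bounds; otherwise expand it by `pad` of its height
    (up/down) and of its width (left/right), clamped to the image."""
    r = w / h
    if not (r_range[0] <= r <= r_range[1]) or w < min_w:
        return None  # rejected before the classification model is run
    dx, dy = int(w * pad), int(h * pad)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(img_w, x + w + dx), min(img_h, y + h + dy)
    return (x0, y0, x1 - x0, y1 - y0)
```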
And inputting the expanded license plate marking area into a pre-trained classification model, and judging whether the license plate is an image of the license plate. The classification model can use an AlexNet classification model, a VGGNet classification model, a ResNet classification model and the like, and if the judgment result output by the classification model is a non-license plate image, the definition screening model, the abnormal license plate screening model and the inclination angle screening model are not used for carrying out next judgment; and if the judgment result output by the classification model is the license plate image, continuing to use the definition screening model, the abnormal license plate screening model and the inclination angle screening model to perform the next judgment.
The definition screening model can adopt an AlexNet classification model, a VGGNet classification model, a ResNet classification model and the like to classify a second license plate marking region/a first license plate marking region of an input license plate image, the definition probability of each license plate marking region is output as a classification result, and the definition probability is recorded as definition Q.
It should be noted that the first preset threshold of the definition Q may be specifically set according to a specific implementation, for example, the first preset threshold 50 is set, and the embodiment of the present invention is not limited in this respect.
The expanded license plate marking region is input into the inclination angle screening model. The model is implemented by an image processing method; any calculation method capable of computing the inclination angle of the license plate is applicable, and the embodiments of the present invention are not limited in this respect. As an optional implementation, the inclination angle screening model calculates the inclination angle of the license plate as follows:
graying and threshold segmentation are carried out on the expanded second license plate mark area/first license plate mark area to obtain a binary image;
performing connected domain segmentation and connected domain marking on the binary image, determining each character in the binary image, and determining the highest point and the lowest point of each character;
respectively performing straight-line fitting on the highest points and the lowest points, determining a first straight line through the highest points and a second straight line through the lowest points, and calculating the average slope k of the slopes of the first and second straight lines;
and calculating the inclination angle θ of the license plate region from the average slope: θ = atan(k).
The threshold segmentation may be implemented by global threshold segmentation or local threshold segmentation, which is not limited in this embodiment of the present invention.
Each character in the binary image may be determined by performing a character box marking on the character, which is not limited in this embodiment of the present invention.
The determined highest point of each character is put into a set U, and the determined lowest point into a set D. From the points in U and D, straight-line fitting determines a first straight line l_U through the highest points of all characters and a second straight line l_D through the lowest points of all characters, whose slopes are k_U and k_D. The average slope is then k = (k_U + k_D) / 2.
It should be noted that the second preset threshold of the inclination angle θ may be specifically set according to a specific implementation, for example, the second preset threshold is set to 25 degrees in consideration that in practical applications, when the inclination angle of the license plate exceeds 25 degrees, the human eye is hardly able to recognize the inclination angle, which is not limited in any way by the embodiment of the present invention.
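Given the characters' highest and lowest points, the line fitting and angle computation above can be sketched as follows (a plain least-squares slope; the point format is an assumption, and the binarization and connected-component extraction are omitted):

```python
import math

def _slope(xs, ys):
    """Least-squares slope of a fitted straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def plate_tilt_angle(char_points):
    """char_points: list of (x, top_y, bottom_y) per character.
    Fit line l_U through the highest points (set U) and l_D through the
    lowest points (set D), average their slopes k_U and k_D, and return
    theta = atan(k) in degrees."""
    xs = [p[0] for p in char_points]
    k_u = _slope(xs, [p[1] for p in char_points])   # slope of l_U
    k_d = _slope(xs, [p[2] for p in char_points])   # slope of l_D
    k = (k_u + k_d) / 2
    return math.degrees(math.atan(k))

# characters rising 0.1 px per px across the plate
points = [(x, 10 + 0.1 * x, 40 + 0.1 * x) for x in range(0, 70, 10)]
theta = plate_tilt_angle(points)
```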
The abnormal license plate screening model can adopt an AlexNet classification model, a VGGNet classification model, a ResNet classification model and the like to classify the input license plate images in the expanded license plate marking region, and output whether each license plate is abnormal or not as a classification result.
The above-mentioned license plate abnormality means whether the license plate is contaminated, damaged, shielded by an obstacle, or the like.
When the definition of the license plate image is low, erroneous results from the abnormal license plate screening are easily produced, and a license plate with an abnormal number of characters is itself abnormal. The character count already produced by the inclination angle screening model is therefore used in combination to assist in judging whether the license plate is abnormal. After judging whether the license plate is abnormal according to the license plate image, the method further includes:
calculating the total number of characters in the binary image corresponding to the determined non-abnormal license plate;
and screening out fourth image frames corresponding to the license plates with the total number smaller than a fourth preset threshold value.
The total number of the characters may be obtained by obtaining the number of the character frames in the tilt angle filtering model, or may be calculated by using a detection method.
It should be noted that the fourth preset threshold may be specifically set according to a specific implementation, for example, the number of the fourth preset thresholds is set to 7, which is not limited in this embodiment of the present invention.
When the total number of the characters is less than 7, determining the license plate as an abnormal license plate; and when the total number of the characters is not less than 7 and the license plate output result of the abnormal license plate screening model is not abnormal, the license plate is considered to be normal.
When the definition Q of the license plate is greater than or equal to 50, the inclination angle θ is less than or equal to 25 degrees, and the license plate is not abnormal, the image frame is considered a fourth image frame whose license plate quality meets the preset requirement.
In the embodiment of the present invention, the license plate is subjected to quality evaluation: the width-to-height ratio and width of the license plate are limited; whether the region is a license plate is classified; and the definition of the license plate, the inclination angle of the license plate, and whether the license plate is stained are judged. License plates that do not conform to the rules or have poor quality are screened out, and only the content of the remaining license plates is recognized. This improves license plate recognition precision, reduces false alarms, shortens the time spent recognizing license plate content in unnecessary image frames, and improves license plate content recognition efficiency.
After a fourth image frame with the license plate quality meeting the preset requirement is obtained through screening, determining the license plate content of the license plate based on the first license plate marking area and the second license plate marking area in the following mode:
inputting the fourth image frame into a pre-trained license plate recognition model;
performing feature extraction on the input fourth image frame based on the first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1
Based on the incidence matrix calculation layer in the license plate recognition model, the dependency among the feature vectors of the feature matrix H1 is enhanced to obtain a dependency matrix O;
extracting the characteristics of the dependency matrix O based on the RNN characteristic extraction layer in the license plate recognition model to obtain a characteristic matrix H2
Based on a first classifier in the license plate recognition model, the character and its probability corresponding to each time step t are predicted; the maximum-probability characters are sorted by time step to obtain the license plate content, which is output, yielding the n output license plate contents.
As shown in fig. 4, an embodiment of the present invention provides a schematic diagram of a license plate recognition model.
The license plate recognition model comprises a first CNN feature extraction layer, an incidence matrix calculation layer, an RNN feature extraction layer and a first classifier.
When the license plate recognition model performs CTC decoding, one character of the license plate can occupy more than one time step; if the local association of the feature vectors is not considered, decoding errors can result, for example mis-decoding the first Chinese character of a license plate that has a left-right structure. To alleviate this problem, an embodiment of the present invention adds a correlation matrix calculation layer between the first CNN feature extraction layer and the RNN feature extraction layer. This layer considers not only the distance information between feature vectors but also the correlation information between them, and can therefore be used to enhance the correlation between the different time-step features of a same character extracted by the first CNN feature extraction layer.
The correlation matrix calculation layer enhances the dependency among the feature vectors of the feature matrix H1 to obtain the dependency matrix O in the following manner:
calculating a distance matrix characterizing the distances between different feature vectors in the feature matrix H1;
calculating an incidence matrix characterizing the incidence relations between different feature vectors in the feature matrix H1;
calculating the product of the distance matrix and the incidence matrix to obtain a correlation matrix;
calculating the product of the correlation matrix, the feature matrix H1, and a preset weight matrix to obtain the dependency matrix O.
The distance matrix is S_D(i, j) = exp(−d_ij + δ) / (exp(−d_ij + δ) + 1), where d_ij = |i − j| represents the geometric distance between different feature positions in the feature matrix H1, and δ is a preset scale factor;
the incidence matrix is S_A(i, j) = a_i · a_j / (‖a_i‖ ‖a_j‖), where a_i is the feature vector h_i in the feature matrix H1, i = 1, …, T, and T is the total number of positions in H1;
the correlation matrix is S = S_A * S_D (element-wise product);
the dependency matrix is O = S H1 W_c, where W_c is the preset weight matrix.
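A numerical sketch of the incidence matrix calculation layer described above (NumPy; the identity W_c and δ = 1 are placeholder assumptions, and S_A is taken as cosine similarity per the formula for S_A(i, j)):

```python
import numpy as np

def dependency_matrix(H1, Wc, delta=1.0):
    """Compute O = (S_A * S_D) H1 Wc.
    H1: (T, d) feature matrix from the first CNN layer.
    Wc: (d, d) preset weight matrix.  delta: preset scale factor."""
    T = H1.shape[0]
    idx = np.arange(T)
    d_ij = np.abs(idx[:, None] - idx[None, :])   # geometric distance |i - j|
    e = np.exp(-d_ij + delta)
    S_D = e / (e + 1.0)                          # distance matrix
    norms = np.linalg.norm(H1, axis=1, keepdims=True)
    S_A = (H1 @ H1.T) / (norms @ norms.T)        # cosine incidence matrix
    S = S_A * S_D                                # element-wise correlation matrix
    return S @ H1 @ Wc                           # dependency matrix O

rng = np.random.default_rng(0)
H1 = rng.standard_normal((5, 4))
O = dependency_matrix(H1, np.eye(4))
```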
In the embodiment of the present invention, a correlation matrix calculation layer is added between the first CNN feature extraction layer and the RNN feature extraction layer. This layer considers not only the distance information between features but also their adjacency information, and can be used to enhance the dependency and relevance among the feature vectors of the feature matrix H1 extracted by the first CNN feature extraction layer, improving the recognition precision of left-right-structured first Chinese characters of the license plate and achieving a higher license plate recognition rate.
In the related art, CTC loss is used as the loss function of the license plate recognition model. Many different paths can decode to the correct result, and each path affects the extraction of CNN features; because the specific position of each character is unknown during feature extraction, it is difficult for the CNN to extract effective features.
Secondly, when the image frame quality is poor, similar-looking characters in the license plate are easily misrecognized. As shown in FIG. 5, an embodiment of the present invention provides a schematic diagram of a license plate image frame containing similar characters. It can be seen that the number 0 and the letter D, the number 2 and the letter Z, the number 1 and the letter I, etc., are similar characters. Table 1 below provides examples of similar characters in license plates.
TABLE 1 Similar character set table

Similar character set number    Characters
1                               2, Z
2                               0, D, U, Q
3                               8, B
4                               1, T, Y
5                               7, T, Z
6                               4, A
7                               N, H
8                               E, F
9                               5, S
10                              6, G
Finally, when the license plate recognition model performs CTC character decoding, adjacent identical characters are merged. For example, the classifier output "zhe _ a _1__22_3_45" is decoded by CTC as "zhe a 12345"; thus, when there are consecutive identical characters in a license plate, license plate characters can be lost.
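The merging behavior can be reproduced with a minimal greedy CTC decoder (a sketch for illustration, not the model's actual decoder):

```python
def ctc_greedy_decode(steps, blank="_"):
    """Collapse adjacent repeats, then drop blanks, as CTC decoding does.
    Consecutive identical plate characters with no blank in between
    therefore merge into one, which is the character-loss issue above."""
    out, prev = [], None
    for s in steps:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return "".join(out)

# the two adjacent "2" steps with no blank between them collapse into one "2"
decoded = ctc_greedy_decode(
    ["zhe", "_", "A", "_", "1", "_", "_", "2", "2", "_", "3", "_", "4", "5"])
```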
In order to overcome the above problems, the embodiment of the present invention adopts the following implementation manner to train the license plate recognition model in advance:
inputting image frames in pre-collected sample data into a license plate recognition model;
adjusting parameters of the license plate recognition model according to a Connectionist Temporal Classification (CTC) loss and a Cumulative Cross Entropy (CCE) loss, so that the license plate recognition model outputs the license plate content corresponding to the image frame;
the sample data comprises an image frame containing a license plate area and license plate content corresponding to the image frame; the CTC loss is obtained by calculation according to the arrangement sequence of characters in the output license plate content and the arrangement sequence of characters in the license plate content corresponding to the image frame; the CCE loss is calculated according to the occurrence frequency of the same character in the output license plate content and the occurrence frequency of the same character in the license plate content corresponding to the image frame.
As shown in fig. 6, an embodiment of the present invention provides a schematic diagram of training a license plate recognition model.
And optimizing weights of the RNN branch including the correlation matrix calculation layer, the RNN feature extraction layer, and the first classifier using CTC loss, and optimizing weights of the CNN branch including the first CNN feature extraction layer, the second CNN feature extraction layer, and the second classifier using CCE loss.
It should be noted that, when the license plate recognition model is used, the second CNN feature extraction layer and the second classifier are omitted, so as to reduce time consumption of license plate recognition and improve efficiency of license plate recognition.
In the embodiment of the present invention, the CCE loss is used to assist the training of the CRNN + CTC loss branch but is removed in the inference stage; this adds no inference time while improving the recognition precision of similar characters and of adjacent identical characters, thereby improving the recognition precision of license plate characters.
The method of adjusting the parameters of the license plate recognition model according to the Connectionist Temporal Classification (CTC) loss includes the following steps:
Feature extraction is performed on the image frames in the sample data based on the first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
based on the incidence matrix calculation layer in the license plate recognition model, the dependency among the feature vectors of the feature matrix H1 is enhanced to obtain a dependency matrix O;
based on the RNN feature extraction layer in the license plate recognition model, feature extraction is performed on the dependency matrix O to obtain a feature matrix H2;
based on a first classifier in the license plate recognition model, predicting the character and probability corresponding to each time step t, sorting the maximum-probability characters by time step, and obtaining and outputting the first license plate content;
calculating the Connectionist Temporal Classification (CTC) loss according to the arrangement order of the characters in the first license plate content and the arrangement order of the characters in the license plate content corresponding to the image frame in the sample data;
and adjusting parameters of the license plate recognition model according to the CTC loss.
CTCs focus on the order of arrangement between characters.
The license plate recognition model further comprises a second CNN feature extraction layer and a second classifier, and parameters of the license plate recognition model are adjusted according to the cumulative cross entropy CCE loss, wherein the parameters comprise:
performing feature extraction on the feature matrix H1 based on a second CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H3;
based on a second classifier in the license plate recognition model, predicting the character and probability corresponding to each time step t, sorting the maximum-probability characters by time step, and obtaining and outputting the second license plate content;
calculating the cumulative cross entropy CCE loss according to the occurrence frequency of the same character in the second license plate content and the occurrence frequency of the same character in the license plate content corresponding to the image frame in the sample data;
and adjusting parameters of the license plate recognition model according to the CCE loss.
The CCE loss attends to the occurrence frequency of each character rather than the order of the license plate characters; it only needs to predict the occurrence frequency of each character accurately. Since the occurrence frequency of each character can be converted into the cumulative probability of each class, CCE requires only the characters and their counts from the sequence label as supervision, without sequence-order information.
The embodiment of the invention calculates the cumulative cross-entropy (CCE) loss in the following manner:
calculating, for each character c, the cumulative probability of its occurrence over all positions in the feature matrix H3:

o_c = Σ_{i=1}^{T} y_i^c

wherein y_i^c is the probability of the character c appearing at position i in the feature matrix H3, i = 1, 2, …, T, T is the total number of positions in the feature matrix H3, c ∈ C ∪ {blank}, C is the set of license plate characters, and blank denotes the blank (space) class;

normalizing the above cumulative probability o_c to obtain the normalized cumulative probability:

ŷ_c = o_c / Σ_{c′} o_{c′}

calculating the number of occurrences L_c of the character c in the license plate content corresponding to the image frame in the sample data, and normalizing L_c:

l_c = L_c / Σ_{c′} L_{c′}

computing the cumulative cross-entropy CCE loss:

CCE(I, S) = − Σ_c l_c · log ŷ_c

wherein I is the input license plate image and S is the second license plate content.
It should be noted that the probability y_i^c of each license plate character class c at each position i (i = 1, 2, …, T) in the feature map of the last layer is obtained by a softmax classifier.
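The cumulative cross-entropy computation described above can be sketched in plain Python; the per-position probability table below is an illustrative stand-in for the softmax outputs over the feature matrix H3:

```python
import math

def cce_loss(probs, label):
    """Cumulative cross-entropy sketch.
    probs: list (length T) of {character: softmax probability} dicts,
           standing in for the T positions of the last-layer feature map.
    label: ground-truth license plate string.
    Characters absent from `probs` are ignored (an assumption)."""
    classes = sorted({ch for row in probs for ch in row})
    # cumulative probability o_c of each character over all T positions
    o = {c: sum(row.get(c, 0.0) for row in probs) for c in classes}
    total_o = sum(o.values())
    y_hat = {c: o[c] / total_o for c in classes}      # normalized cumulative probability
    # normalized occurrence counts l_c taken from the label
    counts = {c: label.count(c) for c in classes}
    total_l = sum(counts.values()) or 1
    l = {c: counts[c] / total_l for c in classes}
    # CCE(I, S) = -sum_c l_c * log(y_hat_c)
    return -sum(l[c] * math.log(y_hat[c]) for c in classes if l[c] > 0)
```

Note that, as the text states, only the character counts of the label are used; the order of the characters in `label` never enters the computation.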
After obtaining the n license plate contents of the license plate output by the license plate recognition model, the method further comprises the following steps:
and when it is determined that n is not smaller than a preset value, or that n is smaller than the preset value but not equal to 0 and the n license plate contents conform to a preset license plate coding rule, determining the license plate content of the license plate according to the n license plate contents.
It should be noted that the preset value may be set according to the specific implementation scenario, for example, to 3; this is not limited in the embodiment of the present invention.
A specific implementation of determining the license plate content of the license plate may be to vote, according to the n license plate contents, to determine the most probable character at each position and the arrangement order of the characters in the final license plate content.
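One hypothetical realization of this voting step is a per-position majority vote, sketched below; the alignment-by-most-common-length heuristic is an assumption, not part of the embodiment:

```python
from collections import Counter

def vote_plate(contents):
    """Per-position majority vote over the n recognized plate strings."""
    # align positions by keeping only strings of the most common length
    length = Counter(len(s) for s in contents).most_common(1)[0][0]
    aligned = [s for s in contents if len(s) == length]
    # at each position, keep the most frequently recognized character
    return ''.join(
        Counter(s[i] for s in aligned).most_common(1)[0][0]
        for i in range(length)
    )
```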
In the related art, no license plate state is output for a vehicle image in which no license plate is detected. For a vehicle image in which a license plate is detected, the license plate state is obtained by reclassifying normal license plates and partially stained license plates according to a character segmentation result. The segmentation thresholds are set manually, so the generalization performance is poor; whether a license plate is a stained or occluded license plate is judged directly from the segmented characters under different segmentation thresholds, the cases of intentional occlusion and passive occlusion cannot be distinguished, and the false alarm rate is high.
As shown in fig. 7, an embodiment of the invention provides a schematic diagram of image frames in which license plates are passively occluded.
The license plates in the two image frames are occluded by a pedestrian and a guideboard, respectively.
The scheme in the related art cannot identify such passively occluded license plates.
In order to overcome the above problem, after obtaining the n license plate contents of the output license plate, the embodiment of the present invention further comprises:
when it is determined that n is not smaller than a preset value, or that n is smaller than the preset value but not equal to 0 and the n license plate contents conform to a preset license plate coding rule, determining a license plate state label identifying the license plate state as normal;
and when it is determined that n is smaller than the preset value but not equal to 0 and the n license plate contents do not conform to the preset license plate coding rule, or that n is equal to 0, respectively inputting the plurality of second image frames into a pre-trained license plate state classification model, obtaining the output license plate state labels identifying the license plate state as normal/abnormal, and determining the license plate state label as the license plate state label output by the license plate state classification model.
The preset value is the same as the preset value used when the license plate content of the license plate is determined.
The preset license plate coding rule may be set according to the specific implementation conditions, for example according to a relevant agreed specification, and may specifically be: the first digit of the license plate is a Chinese character representing the provincial administrative region where the vehicle user is located, the second digit is an English letter representing the regional administrative district where the vehicle user is located, and the remaining digits are the serial number code of the license plate.
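A minimal sketch of such a rule check, assuming the simplified pattern described above (one Chinese character, one English letter, five alphanumeric serial characters; real plate formats vary and the exact pattern is an assumption):

```python
import re

# Hypothetical encoding of the rule: a CJK character for the province,
# an uppercase English letter for the regional code, then five
# alphanumeric serial characters.
PLATE_RULE = re.compile(r'[\u4e00-\u9fff][A-Z][A-Z0-9]{5}')

def conforms_to_rule(plate):
    """Return True if the recognized plate matches the assumed rule."""
    return PLATE_RULE.fullmatch(plate) is not None
```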
And when the n is smaller than a preset value, judging whether the n is 0 or not.
And when the n is equal to 0, respectively inputting the second image frames into a pre-trained license plate state classification model, obtaining output license plate state labels for identifying the license plate state as normal/abnormal, and determining the license plate state labels as the license plate state labels output by the license plate state classification model.
When n is not equal to 0, judging whether the n license plate contents conform to the preset license plate coding rule; if they conform, determining a license plate state label marking the license plate state as normal; if they do not conform, respectively inputting the second image frames into a pre-trained license plate state classification model, obtaining the output license plate state labels identifying the license plate state as normal/abnormal, and determining the license plate state label as the license plate state label output by the license plate state classification model.
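The branching just described can be condensed into the following sketch, in which `rule_check` and `classify_frames` are hypothetical stand-ins for the coding-rule test and the pre-trained license plate state classification model:

```python
def decide_plate_state(n, preset, contents, rule_check, classify_frames):
    """Decision sketch: n recognized contents against a preset threshold."""
    if n >= preset:
        return 'normal'
    if n != 0 and rule_check(contents):
        return 'normal'
    # n == 0, or the contents fail the coding rule:
    # fall back to the state classification model
    return classify_frames()
```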
The license plate state classification model may adopt an AlexNet classification model, a VGGNet classification model, a ResNet classification model, or the like, which is not limited in this embodiment of the present invention.
An abnormal license plate state refers to intentional human staining or occlusion; it does not include passive occlusion such as the two cases shown in fig. 7, or occlusion caused by leaves blocking the image acquisition device, and the like.
It should be noted that the license plate status label for identifying whether the license plate status is normal or abnormal is a label determined according to the decision of the plurality of second image frames.
The license plate state label is determined through decision, so that the classification precision of the stained license plate can be improved, and the false alarm of the non-artificially stained shielding license plate is reduced.
As shown in fig. 8, an embodiment of the present invention provides a flowchart of a specific implementation manner of a license plate recognition method, including:
step S801, acquiring N original image frames acquired after a vehicle reaches a preset acquisition area, wherein N is greater than 3;
in step S802, the serial number i of the current original image frame is set to 0, and the current number n of the fourth image frame whose license plate quality meets the preset requirement is set to 0.
Step S803, determine whether i is smaller than N, if yes, go to step S804, otherwise go to step S815;
step S804, respectively carrying out vehicle detection and license plate detection on the plurality of original image frames to obtain a plurality of first image frames, wherein the first image frames comprise a vehicle mark area for a vehicle and a first license plate mark area for a license plate;
step S805, downwards expanding vehicle marking areas contained in a plurality of first image frames, and cutting the corresponding first image frames according to the expanded vehicle marking areas to obtain a plurality of second image frames;
step S806, respectively performing license plate detection on the plurality of second image frames to obtain a plurality of third image frames, wherein the third image frames comprise second license plate marking areas for license plates;
step S807, judging whether the second license plate marking area exists, if so, executing step S808, otherwise, executing step S809;
step S808, inputting the second license plate marking area into a license plate quality evaluation model, and executing step S811;
step S809, determining whether the first license plate mark area exists, if yes, executing step S810, otherwise, executing step S814;
step S810, inputting the first license plate marking area into a license plate quality evaluation model, and executing step S811;
step S811, judging whether the license plate quality meets the preset requirement, if so, executing step S812, otherwise, executing step S814;
step S812, add one to the current n;
step S813, inputting the fourth image frame meeting the preset requirement to a pre-trained license plate recognition model, and recording the output license plate content;
step S814, adding one to the current i, and executing step S803;
step S815, determining whether n is not less than 3, if yes, executing step S818, otherwise, executing step S816;
step S816, determining whether n is equal to 0, if yes, executing step S819, otherwise, executing step S817;
step S817, judging whether the n license plate contents conform to the preset license plate coding rule; if yes, executing step S818; otherwise, executing step S819.
Step S818, determining the license plate content of the license plate according to the n license plate contents, and determining a license plate state label for marking the license plate state as normal;
step S819, inputting the second image frame into a pre-trained license plate state classification model, obtaining an output license plate state label for identifying whether the license plate state is normal or abnormal, and determining that the license plate state label is the license plate state label output by the license plate state classification model.
Embodiment 2
An embodiment of the present invention provides a schematic diagram of a license plate recognition device 900, which includes a memory 901 and a processor 902, as shown in fig. 9, where:
the memory is used for storing a computer program;
the processor is used for reading the program in the memory and executing the following steps:
acquiring a plurality of original image frames acquired after a vehicle reaches a preset acquisition area;
respectively carrying out vehicle detection and license plate detection on the multiple original image frames to obtain multiple first image frames, wherein the first image frames comprise a vehicle marking area for a vehicle and a first license plate marking area for a license plate;
expanding vehicle marking areas contained in the plurality of first image frames, and cutting the corresponding first image frames according to the expanded vehicle marking areas to obtain a plurality of second image frames;
respectively carrying out license plate detection on the plurality of second image frames to obtain a plurality of third image frames, wherein the third image frames comprise second license plate marking areas aiming at license plates;
determining license plate content of the license plate based on the first license plate marking area and the second license plate marking area.
Optionally, after obtaining the plurality of third image frames, the processor is further configured to:
judging whether a corresponding second license plate marking area exists in each of the plurality of third image frames;
screening a fourth image frame with the license plate quality meeting the preset requirement from the plurality of third image frames when the corresponding second license plate marking area is determined to exist;
when the corresponding second license plate marking area does not exist, judging whether the corresponding first license plate marking area exists or not for each first image frame in the plurality of first image frames; and screening and obtaining a fourth image frame with the license plate quality meeting the preset requirement from the plurality of first image frames when the corresponding first license plate marking area is determined to exist.
Optionally, the processor determines license plate content of the license plate based on the first license plate indicia area and the second license plate indicia area, including:
inputting the fourth image frame into a pre-trained license plate recognition model;
performing feature extraction on the input fourth image frame based on a first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing, based on an incidence matrix calculation layer in the license plate recognition model, the dependency among the feature vectors of the feature matrix H1 to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on an RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
predicting, based on a first classifier in the license plate recognition model, the character and probability corresponding to each time step t, sorting the characters with the maximum probability according to the time steps to obtain the license plate content, and outputting the license plate content to obtain the n license plate contents of the output license plate.
Optionally, after obtaining the output n license plate contents of the license plate, the processor is further configured to:
and when the n is determined to be not smaller than a preset value, or the n is determined to be smaller than the preset value and not equal to 0, and the n license plate contents accord with a preset license plate coding rule, determining the license plate contents of the license plate according to the n license plate contents.
Optionally, the processor trains the license plate recognition model in advance, including:
inputting image frames in pre-collected sample data into a license plate recognition model;
adjusting parameters of the license plate recognition model according to the connectionist temporal classification (CTC) loss and the cumulative cross-entropy (CCE) loss, so that the license plate recognition model outputs the license plate content corresponding to the image frame;
the sample data comprises an image frame containing a license plate region and license plate content corresponding to the image frame; the CTC loss is obtained by calculation according to the arrangement sequence of characters in the output license plate content and the arrangement sequence of characters in the license plate content corresponding to the image frame; and the CCE loss is calculated according to the occurrence frequency of the same character in the output license plate content and the occurrence frequency of the same character in the license plate content corresponding to the image frame.
Optionally, the processor adjusts parameters of the license plate recognition model according to the connectionist temporal classification (CTC) loss, comprising:
performing feature extraction on the image frame in the sample data based on the first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing, based on the incidence matrix calculation layer in the license plate recognition model, the dependency among the feature vectors of the feature matrix H1 to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on the RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
predicting, based on the first classifier in the license plate recognition model, the character and probability corresponding to each time step t, and sorting the characters with the maximum probability according to the time steps to obtain and output the first license plate content;
calculating the CTC loss according to the arrangement order of the characters in the first license plate content and the arrangement order of the characters in the license plate content corresponding to the image frame in the sample data;
and adjusting parameters of the license plate recognition model according to the CTC loss.
Optionally, the license plate recognition model further includes a second CNN feature extraction layer and a second classifier, and the processor adjusts parameters of the license plate recognition model according to the cumulative cross-entropy (CCE) loss, comprising:
performing feature extraction on the feature matrix H1 based on the second CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H3;
predicting, based on the second classifier in the license plate recognition model, the character and probability corresponding to each time step t, and sorting the characters with the maximum probability according to the time steps to obtain and output the second license plate content;
calculating the cumulative cross entropy CCE loss according to the occurrence times of the same characters in the second license plate content and the occurrence times of the same characters in the license plate content corresponding to the image frame in the sample data;
and adjusting parameters of the license plate recognition model according to the CCE loss.
Optionally, the processor enhances the dependency among the feature vectors of the feature matrix H1 to obtain the dependency matrix O, comprising:
calculating a distance matrix characterizing the distances between different feature vectors in the feature matrix H1;
calculating an incidence matrix characterizing the incidence relations between different feature vectors in the feature matrix H1;
calculating the product of the distance matrix and the incidence matrix to obtain a correlation matrix;
calculating the product of the correlation matrix, the feature matrix H1, and a preset weight matrix to obtain the dependency matrix O.
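A plain-Python sketch of these four steps follows; the Euclidean distance metric, the dot-product incidence measure, and the use of ordinary matrix products are assumptions, since the embodiment does not fix these choices:

```python
import math

def matmul(A, B):
    # plain-Python matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def dependency_matrix(H, W):
    """Sketch of the incidence-matrix layer for a T x d feature matrix H1
    and a d x d preset weight matrix W (shapes assumed)."""
    T = len(H)
    # distance matrix: Euclidean distance between feature vectors (assumed metric)
    D = [[math.dist(H[i], H[j]) for j in range(T)] for i in range(T)]
    # incidence matrix: dot-product similarity between feature vectors (assumed)
    A = [[sum(x * y for x, y in zip(H[i], H[j])) for j in range(T)]
         for i in range(T)]
    R = matmul(D, A)               # correlation matrix = distance x incidence
    return matmul(matmul(R, H), W) # dependency matrix O = R x H1 x W
```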
Optionally, the processor calculates the cumulative cross-entropy (CCE) loss according to the number of occurrences of the same character in the second license plate content and the number of occurrences of the same character in the license plate content corresponding to the image frame in the sample data, comprising:
calculating, for each character c, the cumulative probability of its occurrence over all positions in the feature matrix H3:

o_c = Σ_{i=1}^{T} y_i^c

wherein y_i^c is the probability of the character c appearing at position i in the feature matrix H3, i = 1, 2, …, T, T is the total number of positions in the feature matrix H3, c ∈ C ∪ {blank}, C is the set of license plate characters, and blank denotes the blank (space) class;

normalizing the cumulative probability o_c to obtain the normalized cumulative probability:

ŷ_c = o_c / Σ_{c′} o_{c′}

calculating the number of occurrences L_c of the character c in the license plate content corresponding to the image frame in the sample data, and normalizing L_c:

l_c = L_c / Σ_{c′} L_{c′}

computing the cumulative cross-entropy CCE loss:

CCE(I, S) = − Σ_c l_c · log ŷ_c

wherein I is the input license plate image and S is the second license plate content.
Optionally, after obtaining the output n license plate contents of the license plate, the processor is further configured to:
when it is determined that n is not smaller than a preset value, or that n is smaller than the preset value but not equal to 0 and the n license plate contents conform to a preset license plate coding rule, determining a license plate state label identifying the license plate state as normal;
and when it is determined that n is smaller than the preset value but not equal to 0 and the n license plate contents do not conform to the preset license plate coding rule, or that n is equal to 0, respectively inputting the plurality of second image frames into a pre-trained license plate state classification model, obtaining the output license plate state labels identifying the license plate state as normal/abnormal, and determining the license plate state label as the license plate state label output by the license plate state classification model.
Optionally, the processor obtains a fourth image frame with a license plate quality meeting a preset requirement by screening from the plurality of third image frames/first image frames, and the method includes:
when the second license plate marking area/the first license plate marking area is determined to be a license plate image, calculating the definition of the license plate image, judging whether the license plate is abnormal according to the license plate image, and calculating the inclination angle of the license plate;
and screening a fourth image frame with the license plate quality meeting a preset requirement from the plurality of third image frames/first image frames, wherein the preset requirement is that the definition is not less than a first preset threshold, the inclination angle is not greater than a second preset threshold, and the license plate is not abnormal.
Optionally, the processor determines that the second license plate indicia region is a license plate image, including:
when the width-to-height ratio of the second license plate marking area is determined to be within a preset threshold range and the width of the second license plate marking area is larger than a third preset threshold, expanding the second license plate marking area;
inputting the expanded second license plate marking area into a pre-trained classification model, and judging whether the second license plate marking area is a license plate image;
and when the judgment result indicating a license plate image is output, determining that the second license plate marking area is a license plate image;
the processor determining that the first license plate indicia area is a license plate image, comprising:
when the width-to-height ratio of the first license plate marking area is determined to be within a preset threshold range and the width of the first license plate marking area is larger than a third preset threshold, expanding the first license plate marking area;
inputting the expanded first license plate marking area into a pre-trained classification model, and judging whether the first license plate marking area is a license plate image;
and when the judgment result indicating a license plate image is output, determining that the first license plate marking area is a license plate image.
Optionally, the processor calculates a tilt angle of the license plate, including:
graying and threshold segmentation are carried out on the expanded second license plate mark area/first license plate mark area to obtain a binary image;
carrying out connected domain segmentation and connected domain marking on the binary image, determining each character in the binary image, and determining the highest point and the lowest point of each character;
performing straight line fitting on the highest points and the lowest points respectively, determining a first straight line on which the highest points lie and a second straight line on which the lowest points lie, and calculating the average slope k of the slopes of the first straight line and the second straight line;
and calculating the inclination angle θ of the license plate region according to the average slope, wherein θ = atan(k).
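A sketch of the slope-fitting step, using ordinary least squares for the line fitting (the fitting method itself is not specified in the embodiment):

```python
import math

def ls_slope(points):
    """Ordinary least-squares slope of a set of (x, y) points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

def plate_tilt(top_points, bottom_points):
    """Average the slopes of the lines fitted through the characters'
    highest and lowest points, then return theta = atan(k) in radians."""
    k = (ls_slope(top_points) + ls_slope(bottom_points)) / 2
    return math.atan(k)
```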
Optionally, after determining whether the license plate is abnormal according to the license plate image, the processor is further configured to:
calculating the total number of characters in the binary image corresponding to the determined non-abnormal license plate;
and screening out fourth image frames corresponding to the license plates with the total number smaller than a fourth preset threshold value.
Optionally, the processor crops the corresponding first image frame according to the enlarged vehicle marking region, including:
and cropping off the upper part of the corresponding first image frame according to the width of the enlarged vehicle marking area and a preset aspect ratio.
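A minimal sketch of this cropping step; the exact relation between the box width, the preset aspect ratio, and the retained height is an assumption:

```python
def crop_upper_part(frame, box_width, aspect_ratio):
    """Keep the bottom region whose height is box_width / aspect_ratio
    (assumed relation) and discard the rows above it.
    `frame` is a 2-D image represented as a list of pixel rows."""
    keep_h = int(box_width / aspect_ratio)
    return frame[-keep_h:] if 0 < keep_h < len(frame) else frame
```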
An embodiment of the present invention provides a schematic diagram of a license plate recognition device, as shown in fig. 10, including:
an image frame acquiring unit 1001 configured to acquire a plurality of original image frames acquired after a vehicle reaches a preset acquisition area;
the first detection unit 1002 is configured to perform vehicle detection and license plate detection on the multiple original image frames respectively to obtain multiple first image frames, where the first image frames include a vehicle mark region for a vehicle and a first license plate mark region for a license plate;
a cropping unit 1003, configured to perform expansion processing on vehicle marking regions included in the multiple first image frames, and crop corresponding first image frames according to the expanded vehicle marking regions to obtain multiple second image frames;
the second detection unit 1004 is configured to perform license plate detection on the plurality of second image frames respectively to obtain a plurality of third image frames, where the third image frames include a second license plate mark region for a license plate;
a content determining unit 1005 configured to determine license plate content of the license plate based on the first license plate marking region and the second license plate marking region.
Optionally, after obtaining a plurality of third image frames, the content determining unit is further configured to:
judging whether a corresponding second license plate marking area exists in each of the plurality of third image frames;
screening a fourth image frame with the license plate quality meeting the preset requirement from the plurality of third image frames when the corresponding second license plate marking area is determined to exist;
when the corresponding second license plate marking area does not exist, judging whether the corresponding first license plate marking area exists or not for each first image frame in the plurality of first image frames; and screening and obtaining a fourth image frame with the license plate quality meeting the preset requirement from the plurality of first image frames when the corresponding first license plate marking area is determined to exist.
Optionally, the content determining unit determines the license plate content of the license plate based on the first license plate marking region and the second license plate marking region, including:
inputting the fourth image frame into a pre-trained license plate recognition model;
performing feature extraction on the input fourth image frame based on a first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing, based on an incidence matrix calculation layer in the license plate recognition model, the dependency among the feature vectors of the feature matrix H1 to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on an RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
predicting, based on a first classifier in the license plate recognition model, the character and probability corresponding to each time step t, sorting the characters with the maximum probability according to the time steps to obtain the license plate content, and outputting the license plate content to obtain the n license plate contents of the output license plate.
Optionally, after obtaining the output n license plate contents of the license plate, the content determining unit is further configured to:
and when the n is determined to be not smaller than a preset value, or the n is determined to be smaller than the preset value and not equal to 0, and the n license plate contents accord with a preset license plate coding rule, determining the license plate contents of the license plate according to the n license plate contents.
Optionally, the pre-training of the license plate recognition model by the content determination unit includes:
inputting image frames in pre-collected sample data into a license plate recognition model;
adjusting parameters of the license plate recognition model according to the connectionist temporal classification (CTC) loss and the cumulative cross-entropy (CCE) loss, so that the license plate recognition model outputs the license plate content corresponding to the image frame;
the sample data comprises an image frame containing a license plate region and license plate content corresponding to the image frame; the CTC loss is obtained by calculation according to the arrangement sequence of characters in the output license plate content and the arrangement sequence of characters in the license plate content corresponding to the image frame; and the CCE loss is calculated according to the occurrence frequency of the same character in the output license plate content and the occurrence frequency of the same character in the license plate content corresponding to the image frame.
Optionally, the content determination unit adjusts the parameters of the license plate recognition model according to the connectionist temporal classification (CTC) loss, comprising:
performing feature extraction on the image frame in the sample data based on the first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing, based on the incidence matrix calculation layer in the license plate recognition model, the dependency among the feature vectors of the feature matrix H1 to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on the RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
predicting, based on the first classifier in the license plate recognition model, the character and probability corresponding to each time step t, and sorting the characters with the maximum probability according to the time steps to obtain and output the first license plate content;
calculating the CTC loss according to the arrangement order of the characters in the first license plate content and the arrangement order of the characters in the license plate content corresponding to the image frame in the sample data;
and adjusting parameters of the license plate recognition model according to the CTC loss.
Optionally, the license plate recognition model further includes a second CNN feature extraction layer and a second classifier, and the content determining unit adjusts parameters of the license plate recognition model according to the cumulative cross-entropy (CCE) loss, comprising:
performing feature extraction on the feature matrix H1 based on the second CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H3;
predicting, based on the second classifier in the license plate recognition model, the character and probability corresponding to each time step t, and sorting the characters with the maximum probability according to the time steps to obtain and output the second license plate content;
calculating the cumulative cross entropy CCE loss according to the occurrence times of the same characters in the second license plate content and the occurrence times of the same characters in the license plate content corresponding to the image frame in the sample data;
and adjusting parameters of the license plate recognition model according to the CCE loss.
Optionally, the content determination unit enhancing the dependency among the feature vectors in the feature matrix H1 to obtain the dependency matrix O includes:
calculating a distance matrix characterizing the distances between different feature vectors in the feature matrix H1;
calculating an incidence matrix characterizing the incidence relations between different feature vectors in the feature matrix H1;
calculating the product of the distance matrix and the incidence matrix to obtain a correlation matrix;
and calculating the product of the correlation matrix, the feature matrix H1 and a preset weight matrix to obtain the dependency matrix O.
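A minimal sketch of the chain of products above. The patent does not fix the concrete measures, so the following are assumptions for illustration only: a Gaussian kernel over Euclidean distances for the distance matrix, a softmax-normalized dot-product similarity for the incidence matrix, and an element-wise product for the correlation matrix:

```python
import numpy as np

def dependency_matrix(H1, W, sigma=1.0):
    """Sketch of the incidence-matrix calculation layer: strengthen the
    dependencies between the T feature vectors (rows) of H1 (T x d) and
    return the dependency matrix O (T x d). W is the preset d x d
    weight matrix."""
    # Distance matrix: pairwise Euclidean distances mapped through a
    # Gaussian kernel so nearby vectors get larger weights (assumed).
    sq = np.sum(H1 ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * H1 @ H1.T
    D = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
    # Incidence matrix: row-wise softmax over dot-product similarities,
    # an attention-style choice (also assumed).
    logits = H1 @ H1.T
    logits -= logits.max(axis=1, keepdims=True)
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)
    # Correlation matrix: product of the two, taken element-wise here.
    R = D * A
    # Dependency matrix: correlation matrix times H1 times W.
    return R @ H1 @ W
```

With a single feature vector the layer reduces to H1 @ W, since both the distance and incidence matrices collapse to 1.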
Optionally, the content determining unit calculating the cumulative cross-entropy (CCE) loss according to the number of occurrences of the same character in the second license plate content and the number of occurrences of the same character in the license plate content corresponding to the image frame in the sample data includes:
calculating, for each character c, the cumulative probability of c occurring at all positions in the feature matrix H3:
o_c = Σ_{i=1}^{T} p_i(c), c ∈ C ∪ {ε},
where p_i(c) is the probability of the character c occurring at position i in the feature matrix H3, i = 1, 2, …, T, T is the total number of positions in the feature matrix H3, C is the character set of the license plate, and ε is the blank (space);
normalizing the cumulative probability o_c to obtain the normalized cumulative probability
ō_c = o_c / T;
calculating the number of occurrences L_c of the character c in the license plate content corresponding to the image frame in the sample data and normalizing L_c:
L̄_c = L_c / T;
and computing the cumulative cross-entropy (CCE) loss
L_CCE(I, S) = − Σ_{c ∈ C ∪ {ε}} L̄_c · ln ō_c,
where I is the input license plate image and S is the second license plate content.
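A compact NumPy sketch of the cumulative cross-entropy computation described above. Both the cumulative probabilities and the ground-truth occurrence counts are normalized by the number of positions T, which is an assumption where the original formulas are not fully legible:

```python
import numpy as np

def cce_loss(P, label_counts):
    """Sketch of the cumulative cross-entropy (CCE) loss. P is a T x C
    matrix of per-position character probabilities from the second
    branch; label_counts[c] is the number of occurrences L_c of
    character c in the ground-truth plate content (with the blank
    counted as T minus the plate length)."""
    T = P.shape[0]
    o = P.sum(axis=0)            # cumulative probability o_c over positions
    o_bar = o / T                # normalized cumulative probability
    l_bar = np.asarray(label_counts, dtype=float) / T
    mask = l_bar > 0             # only characters that occur contribute
    return -np.sum(l_bar[mask] * np.log(o_bar[mask]))
```

Note the loss depends only on how often each character appears, not on character order; order supervision comes from the CTC branch, which is why the two losses are combined.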
Optionally, after obtaining the output n license plate contents of the license plate, the content determining unit is further configured to:
when the n is determined to be not smaller than a preset value, or when the n is determined to be smaller than the preset value and not equal to 0 and the n license plate contents meet a preset license plate coding rule, determining a license plate state label for identifying the license plate state to be normal;
and when n is determined to be smaller than a preset value and not equal to 0, and the n license plate contents do not accord with a preset license plate coding rule, or when n is determined to be equal to 0, respectively inputting the plurality of second image frames into a pre-trained license plate state classification model, obtaining output license plate state labels for identifying the license plate state to be normal/abnormal, and determining the license plate state labels as the license plate state labels output by the license plate state classification model.
Optionally, the content determining unit screening, from the plurality of third image frames/first image frames, a fourth image frame whose license plate quality meets a preset requirement includes:
when the second license plate marking area/the first license plate marking area is determined to be a license plate image, calculating the definition of the license plate image, judging whether the license plate is abnormal according to the license plate image, and calculating the inclination angle of the license plate;
and screening a fourth image frame with the license plate quality meeting a preset requirement from the plurality of third image frames/first image frames, wherein the preset requirement is that the definition is not less than a first preset threshold, the inclination angle is not greater than a second preset threshold, and the license plate is not abnormal.
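The definition (sharpness) term in this screening step is not pinned to a particular measure in the text. The sketch below uses the common variance-of-Laplacian proxy, with illustrative threshold values standing in for the first and second preset thresholds:

```python
import numpy as np

def passes_quality(gray_plate, tilt_deg, is_abnormal,
                   min_sharpness=100.0, max_tilt_deg=15.0):
    """Sketch of the fourth-frame screening rule: a plate image passes
    when its sharpness is at least the first threshold, its tilt angle
    is at most the second threshold, and it is not abnormal. The
    sharpness measure and both default thresholds are assumptions."""
    # 3x3 Laplacian response via shifted sums (interior region only).
    g = gray_plate.astype(np.float64)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    sharpness = lap.var()   # low variance indicates a blurred plate
    return bool(sharpness >= min_sharpness
                and abs(tilt_deg) <= max_tilt_deg
                and not is_abnormal)
```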
Optionally, the content determining unit determining that the second license plate marking area is a license plate image includes:
when it is determined that the width-to-height ratio of the second license plate marking area is within a preset threshold range and the width of the second license plate marking area is larger than a third preset threshold, expanding the second license plate marking area;
inputting the expanded second license plate marking area into a pre-trained classification model, and judging whether the second license plate marking area is a license plate image;
and when the output judgment result indicates a license plate image, determining that the second license plate marking area is a license plate image.
The content determining unit determining that the first license plate marking area is a license plate image includes:
when it is determined that the width-to-height ratio of the first license plate marking area is within a preset threshold range and the width of the first license plate marking area is larger than a third preset threshold, expanding the first license plate marking area;
inputting the expanded first license plate marking area into a pre-trained classification model, and judging whether the first license plate marking area is a license plate image;
and when the output judgment result indicates a license plate image, determining that the first license plate marking area is a license plate image.
Optionally, the calculating the tilt angle of the license plate by the content determining unit includes:
graying and threshold segmentation are carried out on the expanded second license plate mark area/first license plate mark area to obtain a binary image;
carrying out connected domain segmentation and connected domain marking on the binary image, determining each character in the binary image, and determining the highest point and the lowest point of each character;
performing straight line fitting on the highest points and the lowest points respectively, determining a first straight line through the highest points and a second straight line through the lowest points, and calculating the average slope k of the slopes of the first straight line and the second straight line;
and calculating the inclination angle θ of the license plate region from the average slope, where θ = atan(k).
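Assuming the per-character extreme points have already been produced by the preceding connected-component step (the (x, top_y, bottom_y) triples below are a hypothetical interface, not the patent's), the line-fitting and angle computation can be sketched as:

```python
import numpy as np

def plate_tilt_deg(char_boxes):
    """Tilt-angle sketch: given one (x, top_y, bottom_y) triple per
    character, fit one least-squares line through the top points and
    one through the bottom points, average the two slopes to get k,
    and return theta = atan(k) in degrees."""
    xs = np.array([b[0] for b in char_boxes], dtype=float)
    tops = np.array([b[1] for b in char_boxes], dtype=float)
    bots = np.array([b[2] for b in char_boxes], dtype=float)
    k_top = np.polyfit(xs, tops, 1)[0]   # slope of the first line
    k_bot = np.polyfit(xs, bots, 1)[0]   # slope of the second line
    k = 0.5 * (k_top + k_bot)            # average slope
    return np.degrees(np.arctan(k))
```

Averaging the top and bottom lines makes the estimate robust to individual characters (such as hyphens or dots) whose extreme points sit off the common baseline.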
Optionally, after determining whether the license plate is abnormal according to the license plate image, the content determining unit is further configured to:
calculating the total number of characters in the binary image corresponding to the determined non-abnormal license plate;
and filtering out (discarding) the fourth image frames corresponding to license plates whose total character number is smaller than a fourth preset threshold.
Optionally, the cropping unit crops the corresponding first image frame according to the enlarged vehicle marking region, including:
and cutting off the upper half part of the corresponding first image frame according to the width of the enlarged vehicle mark area and the preset aspect ratio.
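One reading of this cropping rule, with the aspect ratio treated as a width-to-height value and used to derive the height of the retained lower region (both the interpretation and the default ratio are assumptions):

```python
import numpy as np

def crop_upper_part(frame, box_w, aspect=2.0):
    """Crop away the upper part of a first image frame, keeping a lower
    region whose height is derived from the enlarged vehicle-box width
    and a preset width-to-height aspect ratio."""
    h = frame.shape[0]
    keep_h = min(h, int(round(box_w / aspect)))  # never exceed the frame
    return frame[h - keep_h:]
```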
The present invention also provides a computer program medium having a computer program stored thereon, which when executed by a processor, implements the steps of the license plate recognition method provided in embodiment 1 above.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The technical solutions provided by the present application are introduced in detail, and the present application applies specific examples to explain the principles and embodiments of the present application, and the descriptions of the above examples are only used to help understand the method and the core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (18)

1. A license plate recognition method is characterized by comprising the following steps:
acquiring a plurality of original image frames acquired after a vehicle reaches a preset acquisition area;
respectively carrying out vehicle detection and license plate detection on the multiple original image frames to obtain multiple first image frames, wherein the first image frames comprise a vehicle marking area for a vehicle and a first license plate marking area for a license plate;
expanding vehicle marking areas contained in the plurality of first image frames, and cutting the corresponding first image frames according to the expanded vehicle marking areas to obtain a plurality of second image frames;
respectively carrying out license plate detection on the plurality of second image frames to obtain a plurality of third image frames, wherein the third image frames comprise second license plate marking areas aiming at license plates;
determining license plate content of the license plate based on the first license plate marking area and the second license plate marking area.
2. The method of claim 1, wherein after obtaining the plurality of third image frames, further comprising:
judging whether a corresponding second license plate marking area exists in each of the plurality of third image frames;
screening a fourth image frame with the license plate quality meeting the preset requirement from the plurality of third image frames when the corresponding second license plate marking area is determined to exist;
when the corresponding second license plate marking area does not exist, judging whether the corresponding first license plate marking area exists or not for each first image frame in the plurality of first image frames; and screening and obtaining a fourth image frame with the license plate quality meeting the preset requirement from the plurality of first image frames when the corresponding first license plate marking area is determined to exist.
3. The method of claim 2, wherein determining license plate content for the license plate based on the first license plate indicia area and the second license plate indicia area comprises:
inputting the fourth image frame into a pre-trained license plate recognition model;
performing feature extraction on the input fourth image frame based on a first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing, based on the incidence matrix calculation layer in the license plate recognition model, the dependency among the feature vectors in the feature matrix H1 to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on an RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
and predicting, based on a first classifier in the license plate recognition model, the characters and probabilities corresponding to each time step t, sorting the maximum-probability characters by time step to obtain license plate content, and outputting the license plate content to obtain n license plate contents of the license plate.
4. The method of claim 3, wherein after obtaining the outputted n license plate contents of the license plate, further comprising:
and when the n is determined to be not smaller than a preset value, or the n is determined to be smaller than the preset value and not equal to 0, and the n license plate contents accord with a preset license plate coding rule, determining the license plate contents of the license plate according to the n license plate contents.
5. The method of claim 3, wherein pre-training the license plate recognition model comprises:
inputting image frames in pre-collected sample data into a license plate recognition model;
adjusting the parameters of the license plate recognition model according to the connectionist temporal classification (CTC) loss and the cumulative cross-entropy (CCE) loss, so that the license plate recognition model outputs the license plate content corresponding to the image frame;
the sample data comprises an image frame containing a license plate region and license plate content corresponding to the image frame; the CTC loss is obtained by calculation according to the arrangement sequence of characters in the output license plate content and the arrangement sequence of characters in the license plate content corresponding to the image frame; and the CCE loss is calculated according to the occurrence frequency of the same character in the output license plate content and the occurrence frequency of the same character in the license plate content corresponding to the image frame.
6. The method of claim 5, wherein adjusting the parameters of the license plate recognition model according to the connectionist temporal classification (CTC) loss comprises:
performing feature extraction on the image frame in the sample data based on a first CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H1;
enhancing, based on the incidence matrix calculation layer in the license plate recognition model, the dependency among the feature vectors in the feature matrix H1 to obtain a dependency matrix O;
performing feature extraction on the dependency matrix O based on an RNN feature extraction layer in the license plate recognition model to obtain a feature matrix H2;
predicting, based on a first classifier in the license plate recognition model, the characters and probabilities corresponding to each time step t, and sorting the maximum-probability characters by time step to obtain and output first license plate content;
calculating the connectionist temporal classification (CTC) loss according to the arrangement order of the characters in the first license plate content and the arrangement order of the characters in the license plate content corresponding to the image frame in the sample data;
and adjusting the parameters of the license plate recognition model according to the CTC loss.
7. The method of claim 6, wherein the license plate recognition model further comprises a second CNN feature extraction layer and a second classifier, and adjusting the parameters of the license plate recognition model according to the cumulative cross-entropy (CCE) loss comprises:
performing feature extraction on the feature matrix H1 based on the second CNN feature extraction layer in the license plate recognition model to obtain a feature matrix H3;
predicting, based on the second classifier in the license plate recognition model, the characters and probabilities corresponding to each time step t, and sorting the maximum-probability characters by time step to obtain and output second license plate content;
calculating the cumulative cross-entropy (CCE) loss according to the number of occurrences of the same character in the second license plate content and the number of occurrences of the same character in the license plate content corresponding to the image frame in the sample data;
and adjusting the parameters of the license plate recognition model according to the CCE loss.
8. The method of claim 6, wherein enhancing the dependency among the feature vectors in the feature matrix H1 to obtain the dependency matrix O comprises:
calculating a distance matrix characterizing the distances between different feature vectors in the feature matrix H1;
calculating an incidence matrix characterizing the incidence relations between different feature vectors in the feature matrix H1;
calculating the product of the distance matrix and the incidence matrix to obtain a correlation matrix;
and calculating the product of the correlation matrix, the feature matrix H1 and a preset weight matrix to obtain the dependency matrix O.
9. The method of claim 7, wherein calculating the cumulative cross-entropy (CCE) loss according to the number of occurrences of the same character in the second license plate content and the number of occurrences of the same character in the license plate content corresponding to the image frame in the sample data comprises:
calculating, for each character c, the cumulative probability of c occurring at all positions in the feature matrix H3:
o_c = Σ_{i=1}^{T} p_i(c), c ∈ C ∪ {ε},
where p_i(c) is the probability of the character c occurring at position i in the feature matrix H3, i = 1, 2, …, T, T is the total number of positions in the feature matrix H3, C is the character set of the license plate, and ε is the blank (space);
normalizing the cumulative probability o_c to obtain the normalized cumulative probability
ō_c = o_c / T;
calculating the number of occurrences L_c of the character c in the license plate content corresponding to the image frame in the sample data and normalizing L_c:
L̄_c = L_c / T;
and computing the cumulative cross-entropy (CCE) loss
L_CCE(I, S) = − Σ_{c ∈ C ∪ {ε}} L̄_c · ln ō_c,
where I is the input license plate image and S is the second license plate content.
10. The method of claim 3, wherein after obtaining the outputted n license plate contents of the license plate, further comprising:
when the n is determined to be not smaller than a preset value, or when the n is determined to be smaller than the preset value and not equal to 0 and the n license plate contents meet a preset license plate coding rule, determining a license plate state label for identifying the license plate state to be normal;
and when n is determined to be smaller than a preset value and not equal to 0, and the n license plate contents do not accord with a preset license plate coding rule, or when n is determined to be equal to 0, respectively inputting the plurality of second image frames into a pre-trained license plate state classification model, obtaining output license plate state labels for identifying the license plate state to be normal/abnormal, and determining the license plate state labels as the license plate state labels output by the license plate state classification model.
11. The method according to claim 2, wherein the step of screening out a fourth image frame from the plurality of third image frames/first image frames, wherein the fourth image frame has a license plate quality meeting a preset requirement, comprises:
when the second license plate marking area/the first license plate marking area is determined to be a license plate image, calculating the definition of the license plate image, judging whether the license plate is abnormal according to the license plate image, and calculating the inclination angle of the license plate;
and screening a fourth image frame with the license plate quality meeting a preset requirement from the plurality of third image frames/first image frames, wherein the preset requirement is that the definition is not less than a first preset threshold, the inclination angle is not greater than a second preset threshold, and the license plate is not abnormal.
12. The method of claim 11, wherein determining that the second license plate marking area is a license plate image comprises:
when it is determined that the width-to-height ratio of the second license plate marking area is within a preset threshold range and the width of the second license plate marking area is larger than a third preset threshold, expanding the second license plate marking area;
inputting the expanded second license plate marking area into a pre-trained classification model, and judging whether the second license plate marking area is a license plate image;
and when the output judgment result indicates a license plate image, determining that the second license plate marking area is a license plate image;
determining that the first license plate marking area is a license plate image comprises:
when it is determined that the width-to-height ratio of the first license plate marking area is within a preset threshold range and the width of the first license plate marking area is larger than a third preset threshold, expanding the first license plate marking area;
inputting the expanded first license plate marking area into a pre-trained classification model, and judging whether the first license plate marking area is a license plate image;
and when the output judgment result indicates a license plate image, determining that the first license plate marking area is a license plate image.
13. The method of claim 12, wherein calculating the tilt angle of the license plate comprises:
graying and threshold segmentation are carried out on the expanded second license plate mark area/first license plate mark area to obtain a binary image;
carrying out connected domain segmentation and connected domain marking on the binary image, determining each character in the binary image, and determining the highest point and the lowest point of each character;
performing straight line fitting on the highest points and the lowest points respectively, determining a first straight line through the highest points and a second straight line through the lowest points, and calculating the average slope k of the slopes of the first straight line and the second straight line;
and calculating the inclination angle θ of the license plate region from the average slope, where θ = atan(k).
14. The method of claim 13, wherein after determining whether the license plate is abnormal according to the license plate image, the method further comprises:
calculating the total number of characters in the binary image corresponding to the determined non-abnormal license plate;
and filtering out (discarding) the fourth image frames corresponding to license plates whose total character number is smaller than a fourth preset threshold.
15. The method of claim 1, wherein cropping the corresponding first image frame based on the enlarged vehicle marking region comprises:
and cutting off the upper half part of the corresponding first image frame according to the width of the enlarged vehicle mark area and the preset aspect ratio.
16. A license plate recognition device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor is used for reading the program in the memory and executing the license plate recognition method of any one of claims 1-15.
17. A license plate recognition device, comprising:
the image frame acquisition unit is used for acquiring a plurality of original image frames acquired after the vehicle reaches a preset acquisition area;
the first detection unit is used for respectively carrying out vehicle detection and license plate detection on the multiple original image frames to obtain multiple first image frames, and the first image frames comprise a vehicle mark area aiming at a vehicle and a first license plate mark area aiming at a license plate;
the cutting unit is used for expanding vehicle marking areas contained in the first image frames and cutting the corresponding first image frames according to the expanded vehicle marking areas to obtain second image frames;
the second detection unit is used for respectively carrying out license plate detection on the plurality of second image frames to obtain a plurality of third image frames, and the third image frames comprise second license plate marking areas aiming at license plates;
a content determination unit configured to determine license plate content of the license plate based on the first license plate marking area and the second license plate marking area.
18. A computer program medium, having a computer program stored thereon, which, when being executed by a processor, carries out the steps of the license plate recognition method according to any one of claims 1 to 15.
CN202110799265.2A 2021-07-15 2021-07-15 License plate recognition method, device and equipment Pending CN113610770A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110799265.2A CN113610770A (en) 2021-07-15 2021-07-15 License plate recognition method, device and equipment


Publications (1)

Publication Number Publication Date
CN113610770A true CN113610770A (en) 2021-11-05

Family

ID=78304651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110799265.2A Pending CN113610770A (en) 2021-07-15 2021-07-15 License plate recognition method, device and equipment

Country Status (1)

Country Link
CN (1) CN113610770A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018112900A1 (en) * 2016-12-23 2018-06-28 深圳先进技术研究院 License plate recognition method and apparatus, and user equipment
CN110232381A (en) * 2019-06-19 2019-09-13 梧州学院 License Plate Segmentation method, apparatus, computer equipment and computer readable storage medium
CN110909692A (en) * 2019-11-27 2020-03-24 北京格灵深瞳信息技术有限公司 Abnormal license plate recognition method and device, computer storage medium and electronic equipment
CN111723800A (en) * 2020-06-22 2020-09-29 瑞安市辉煌网络科技有限公司 License plate calibration and identification method and system based on convolutional neural network and electronic equipment
CN111881741A (en) * 2020-06-22 2020-11-03 浙江大华技术股份有限公司 License plate recognition method and device, computer equipment and computer-readable storage medium
CN112115904A (en) * 2020-09-25 2020-12-22 浙江大华技术股份有限公司 License plate detection and identification method and device and computer readable storage medium
CN112580643A (en) * 2020-12-09 2021-03-30 浙江智慧视频安防创新中心有限公司 License plate recognition method and device based on deep learning and storage medium
CN112686252A (en) * 2020-12-28 2021-04-20 中国联合网络通信集团有限公司 License plate detection method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HU SR et al.: "Integrated determination of network origin-destination trip matrix and heterogeneous sensor selection and location strategy", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, vol. 17, no. 1, 20 January 2016 (2016-01-20), pages 195 - 205, XP011595332, DOI: 10.1109/TITS.2015.2473691 *
LIU Sijian: "Vehicle localization and fine-grained classification algorithm based on convolutional networks" (基于卷积网络的车辆定位与细粒度分类算法), Automation & Instrumentation (自动化与仪表), no. 07, 15 July 2018 (2018-07-15) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019296A (en) * 2022-08-04 2022-09-06 之江实验室 Cascading-based license plate detection and identification method and device
CN116453105A (en) * 2023-06-20 2023-07-18 青岛国实科技集团有限公司 Ship license plate identification method and system based on knowledge distillation deep neural network
CN116453105B (en) * 2023-06-20 2023-08-18 青岛国实科技集团有限公司 Ship license plate identification method and system based on knowledge distillation deep neural network

Similar Documents

Publication Publication Date Title
CN108509859B (en) Non-overlapping area pedestrian tracking method based on deep neural network
US11455805B2 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN108921083B (en) Illegal mobile vendor identification method based on deep learning target detection
CN112734775B (en) Image labeling, image semantic segmentation and model training methods and devices
CN110781836A (en) Human body recognition method and device, computer equipment and storage medium
CN107633226B (en) Human body motion tracking feature processing method
Roy et al. Bayesian classifier for multi-oriented video text recognition system
US12056589B2 (en) Methods and systems for accurately recognizing vehicle license plates
CN105574550A (en) Vehicle identification method and device
KR20170026222A (en) Method and device for classifying an object of an image and corresponding computer program product and computer-readable medium
CN104504395A (en) Method and system for achieving classification of pedestrians and vehicles based on neural network
CN113610770A (en) License plate recognition method, device and equipment
CN111027475A (en) Real-time traffic signal lamp identification method based on vision
CN112309126B (en) License plate detection method and device, electronic equipment and computer readable storage medium
CN107133629B (en) Picture classification method and device and mobile terminal
CN112990282B (en) Classification method and device for fine-granularity small sample images
CN112949620B (en) Scene classification method and device based on artificial intelligence and electronic equipment
CN114708555A (en) Forest fire prevention monitoring method based on data processing and electronic equipment
CN113515968A (en) Method, device, equipment and medium for detecting street abnormal event
CN116977937A (en) Pedestrian re-identification method and system
Kaljahi et al. A scene image classification technique for a ubiquitous visual surveillance system
CN109543498A (en) A kind of method for detecting lane lines based on multitask network
CN113076963B (en) Image recognition method and device and computer readable storage medium
CN115187884A (en) High-altitude parabolic identification method and device, electronic equipment and storage medium
CN116959099A (en) Abnormal behavior identification method based on space-time diagram convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination