CN111914845A - Character layering method and device in license plate and electronic equipment - Google Patents

Character layering method and device in license plate and electronic equipment

Info

Publication number
CN111914845A
Authority
CN
China
Prior art keywords
character
characters
coordinates
determining
license plate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010633195.9A
Other languages
Chinese (zh)
Inventor
刘志辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010633195.9A
Publication of CN111914845A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10: Character recognition
    • G06V30/14: Image acquisition
    • G06V30/148: Segmentation of character regions
    • G06V30/153: Segmentation of character regions using recognition of characters or words
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/62: Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625: License plates

Abstract

The application provides a method and a device for layering characters in a license plate, and an electronic device, which are used to distinguish the upper-layer characters from the lower-layer characters of a tilted license plate containing double-layer characters and to improve recognition accuracy for an uncorrected license plate. The method comprises: determining the coordinates of each endpoint in two endpoint sets of the alternative segmentation lines based on the coordinates of a plurality of character reference points detected in an image to be recognized, wherein endpoints in the same endpoint set have the same horizontal coordinate value and endpoints in different endpoint sets have different horizontal coordinate values; determining a plurality of alternative segmentation lines from the two endpoint sets, wherein the two endpoints of each alternative segmentation line belong to different endpoint sets; determining a target segmentation line for the plurality of characters according to the distances between the reference points and the alternative segmentation lines; and determining the upper-layer characters and the lower-layer characters among the plurality of characters based on the positional relationship between the target segmentation line and the reference points.

Description

Character layering method and device in license plate and electronic equipment
Technical Field
The present application relates to the field of information processing, and in particular, to a method and an apparatus for layering characters in a license plate, and an electronic device.
Background
License plate recognition technology is widely applied in vehicle management scenes such as highway toll stations, parking lots, and community entrances and exits. Automatic license plate recognition enables automated vehicle management, reduces labor costs, and improves vehicle management efficiency. License plate identification can be realized by reading a smart IC card or a bar code installed on the vehicle and matching it to the license plate number. To avoid installing additional hardware, however, license plate recognition is generally implemented by recognizing the license plate characters in a vehicle image.
In a scene where the license plate contains double-layer characters, the position of every character in the license plate needs to be identified so that the upper-layer characters can be distinguished from the lower-layer characters according to the character positions. Because captured license plate images are often tilted, the related art corrects the license plate image before recognition in order to improve the accuracy of character position determination. Since license plates are tilted at different angles in different images, the correction effect varies; if an image is over-corrected or poorly corrected, character recognition and character position determination are affected, so the license plate recognition error rate is high and recognition may even fail entirely.
Disclosure of Invention
The application provides a method and a device for layering characters in a license plate, and an electronic device, which are used to distinguish the upper-layer characters from the lower-layer characters of a tilted license plate containing double-layer characters and to improve recognition accuracy for an uncorrected license plate.
The technical scheme of the application is as follows:
according to a first aspect of an embodiment of the present application, a method for layering characters in a license plate is provided, including:
determining coordinates of each endpoint in two endpoint sets of the alternative segmentation lines based on the coordinates of a plurality of character datum points detected in the image to be recognized, wherein horizontal coordinate values of endpoints in the same endpoint set are the same, and horizontal coordinate values of endpoints in different endpoint sets are different;
determining a plurality of alternative dividing lines according to two end point sets of the alternative dividing lines, wherein the end points of the alternative dividing lines belong to different end point sets;
determining target segmentation lines of a plurality of characters according to the distance between the reference point and the alternative segmentation lines;
an upper layer character and a lower layer character of the plurality of characters are determined based on the positional relationship of the target dividing line and the reference point.
In the above embodiment, the coordinates of the character reference points in an uncorrected license plate image are used to determine a segmentation line that can separate the upper-layer and lower-layer characters of the license plate, and the upper-layer and lower-layer characters are then distinguished according to the positional relationship between the determined segmentation line and the character reference points. This effectively separates the two layers of characters, provides a character-distinguishing method for recognition of double-layer-character license plates that does not require an image correction step, and improves the accuracy of license plate recognition.
In a possible implementation manner, in the method for layering characters in a license plate provided by the present application, determining a target segmentation line of a plurality of characters according to a distance between a reference point and a candidate segmentation line includes:
determining a minimum distance value of the distance between the reference point and each alternative dividing line;
and determining the candidate parting line with the maximum minimum distance value as the target parting line.
In the above embodiment, the minimum distance between the character reference points and each alternative segmentation line is calculated, and the alternative segmentation line whose minimum distance is the largest among these values is determined as the target segmentation line. In this way, the segmentation line that best separates the upper-layer and lower-layer characters of the double-layer characters is selected from the alternative segmentation lines, improving character layering accuracy.
In a possible implementation manner, in the method for layering characters in a license plate provided by the present application, a vertical coordinate value of each end point is not greater than a maximum vertical coordinate value in coordinates of the reference point and is not less than a minimum vertical coordinate value in coordinates of the reference point.
In the above embodiment, the vertical coordinate value of the end point of the alternative dividing line is not greater than the maximum vertical coordinate value of the plurality of character reference points and is not less than the minimum vertical coordinate value, so that the alternative dividing line is distributed in the region where the character is located, the number of the alternative dividing lines is reduced, and the time for determining the target dividing line can be shortened.
In a possible implementation manner, in the method for layering characters in a license plate provided by the present application, after the endpoints in each endpoint set are sorted according to the vertical coordinate value, the distance between two adjacent endpoints is equal to a preset interval value.
In the above embodiment, the number of endpoints in the endpoint set may be controlled by presetting the interval value, so as to control the number of the alternative dividing lines.
In a possible implementation manner, in the method for layering characters in a license plate provided by the present application, a horizontal coordinate value of an endpoint in one endpoint set of the two endpoint sets is a minimum horizontal coordinate value in coordinates of the reference point, and a horizontal coordinate value of an endpoint in the other endpoint set is a maximum horizontal coordinate value in coordinates of the reference point.
In the above embodiment, the two endpoint values of each alternative segmentation line are respectively the minimum horizontal coordinate value and the maximum horizontal coordinate value of the coordinates of the multiple character reference points, so that the determined target segmentation line can effectively distinguish all characters, and the character layering accuracy is improved.
In a possible implementation manner, in the method for layering characters in a license plate provided by the present application, after determining an upper layer character and a lower layer character in a plurality of characters based on a position relationship between a target segmentation line and a reference point, the method further includes:
sorting the upper layer characters according to the horizontal coordinate values of the upper layer character datum points; sequencing the lower layer characters according to the horizontal coordinate value of the lower layer character datum point;
and splicing the sorted lower layer characters to the sorted upper layer characters to obtain the character sorting of the image to be recognized.
In the above embodiment, after the upper-layer characters and lower-layer characters in the image to be recognized are determined, the characters of each layer are sorted separately, and the sorted results of the upper-layer and lower-layer characters are spliced together to obtain the character ordering of the image to be recognized, realizing character ordering for a double-layer-character license plate without correction processing.
In a possible implementation manner, in the method for layering characters in a license plate provided by the present application, before determining coordinates of each endpoint in two endpoint sets of a candidate segmentation line based on coordinates of a plurality of character reference points detected in an image to be recognized, the method further includes:
determining the probability value of each character center point in each position of the feature map corresponding to the image to be recognized by using the trained character detection network;
performing maximum pooling operation on a first matrix formed by the maximum probability values of all the positions to obtain a second matrix;
determining a target position, wherein the probability value of the target position in the first matrix is the same as the probability value in the second matrix;
and determining the coordinates of the target position as the coordinates of the character reference points contained in the image to be recognized.
In the above embodiment, the character detection network outputs, for each position in the feature map, the probability that the position is the center point of each character. The second matrix, obtained by max-pooling the first matrix composed of the maximum probability value at each position, is matched against the first matrix, and the positions whose probability values are the same in both matrices are determined as the target positions of the character reference points contained in the image to be recognized. The coordinates of the recognized character center points are used as the character reference point coordinates for character layering, so character layering can be performed on the plurality of character reference points determined by a target detection method in a license plate recognition scene.
In a possible implementation manner, in the method for layering characters in a license plate provided by the present application, before determining coordinates of each endpoint in two endpoint sets of a candidate segmentation line based on coordinates of a plurality of character reference points detected in an image to be recognized, the method further includes:
determining the vertex coordinates of a detection box containing characters of an image to be recognized by using a trained character recognition network;
and using the designated vertex coordinates in the determined vertex coordinates of the detection frame or the center coordinates of the detection frame calculated based on the determined vertex coordinates of the detection frame as the coordinates of the contained character reference points.
In the above embodiment, a designated vertex of the detection box containing a character, determined by the character recognition network, or the calculated center of the detection box is used as the coordinates of the character reference point, so character layering can be performed on the plurality of character reference points determined by a detection-box-based method in a license plate recognition scene.
According to a second aspect of the embodiments of the present application, there is provided a device for layering characters in a license plate, the device including:
the set determining unit is used for determining the coordinates of each endpoint in two endpoint sets of the alternative segmentation lines based on the coordinates of a plurality of character reference points detected in the image to be recognized, wherein the horizontal coordinate values of the endpoints in the same endpoint set are the same, and the horizontal coordinate values of the endpoints in different endpoint sets are different;
the alternative dividing line determining unit is used for determining a plurality of alternative dividing lines according to two end point sets of the alternative dividing lines, wherein the end points of the alternative dividing lines belong to different end point sets;
the processing unit is used for determining target segmentation lines of a plurality of characters according to the distance between the reference point and the alternative segmentation lines;
and the character layering unit is used for determining an upper layer character and a lower layer character in the plurality of characters based on the position relation between the target dividing line and the reference point.
In a possible implementation manner, in the device for layering characters in a license plate provided by the present application, the processing unit is specifically configured to:
determining a minimum distance value of the distance between the reference point and each alternative dividing line;
and determining the candidate parting line with the maximum minimum distance value as the target parting line.
In a possible implementation manner, in the device for layering characters in a license plate provided by the present application, a vertical coordinate value of each end point is not greater than a maximum vertical coordinate value in coordinates of the reference point and is not less than a minimum vertical coordinate value in coordinates of the reference point.
In a possible implementation manner, in the device for layering characters in a license plate provided by the present application, after the endpoints in each endpoint set are sorted according to the vertical coordinate value, the distance between two adjacent endpoints is equal to a preset interval value.
In a possible implementation manner, in the device for layering characters in a license plate provided by the present application, a horizontal coordinate value of an endpoint in one endpoint set of the two endpoint sets is a minimum horizontal coordinate value in coordinates of the reference point, and a horizontal coordinate value of an endpoint in the other endpoint set is a maximum horizontal coordinate value in coordinates of the reference point.
In a possible implementation manner, the device for layering characters in a license plate provided by the present application further includes a character sorting unit, configured to:
after determining an upper layer character and a lower layer character in the plurality of characters based on the position relation between the target dividing line and the reference point, sorting the upper layer character according to a horizontal coordinate value of the reference point of the upper layer character; sequencing the lower layer characters according to the horizontal coordinate value of the lower layer character datum point;
and splicing the sorted lower layer characters to the sorted upper layer characters to obtain the character sorting of the image to be recognized.
In a possible implementation manner, the device for layering characters in a license plate provided by the present application further includes a first coordinate determining unit, configured to:
before determining the coordinates of each endpoint in two endpoint sets of the alternative segmentation lines based on the coordinates of a plurality of character datum points detected in the image to be recognized, determining the probability value of each character center point in each position in the feature map corresponding to the image to be recognized by using a trained character detection network;
performing maximum pooling operation on a first matrix formed by the maximum probability values of all the positions to obtain a second matrix;
determining a target position, wherein the probability value of the target position in the first matrix is the same as the probability value in the second matrix;
and determining the coordinates of the target position as the coordinates of the character reference points contained in the image to be recognized.
In a possible implementation manner, in the device for layering characters in a license plate provided by the present application, the second coordinate determination unit is configured to:
before determining the coordinates of each endpoint in two endpoint sets of the alternative segmentation lines based on the coordinates of a plurality of character datum points detected in the image to be recognized, determining the vertex coordinates of a detection frame containing characters in the image to be recognized by using a trained character recognition network;
and using the designated vertex coordinates in the determined vertex coordinates of the detection frame or the center coordinates of the detection frame calculated based on the determined vertex coordinates of the detection frame as the coordinates of the contained character reference points.
According to a third aspect of embodiments of the present application, there is provided an electronic apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method for layering characters in a license plate of any one of the first aspect.
According to a fourth aspect of embodiments of the present application, there is provided a storage medium, where instructions executed by a processor of an electronic device enable the electronic device to perform the method for layering characters in a license plate of any one of the first aspect.
In addition, for the technical effects of any implementation manner of the second to fourth aspects, reference may be made to the technical effects of the corresponding implementation manners of the first aspect, and details are not repeated here. On the basis of common knowledge in the field, the above preferred features can be combined arbitrarily to obtain the preferred embodiments of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application and are not to be construed as limiting the application.
FIG. 1 is a schematic flow chart diagram illustrating a method for layering characters in a license plate according to an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a partition line endpoint and a partition line in accordance with an illustrative embodiment;
FIG. 3 is a schematic flow chart diagram illustrating a license plate recognition method in accordance with an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a license plate image feature map in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating an exemplary arrangement of a character layering apparatus in a license plate;
FIG. 6 is a block diagram illustrating the structure of an electronic device in accordance with an exemplary embodiment;
fig. 7 is a block diagram illustrating another electronic device according to an example embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application scenario described in the embodiment of the present application is for more clearly illustrating the technical solution of the embodiment of the present application, and does not form a limitation on the technical solution provided in the embodiment of the present application, and it can be known by a person skilled in the art that with the occurrence of a new application scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems. In the description of the present application, the term "plurality" means two or more unless otherwise specified.
To address the problems in the related art that over-correction or poor correction of a tilted license plate image leads to a high license plate recognition error rate or even makes recognition impossible, the application provides a method for layering characters in a license plate. The method determines a target segmentation line capable of separating the upper-layer characters from the lower-layer characters by detecting only the coordinates of the character reference points and constructing segmentation lines with different slopes. The character reference points on the two sides of the determined target segmentation line are then identified according to their positional relationship with the line, thereby distinguishing the upper-layer characters from the lower-layer characters.
The method for layering characters in the license plate can be applied to a license plate recognition method without an image correction process and can also be applied to a license plate recognition method without a character segmentation process. The method for layering characters in the license plate only utilizes the character reference points, but not the accurate positions of the characters in the license plate, so that the method can be applied to the license plate recognition process comprising a simple character detection model.
In an actual application scenario, the coordinates of the character reference point in the method for layering characters in a license plate provided by the present application may be coordinates of a center point of a character in a license plate image or coordinates of a designated point of the character (for example, a point at a lower left corner, a point at an upper left corner, a point at a lower right corner, or a point at an upper right corner). If a target detection method based on deep learning is adopted for character recognition in the license plate recognition process, the coordinates of the character reference points can also be the coordinates of the center points of characters in a feature map of a license plate image determined by a target detection model. If a target detection algorithm is adopted for character detection, the coordinates of the appointed vertex of the character detection box or the coordinates of the central point of the detection box can be used as the coordinates of the character reference point in the license plate character layering method provided by the application.
Fig. 1 is a schematic flowchart illustrating a method for layering characters in a license plate according to an exemplary embodiment, where as shown in fig. 1, the method for layering characters in a license plate includes the following steps:
step S101, determining coordinates of each endpoint in two endpoint sets of the alternative segmentation lines based on the coordinates of a plurality of character reference points detected in the image to be recognized, wherein horizontal coordinate values of endpoints in the same endpoint set are the same, and horizontal coordinate values of endpoints in different endpoint sets are different.
In specific implementation, a plurality of characters, which together form the license plate number, are detected from the image to be recognized. Two endpoint sets of the alternative segmentation lines are determined using the coordinates of the plurality of character reference points detected in the image to be recognized. One endpoint is selected from each of the two endpoint sets; because the horizontal coordinate values of endpoints in different endpoint sets differ, a plurality of alternative segmentation lines can be determined.
In an actual application scenario, the horizontal coordinate value and the vertical coordinate value of the end points in the two end point sets may be determined according to the coordinates of the plurality of character reference points.
The horizontal coordinate value of the end point in one end point set of the two end point sets is the minimum horizontal coordinate value in the coordinates of the reference point, and the horizontal coordinate value of the end point in the other end point set is the maximum horizontal coordinate value in the coordinates of the reference point.
In specific implementation, assuming the number of characters is n and the coordinates of the character reference points are (x_1, y_1), (x_2, y_2), …, (x_n, y_n), the minimum horizontal coordinate x_min = min(x_1, x_2, …, x_n) and the maximum horizontal coordinate x_max = max(x_1, x_2, …, x_n) of all the reference points can be determined from the coordinates of the plurality of character reference points. The horizontal coordinate value of the endpoints in one of the two endpoint sets is the minimum horizontal coordinate value x_min of all the reference point coordinates, and the horizontal coordinate value of the endpoints in the other endpoint set is the maximum horizontal coordinate value x_max of all the reference point coordinates.
To speed up processing, the number of alternative segmentation lines may be limited by requiring the vertical coordinate value of each endpoint to be no greater than the maximum vertical coordinate value among the reference point coordinates and no less than the minimum vertical coordinate value among the reference point coordinates.
In specific implementation, the minimum vertical coordinate y_min = min(y_1, y_2, …, y_n) and the maximum vertical coordinate y_max = max(y_1, y_2, …, y_n) of all the reference points are determined. When the vertical coordinate values of the endpoints in the two endpoint sets are no greater than y_max and no less than y_min, all the alternative segmentation lines lie within the region where the characters are located.
In practical application scenarios, to reduce character layering errors, the maximum vertical coordinate of the endpoints in the two endpoint sets may be set to y_max and the minimum vertical coordinate to y_min.
Further, to reduce the number of alternative segmentation lines and the amount of computation, the endpoints of the two endpoint sets can be taken at equal intervals: after the endpoints in each endpoint set are sorted by vertical coordinate value, the distance between two adjacent endpoints equals a preset interval value.
In practice, a plurality of vertical coordinate values are selected at equal intervals, according to the preset interval value, between the minimum vertical coordinate y_min and the maximum vertical coordinate y_max; the number of endpoints in each endpoint set, and therefore the number of alternative segmentation lines, is controlled by the chosen interval value. It should be noted that the preset interval value (or the number of endpoints) may be set according to the actual application scenario, and the preset interval values of the two endpoint sets may be the same or different.
In one possible embodiment, as shown in FIG. 2, the black dots are character reference points. When determining the two endpoint sets of the alternative segmentation lines, the minimum horizontal coordinate x_min, the maximum horizontal coordinate x_max, the minimum vertical coordinate y_min, and the maximum vertical coordinate y_max of the character reference point coordinates are determined first, and four points are obtained: D1(x_min, y_min), D2(x_min, y_max), D3(x_max, y_min), and D4(x_max, y_max). For convenience of description, the endpoint set whose endpoints have horizontal coordinate x_min is denoted as the first endpoint set, and the endpoint set whose endpoints have horizontal coordinate x_max is denoted as the second endpoint set. In FIG. 2, D1 and D2 belong to the first endpoint set, whose endpoints are marked with white triangles, while D3 and D4 belong to the second endpoint set, whose endpoints are marked with white rectangles. The other endpoints of the first endpoint set are selected at the preset interval value between D1 and D2, and the other endpoints of the second endpoint set are selected at the preset interval value between D3 and D4.
By setting the horizontal and vertical coordinate values of the endpoints in the two endpoint sets and choosing the preset interval value, the number of endpoints in each set, and therefore the number of alternative segmentation lines, can be controlled, and all the alternative segmentation lines can be kept within the region where the characters are located.
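To make the construction above concrete, the following is a minimal Python sketch of how the two endpoint sets could be built from the reference-point coordinates. It is not taken from the patent; the function names, the numpy dependency, and the default interval value are assumptions.

```python
import numpy as np

def build_endpoint_sets(ref_points, interval=2.0):
    """Build the two endpoint sets of the alternative segmentation lines.

    ref_points: (n, 2) array of character reference point coordinates (x, y).
    interval:   preset spacing between adjacent endpoints in each set (assumed value).
    """
    pts = np.asarray(ref_points, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)

    # Vertical coordinates sampled at equal intervals between y_min and y_max,
    # so every alternative segmentation line stays inside the character region.
    ys = np.arange(y_min, y_max + 1e-9, interval)
    if ys[-1] < y_max:                       # make sure y_max itself is included
        ys = np.append(ys, y_max)

    first_set = [(x_min, y) for y in ys]     # endpoints sharing x = x_min
    second_set = [(x_max, y) for y in ys]    # endpoints sharing x = x_max
    return first_set, second_set
```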
Step S102, according to two end point sets of the alternative dividing lines, a plurality of alternative dividing lines are determined, wherein the end points of the alternative dividing lines belong to different end point sets.
In specific implementation, one endpoint is selected from each of the two endpoint sets, and the two selected endpoints are connected to generate an alternative segmentation line. If each of the two endpoint sets contains m endpoints, at most m × m alternative segmentation lines can be formed. Alternative segmentation lines L1, L2, and L3 are shown in FIG. 2.
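Continuing the hypothetical sketch above, the alternative segmentation lines can be enumerated by pairing every endpoint of the first set with every endpoint of the second set; expressing each line by the coefficients (A, B, C) of Ax + By + C = 0 simplifies the later distance computation. This is an illustrative sketch, not the patent's implementation.

```python
from itertools import product

def candidate_lines(first_set, second_set):
    """Pair each endpoint of one set with each endpoint of the other, giving at most
    len(first_set) * len(second_set) alternative segmentation lines, each stored as
    coefficients (A, B, C) of the line equation A*x + B*y + C = 0."""
    lines = []
    for (x1, y1), (x2, y2) in product(first_set, second_set):
        # Line through two points: A = y2 - y1, B = x1 - x2, C = x2*y1 - x1*y2
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        lines.append((a, b, c))
    return lines
```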
Step S103, according to the distance between the reference point and the alternative dividing line, the target dividing line of a plurality of characters is determined.
In specific implementation, the target segmentation line capable of distinguishing the upper layer character from the lower layer character is determined according to the calculated distance between all the character reference points and each alternative segmentation line.
For example, the minimum distance value of the distance between the reference point and each candidate dividing line is determined, and the candidate dividing line having the largest minimum distance value is determined as the target dividing line.
In specific implementation, for each alternative segmentation line L with line equation Ax + By + C = 0, the distance between every character reference point (x_i, y_i), 0 < i ≤ n (i an integer), and the line can be calculated as

d_i = |A·x_i + B·y_i + C| / √(A² + B²),

and the minimum of the distances d_1, d_2, …, d_n between all the character reference points and the alternative segmentation line is denoted dmin.
Traversing all the alternative segmentation lines L_j, 0 < j ≤ m (j an integer), in this way gives the dmin value of each alternative segmentation line, and the set of minimum distances between all the character reference points and each alternative segmentation line can be recorded as {dmin_1, dmin_2, …, dmin_m}. The alternative segmentation line L_k corresponding to the largest value in this set, max(dmin_1, dmin_2, …, dmin_m), is determined as the target segmentation line. Because its shortest distance to all the reference points is the largest, the target segmentation line is the one among all the alternative segmentation lines that best separates the upper-layer characters from the lower-layer characters.
In a practical application scenario, a variable interval with an initial value of 0 can be maintained. For each alternative segmentation line L_j, the minimum value dmin of the distances {d_1, d_2, …, d_n} between all the character reference points (x_i, y_i) and L_j is computed; if dmin is greater than interval, the value of interval is updated to dmin and the corresponding alternative segmentation line is recorded. After all the alternative segmentation lines have been traversed, the value of interval is the largest value in the set of minimum distances between the character reference points and the alternative segmentation lines, and the recorded alternative segmentation line is determined as the target segmentation line.
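The max-min selection rule described above might be implemented as in the following sketch, which reuses the hypothetical (A, B, C) line representation; the variable names mirror the description but are otherwise assumptions.

```python
import numpy as np

def select_target_line(ref_points, lines):
    """Pick the alternative segmentation line whose minimum point-to-line distance
    over all reference points is the largest (the max-min rule described above)."""
    pts = np.asarray(ref_points, dtype=float)
    best_line, interval = None, 0.0          # 'interval' as in the description
    for a, b, c in lines:
        d = np.abs(a * pts[:, 0] + b * pts[:, 1] + c) / np.hypot(a, b)
        dmin = d.min()
        if dmin > interval:
            interval, best_line = dmin, (a, b, c)
    return best_line
```

Because the two endpoint sets have different horizontal coordinates, the coefficient B = x1 - x2 is never zero for these lines, so the denominator cannot vanish.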
Step S104, based on the position relation between the target dividing line and the reference point, determining an upper layer character and a lower layer character in the plurality of characters.
In specific implementation, the upper-layer and lower-layer characters among the plurality of characters can be determined according to the positional relationship between the target segmentation line and the reference points. Assume that alternative segmentation line L2 in FIG. 2 is the target segmentation line. From the coordinates of its two endpoints, the line equation of L2 can be determined, for example Ax + By + C = 0 or y = kx + b. If the y value obtained by substituting the horizontal coordinate of a reference point into the line equation of L2 is greater than the vertical coordinate of that reference point, the reference point belongs to an upper-layer character; conversely, if the y value is less than the vertical coordinate of the reference point, it belongs to a lower-layer character.
Similarly, the vertical coordinate of each reference point can be substituted into the line equation of L2 to obtain an x value; if this x value is smaller than the horizontal coordinate of the reference point, the point may belong to an upper-layer character, whereas if it is larger, the point may belong to a lower-layer character.
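A sketch of the layering decision itself, following the rule just described for image coordinates (the y value of the line lying above a reference point's y means the point belongs to the upper layer); the helper name and the tuple representation are assumptions.

```python
def split_layers(ref_points, target_line):
    """Separate reference points into upper-layer and lower-layer characters by
    their position relative to the target line A*x + B*y + C = 0. Image
    coordinates are assumed, so a point lying above the line is an upper-layer
    character, as in the description."""
    a, b, c = target_line
    upper, lower = [], []
    for x, y in ref_points:
        y_on_line = -(a * x + c) / b         # y of the line at this x (b != 0 here)
        (upper if y_on_line > y else lower).append((x, y))
    return upper, lower
```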
After the upper-layer and lower-layer characters have been distinguished, the arrangement order of the characters still needs to be determined in a license plate recognition scene in order to obtain the license plate number. In one possible implementation, the upper-layer characters are sorted according to the horizontal coordinate values of the upper-layer character reference points, and the lower-layer characters are sorted according to the horizontal coordinate values of the lower-layer character reference points;
and splicing the sorted lower layer characters to the sorted upper layer characters to obtain the character sorting of the image to be recognized.
In specific implementation, the characters of each layer can be sorted in ascending order of the horizontal coordinate values of their reference points to obtain the order of the characters within each layer. The sorting result of the lower-layer characters is then appended to the sorting result of the upper-layer characters to obtain the character ordering of the image to be recognized; in a license plate recognition scene, this yields the license plate number.
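The per-layer sorting and splicing can be sketched as follows (names assumed); the returned list holds the reference points in reading order, upper layer first.

```python
def order_characters(upper, lower):
    """Sort each layer from left to right by horizontal coordinate and append the
    sorted lower layer after the sorted upper layer."""
    upper_sorted = sorted(upper, key=lambda p: p[0])
    lower_sorted = sorted(lower, key=lambda p: p[0])
    return upper_sorted + lower_sorted
```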
Fig. 3 is a schematic flow chart of a license plate recognition method to which the character layering method in a license plate provided by the present application is applied according to an exemplary embodiment, and as shown in fig. 3, the license plate recognition method includes the following steps:
in step S301, a vehicle region in the image to be recognized is determined by the vehicle detection model.
In specific implementation, the vehicle detection model can be implemented with a general object detection algorithm, such as the third version of the You Only Look Once algorithm (YOLOv3) or the Single Shot MultiBox Detector (SSD). The training samples used to train the vehicle detection model may be labeled image samples containing vehicles. The image to be recognized is input into the trained vehicle detection model to obtain the vehicle region in the image to be recognized.
Step S302, according to the vehicle area, determining a vehicle image.
In specific implementation, the image containing the vehicle is cut out from the image to be identified according to the detected vehicle area.
Step S303, determining a license plate region and a license plate type in the vehicle image by using the license plate detection model.
In a practical application scenario, the license plate types include single-layer characters and double-layer characters. The license plate detection model can adopt the same detection algorithm as the vehicle detection model. The training samples used to train the license plate detection model can be labeled samples containing single-layer license plate images and samples containing double-layer license plate images. The determined vehicle image is input into the license plate detection model to obtain the license plate region in the vehicle image.
And step S304, determining a license plate image according to the license plate area.
In specific implementation, the license plate image is cropped from the vehicle image according to the detected license plate region. To avoid incomplete license plate images caused by an inaccurate license plate detection box, the detected region can be expanded by a certain ratio before cropping.
Step S305, determining a feature map of the license plate image and probability values of all characters at all positions in the feature map by using a character detection network.
In specific implementation, the character detection network may be a convolutional neural network with a down-sampling factor of 4, and the number of channels of its last convolutional layer equals the number of character classes C to be classified. For a license plate image with input resolution W × H, the resulting feature map has size W/4 × H/4 × C, that is, the feature map is a three-dimensional tensor. When detecting the characters in the license plate image with this character detection approach, the exact positions and sizes of the character targets do not need to be detected; only the relative position of each character's center point in the feature map needs to be determined.
In the process of training the character detection network, the label corresponding to an input license plate image has the same size as the feature map; the position in the label onto which the true center point of a character maps is set to 1, and all other positions are set to 0.
The license plate image is input into the character detection network to obtain the feature map of the license plate image and, for each position of the feature map, the probability value of each character.
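The patent does not specify the network architecture beyond a down-sampling factor of 4 and C output channels, so the following PyTorch sketch is only an assumed illustration of a network producing an output of that shape; every layer choice is a placeholder.

```python
import torch
import torch.nn as nn

class CharCenterNet(nn.Module):
    """Toy heatmap network: two stride-2 stages give an overall stride of 4, and
    the last 1x1 convolution has C channels, one per character class."""
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):                                    # x: (N, 3, H, W)
        return torch.sigmoid(self.head(self.backbone(x)))    # (N, C, H/4, W/4)
```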
Step S306, the character category and the reference point coordinates of the character are determined.
In practical application scenarios, the trained character detection network may be used to determine, for each position in the feature map corresponding to the image to be recognized, the probability that the position is the center point of each character. A maximum pooling operation is performed on the first matrix formed by the maximum probability values of the positions to obtain the second matrix. Target positions, whose probability values in the first matrix are the same as in the second matrix, are then determined, and the coordinates of the target positions are taken as the coordinates of the character reference points contained in the license plate image.
The maximum of the feature map along the category dimension is taken to obtain a two-dimensional tensor; that is, for each position of the feature map, the largest probability value across the categories is selected, and the resulting two-dimensional tensor is recorded as the first matrix. At the same time, the character class corresponding to the maximum probability value at each position is recorded. A higher probability value at a position indicates a higher likelihood that the position is the center point of a character. FIG. 4 shows a license plate image and the two-dimensional tensor 402 obtained by taking the maximum of the corresponding feature map 401 along the category dimension. To suppress the points around a peak point (a position with a large probability value), a maximum pooling operation, for example one with a stride of 1 and a 3 × 3 kernel, may be applied to the first matrix to obtain the second matrix 403. The first matrix is then matched against the second matrix, and the positions whose probability value in the first matrix equals the probability value at the same position in the second matrix are determined as the target positions.
The coordinates of the target positions are determined as the coordinates of the character reference points contained in the license plate image, and the character class of each target position is determined from the recorded character class corresponding to the maximum probability value at that position.
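The first-matrix/second-matrix matching described above can be sketched with a max-pooling operation of stride 1 and a 3 × 3 kernel, as in the following PyTorch snippet; the score threshold is an added assumption that the description does not mention.

```python
import torch
import torch.nn.functional as F

def detect_centers(heatmap, score_thresh=0.3):
    """heatmap: (C, H, W) tensor of per-class center-point probabilities.
    Returns (x, y, class_id) tuples for the detected character reference points."""
    scores, classes = heatmap.max(dim=0)                  # first matrix + class map
    pooled = F.max_pool2d(scores[None, None], 3, stride=1, padding=1)[0, 0]  # second matrix
    keep = (scores == pooled) & (scores > score_thresh)   # equal to the local maximum
    ys, xs = torch.nonzero(keep, as_tuple=True)
    return [(int(x), int(y), int(classes[y, x])) for x, y in zip(xs, ys)]
```

Keeping only the positions whose value equals their local 3 × 3 maximum suppresses the points surrounding each peak, which is the role the second matrix plays in the description.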
Step S307, judging whether the license plate type is a double-layer character, if so, executing step S308, otherwise, executing step S309.
In specific implementation, whether the license plate of the vehicle in the image to be recognized contains single-layer or double-layer characters is determined according to the license plate type detected in step S303. If the license plate type is double-layer characters, step S308 is executed next. If the license plate type is single-layer characters, step S309 is executed next.
Step S308, performing character layering on the determined characters.
In specific implementation, the coordinates of each endpoint in two endpoint sets of the alternative segmentation lines are determined based on the coordinates of a plurality of character reference points detected in the image to be recognized, wherein the horizontal coordinate values of the endpoints in the same endpoint set are the same, and the horizontal coordinate values of the endpoints in different endpoint sets are different. And determining a plurality of alternative dividing lines according to the two end point sets of the alternative dividing lines, wherein the end points of the alternative dividing lines belong to different end point sets. And determining target segmentation lines of the characters according to the distance between the reference point and the alternative segmentation lines. An upper layer character and a lower layer character of the plurality of characters are determined based on the positional relationship of the target dividing line and the reference point. The specific implementation process of the step can refer to the method for layering characters in the license plate in the embodiment.
Step S309, determining character sequencing of the single-layer license plate according to the determined reference point coordinates of the characters.
In specific implementation, the determined characters are sorted according to the horizontal coordinate values of their reference points to obtain the character ordering of the image to be recognized.
And S310, determining character sequencing of the double-layer license plate according to the reference point coordinates of the upper-layer characters and the reference point coordinates of the lower-layer characters.
In specific implementation, the upper layer characters are sequenced according to the horizontal coordinate values of the upper layer character reference points, and the lower layer characters are sequenced according to the horizontal coordinate values of the lower layer character reference points. And splicing the sorted lower layer characters to the sorted upper layer characters to obtain the character sorting of the image to be recognized.
In the above embodiments of the present application, the position (coordinate) of the character in the feature map is determined, and character layering is performed on the double-layer character, so as to distinguish the upper-layer character from the lower-layer character. And reordering the characters according to the classification of the upper layer characters and the lower layer characters to obtain the license plate number.
In an actual application scenario, the trained character recognition network may also be used to determine the positions (coordinates) of the detection boxes containing characters in the license plate image. The coordinates of a designated vertex of a detection box (for example, the lower-left, upper-left, lower-right, or upper-right corner) are used as the coordinates of the reference point of the character contained in that box. Alternatively, the coordinates of the center point of the detection box may be calculated from its vertex coordinates and used as the coordinates of the character reference point contained in the box.
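For the detection-box embodiment, deriving a reference point from a box can be as simple as the following sketch; the (x1, y1, x2, y2) box format, the corner keywords, and the function name are assumptions.

```python
def box_reference_point(box, mode="center"):
    """box: (x1, y1, x2, y2) vertices of an axis-aligned character detection box.
    Returns the reference point used for layering: the box center or a
    designated corner."""
    x1, y1, x2, y2 = box
    if mode == "center":
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    corners = {"top_left": (x1, y1), "top_right": (x2, y1),
               "bottom_left": (x1, y2), "bottom_right": (x2, y2)}
    return corners[mode]
```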
Fig. 5 is a schematic structural diagram illustrating a device for layering characters in a license plate according to an exemplary embodiment, where the device for layering characters in a license plate includes:
a set determining unit 501, configured to determine coordinates of each endpoint in two endpoint sets of the candidate segmentation lines based on coordinates of the multiple character reference points detected in the image to be recognized, where horizontal coordinate values of endpoints in the same endpoint set are the same, and horizontal coordinate values of endpoints in different endpoint sets are different;
a candidate dividing line determining unit 502, configured to determine a plurality of candidate dividing lines according to two endpoint sets of the candidate dividing lines, where endpoints of the candidate dividing lines belong to different endpoint sets;
a processing unit 503, configured to determine a target segmentation line of the plurality of characters according to a distance between the reference point and the candidate segmentation line;
a character layering unit 504 for determining an upper layer character and a lower layer character of the plurality of characters based on the positional relationship of the target dividing line and the reference point.
In a possible implementation manner, in the device for layering characters in a license plate provided by the present application, the processing unit 503 is specifically configured to:
determining a minimum distance value of the distance between the reference point and each alternative dividing line;
and determining the candidate parting line with the maximum minimum distance value as the target parting line.
In a possible implementation manner, in the device for layering characters in a license plate provided by the present application, a vertical coordinate value of each end point is not greater than a maximum vertical coordinate value in coordinates of the reference point and is not less than a minimum vertical coordinate value in coordinates of the reference point.
In a possible implementation manner, in the device for layering characters in a license plate provided by the present application, after the endpoints in each endpoint set are sorted according to the vertical coordinate value, the distance between two adjacent endpoints is equal to a preset interval value.
In a possible implementation manner, in the device for layering characters in a license plate provided by the present application, a horizontal coordinate value of an endpoint in one endpoint set of the two endpoint sets is a minimum horizontal coordinate value in coordinates of the reference point, and a horizontal coordinate value of an endpoint in the other endpoint set is a maximum horizontal coordinate value in coordinates of the reference point.
In a possible implementation manner, the device for layering characters in a license plate provided by the present application further includes a character sorting unit 505, configured to:
after determining an upper layer character and a lower layer character in the plurality of characters based on the position relation between the target dividing line and the reference point, sorting the upper layer character according to a horizontal coordinate value of the reference point of the upper layer character; sequencing the lower layer characters according to the horizontal coordinate value of the lower layer character datum point;
and splicing the sorted lower layer characters to the sorted upper layer characters to obtain the character sorting of the image to be recognized.
In a possible implementation manner, the device for layering characters in a license plate provided by the present application further includes a first coordinate determining unit 506, configured to:
before determining the coordinates of each endpoint in two endpoint sets of the alternative segmentation lines based on the coordinates of a plurality of character datum points detected in the image to be recognized, determining the probability value of each character center point in each position in the feature map corresponding to the image to be recognized by using a trained character detection network;
performing maximum pooling operation on a first matrix formed by the maximum probability values of all the positions to obtain a second matrix;
determining a target position, wherein the probability value of the target position in the first matrix is the same as the probability value in the second matrix;
and determining the coordinates of the target position as the coordinates of the character reference points contained in the image to be recognized.
In a possible implementation manner, the device for layering characters in a license plate provided by the present application further includes a second coordinate determining unit 507, configured to:
before determining the coordinates of each endpoint in two endpoint sets of the alternative segmentation lines based on the coordinates of a plurality of character datum points detected in the image to be recognized, determining the vertex coordinates of a detection frame containing characters in the image to be recognized by using a trained character recognition network;
and using the designated vertex coordinates in the determined vertex coordinates of the detection frame or the center coordinates of the detection frame calculated based on the determined vertex coordinates of the detection frame as the coordinates of the contained character reference points.
In addition, the method and the apparatus for layering characters in a license plate provided in the embodiments of the present application described with reference to fig. 1, 2, and 5 may be implemented by an electronic device. Fig. 6 is a schematic structural diagram of an electronic device 600 according to an exemplary embodiment, and as shown in fig. 6, the electronic device 600 according to the embodiment of the present application includes:
a processor 610;
a memory 620 for storing instructions executable by the processor 610;
the processor 610 is configured to execute instructions to implement the method for layering characters in a license plate in the embodiment of the present application.
In an exemplary embodiment, a storage medium comprising instructions, such as the memory 620 comprising instructions, executable by the processor 610 of the character layering device in a license plate to perform the method described above is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, for example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 7 shows a schematic structural diagram of another electronic device provided in the embodiment of the present application. The electronic device may include a processor 701 and a memory 702 storing computer program instructions.
Specifically, the processor 701 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 702 may include mass storage for data or instructions. By way of example, and not limitation, memory 702 may include a Hard Disk Drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 702 may include removable or non-removable (or fixed) media, where appropriate. The memory 702 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 702 is non-volatile solid-state memory. In a particular embodiment, the memory 702 includes Read Only Memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 701 reads and executes the computer program instructions stored in the memory 702 to implement the method for layering characters in a license plate in the above embodiment.
In one example, the electronic device may also include a communication interface 703 and a bus 710. As shown in fig. 7, the processor 701, the memory 702, and the communication interface 703 are connected by the bus 710 and communicate with one another.
The communication interface 703 is mainly used for implementing communication between modules, apparatuses, units and/or devices in this embodiment of the application.
Bus 710 includes hardware, software, or both, coupling the components of the electronic device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 710 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
In addition, in combination with the method for layering characters in a license plate in the foregoing embodiments, the embodiments of the present application may provide a computer-readable storage medium. The computer-readable storage medium stores computer program instructions; when executed by a processor, the computer program instructions implement the method for layering characters in a license plate of any of the above embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for layering characters in a license plate is characterized by comprising the following steps:
determining coordinates of each endpoint in two endpoint sets of alternative segmentation lines based on coordinates of a plurality of character reference points detected in an image to be recognized, wherein horizontal coordinate values of endpoints in the same endpoint set are the same, and horizontal coordinate values of endpoints in different endpoint sets are different;
determining a plurality of alternative segmentation lines according to the two endpoint sets, wherein the two endpoints of each alternative segmentation line belong to different endpoint sets;
determining a target segmentation line of the plurality of characters according to the distances between the reference points and the alternative segmentation lines;
and determining upper layer characters and lower layer characters among the plurality of characters based on the positional relationship between the target segmentation line and the reference points.
2. The method of claim 1, wherein the determining a target segmentation line of the plurality of characters according to the distances between the reference points and the alternative segmentation lines comprises:
for each alternative segmentation line, determining a minimum distance value among the distances from the reference points to the alternative segmentation line;
and determining the alternative segmentation line with the largest minimum distance value as the target segmentation line.
3. The method of claim 1, wherein the vertical coordinate value of each endpoint is not greater than the largest vertical coordinate value among the coordinates of the reference points and not less than the smallest vertical coordinate value among the coordinates of the reference points.
4. The method according to claim 3, wherein, after the endpoints in each endpoint set are sorted according to their vertical coordinate values, the distance between any two adjacent endpoints is equal to a preset interval value.
5. The method according to claim 1, wherein the horizontal coordinate value of the endpoints in one of the two endpoint sets is the smallest horizontal coordinate value among the coordinates of the reference points, and the horizontal coordinate value of the endpoints in the other endpoint set is the largest horizontal coordinate value among the coordinates of the reference points.
6. The method according to any one of claims 1 to 5, wherein, after the determining upper layer characters and lower layer characters among the plurality of characters based on the positional relationship between the target segmentation line and the reference points, the method further comprises:
sorting the upper layer characters according to the horizontal coordinate values of the reference points of the upper layer characters, and sorting the lower layer characters according to the horizontal coordinate values of the reference points of the lower layer characters;
and splicing the sorted lower layer characters after the sorted upper layer characters to obtain the character ordering of the image to be recognized.
7. The method according to claim 1, wherein, before the determining coordinates of each endpoint in two endpoint sets of alternative segmentation lines based on coordinates of a plurality of character reference points detected in an image to be recognized, the method further comprises:
determining, by using a trained character detection network, the probability value that each position in a feature map corresponding to the image to be recognized is a character center point;
performing a maximum pooling operation on a first matrix formed by the maximum probability values of all the positions to obtain a second matrix;
determining target positions whose probability values in the first matrix are the same as their probability values in the second matrix;
and determining the coordinates of the target positions as the coordinates of the character reference points contained in the image to be recognized.
8. The method according to claim 1, wherein, before the determining coordinates of each endpoint in two endpoint sets of alternative segmentation lines based on coordinates of a plurality of character reference points detected in an image to be recognized, the method further comprises:
determining, by using a trained character recognition network, the vertex coordinates of detection boxes containing characters in the image to be recognized;
and using a designated vertex coordinate among the determined vertex coordinates of each detection box, or the center point coordinate of the detection box calculated from the determined vertex coordinates, as the coordinates of the reference point of the contained character.
9. A device for layering characters in a license plate, the device comprising:
a set determining unit, configured to determine coordinates of each endpoint in two endpoint sets of alternative segmentation lines based on coordinates of a plurality of character reference points detected in an image to be recognized, wherein horizontal coordinate values of endpoints in the same endpoint set are the same, and horizontal coordinate values of endpoints in different endpoint sets are different;
an alternative segmentation line determining unit, configured to determine a plurality of alternative segmentation lines according to the two endpoint sets, wherein the two endpoints of each alternative segmentation line belong to different endpoint sets;
a processing unit, configured to determine a target segmentation line of the plurality of characters according to the distances between the reference points and the alternative segmentation lines;
and a character layering unit, configured to determine upper layer characters and lower layer characters among the plurality of characters based on the positional relationship between the target segmentation line and the reference points.
10. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method for layering characters in a license plate according to any one of claims 1 to 8.
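For readers who prefer code to claim language, the following Python sketch walks through the layering procedure of claims 1 to 5: it builds two vertically spaced endpoint sets at the smallest and largest horizontal coordinates of the reference points, enumerates alternative segmentation lines whose endpoints come from different sets, keeps the line whose minimum distance to any reference point is largest, and splits the characters by the side of that line on which their reference points fall. The number of endpoints per set and the point-to-line distance formula are assumptions made for the example; the claims leave the interval value and the distance measure unspecified, and degenerate inputs (e.g. all reference points in one column) are not handled.

```python
import math
from itertools import product

def layer_characters(ref_points, num_endpoints=10):
    """Split character reference points into upper and lower layers.

    ref_points: list of (x, y) character reference points (image coordinates,
                y grows downward).
    num_endpoints: endpoints per set (illustrative; the even spacing plays the
                   role of the preset interval value in claim 4).
    Returns (upper_indices, lower_indices).
    """
    xs = [p[0] for p in ref_points]
    ys = [p[1] for p in ref_points]
    x_left, x_right = min(xs), max(xs)          # claim 5: two fixed horizontal values
    y_lo, y_hi = min(ys), max(ys)               # claim 3: endpoints stay inside the vertical span
    step = (y_hi - y_lo) / (num_endpoints - 1)  # claim 4: evenly spaced endpoints
    left_set = [(x_left, y_lo + i * step) for i in range(num_endpoints)]
    right_set = [(x_right, y_lo + i * step) for i in range(num_endpoints)]

    def point_line_distance(p, a, b):
        # Distance from point p to the infinite line through endpoints a and b.
        (px, py), (ax, ay), (bx, by) = p, a, b
        num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
        return num / math.hypot(by - ay, bx - ax)

    # Claims 1-2: candidate lines join endpoints from different sets; keep the
    # line whose minimum distance to the reference points is largest.
    best = max(product(left_set, right_set),
               key=lambda ab: min(point_line_distance(p, *ab) for p in ref_points))

    (ax, ay), (bx, by) = best
    upper, lower = [], []
    for i, (px, py) in enumerate(ref_points):
        # A reference point above the target segmentation line (smaller y at the
        # same x) belongs to the upper layer.
        line_y = ay + (by - ay) * (px - ax) / (bx - ax) if bx != ax else ay
        (upper if py < line_y else lower).append(i)
    return upper, lower
```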
CN202010633195.9A 2020-07-02 2020-07-02 Character layering method and device in license plate and electronic equipment Pending CN111914845A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010633195.9A CN111914845A (en) 2020-07-02 2020-07-02 Character layering method and device in license plate and electronic equipment

Publications (1)

Publication Number Publication Date
CN111914845A true CN111914845A (en) 2020-11-10

Family

ID=73227277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010633195.9A Pending CN111914845A (en) 2020-07-02 2020-07-02 Character layering method and device in license plate and electronic equipment

Country Status (1)

Country Link
CN (1) CN111914845A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104094281A (en) * 2012-03-05 2014-10-08 欧姆龙株式会社 Image processing method for character recognition, and character recognition device and program using this method
CN102819738A (en) * 2012-07-24 2012-12-12 东南大学 License plate identification system based on OpenMP multithreading framework
CN105740909A (en) * 2016-02-02 2016-07-06 华中科技大学 Text recognition method under natural scene on the basis of spatial transformation
WO2018028306A1 (en) * 2016-08-11 2018-02-15 杭州海康威视数字技术股份有限公司 Method and device for recognizing license plate number
CN108416346A (en) * 2017-02-09 2018-08-17 浙江宇视科技有限公司 The localization method and device of characters on license plate
CN108009543A (en) * 2017-11-29 2018-05-08 深圳市华尊科技股份有限公司 A kind of licence plate recognition method and device
KR101979654B1 (en) * 2018-01-15 2019-05-17 주식회사 비엔인더스트리 License plate recognition apparatus and the method thereof
CN110543882A (en) * 2018-05-29 2019-12-06 北京深鉴智能科技有限公司 Character string recognition method and device
CN109241975A (en) * 2018-08-27 2019-01-18 电子科技大学 A kind of registration number character dividing method based on character center point location
CN110163199A (en) * 2018-09-30 2019-08-23 腾讯科技(深圳)有限公司 Licence plate recognition method, license plate recognition device, car license recognition equipment and medium
CN109447064A (en) * 2018-10-09 2019-03-08 温州大学 A kind of duplicate rows License Plate Segmentation method and system based on CNN
CN109993171A (en) * 2019-03-12 2019-07-09 电子科技大学 A kind of registration number character dividing method based on multi-template and more ratios

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634141A (en) * 2020-12-23 2021-04-09 浙江大华技术股份有限公司 License plate correction method, device, equipment and medium
CN112634141B (en) * 2020-12-23 2024-03-29 浙江大华技术股份有限公司 License plate correction method, device, equipment and medium
CN114241124A (en) * 2021-11-17 2022-03-25 埃洛克航空科技(北京)有限公司 Method, device and equipment for determining stitching edge in three-dimensional model
WO2024011888A1 (en) * 2022-07-13 2024-01-18 北京京东乾石科技有限公司 License plate recognition method and apparatus, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
US11455805B2 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN111914845A (en) Character layering method and device in license plate and electronic equipment
CN106599792B (en) Method for detecting hand driving violation behavior
CN108182383B (en) Vehicle window detection method and device
CN110913243B (en) Video auditing method, device and equipment
CN111797829A (en) License plate detection method and device, electronic equipment and storage medium
CN111369801B (en) Vehicle identification method, device, equipment and storage medium
CN109740609A (en) A kind of gauge detection method and device
CN109165654B (en) Training method of target positioning model and target positioning method and device
CN113221750A (en) Vehicle tracking method, device, equipment and storage medium
CN114038004A (en) Certificate information extraction method, device, equipment and storage medium
CN111191482B (en) Brake lamp identification method and device and electronic equipment
CN111401143A (en) Pedestrian tracking system and method
CN111178359A (en) License plate number recognition method, device and equipment and computer storage medium
CN112329886A (en) Double-license plate recognition method, model training method, device, equipment and storage medium
CN111126286A (en) Vehicle dynamic detection method and device, computer equipment and storage medium
CN112651417A (en) License plate recognition method, device, equipment and storage medium
CN113591543B (en) Traffic sign recognition method, device, electronic equipment and computer storage medium
CN110942073A (en) Container trailer number identification method and device and computer equipment
CN112634141B (en) License plate correction method, device, equipment and medium
CN111639642B (en) Image processing method, device and apparatus
CN114359862A (en) Signal lamp identification method and device, electronic equipment and storage medium
CN109977937B (en) Image processing method, device and equipment
CN112381034A (en) Lane line detection method, device, equipment and storage medium
CN110705479A (en) Model training method, target recognition method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination