CN111598104B - License plate character recognition method and system - Google Patents

License plate character recognition method and system

Info

Publication number
CN111598104B
Authority
CN
China
Prior art keywords
image
character
point
license plate
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010612343.9A
Other languages
Chinese (zh)
Other versions
CN111598104A (en)
Inventor
张鹏
吴猛猛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHENGDU PENGYE SOFTWARE CO LTD
Original Assignee
CHENGDU PENGYE SOFTWARE CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU PENGYE SOFTWARE CO LTD filed Critical CHENGDU PENGYE SOFTWARE CO LTD
Priority to CN202010612343.9A priority Critical patent/CN111598104B/en
Publication of CN111598104A publication Critical patent/CN111598104A/en
Application granted granted Critical
Publication of CN111598104B publication Critical patent/CN111598104B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/14 - Image acquisition
    • G06V30/148 - Segmentation of character regions
    • G06V30/153 - Segmentation of character regions using recognition of characters or words
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/28 - Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 - License plates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Character Input (AREA)
  • Traffic Control Systems (AREA)
  • Character Discrimination (AREA)

Abstract

The invention discloses a license plate character recognition method and system, comprising: acquiring a license plate image and generating an adaptive binarized image; filtering interference contours contained in the adaptive binarized image and locating its precise boundary; generating a character center distance projection based on the precisely positioned adaptive binarized image; generating segmented character information based on the character center distance projection and the connected domains in the adaptive binarized image; and generating license plate information based on the character information. By improving the image binarization method, the invention generates an accurate adaptive binarized image and solves the problem that conventional global binarization cannot separate the characters under uneven illumination or brightness. Preset segmentation blocks are generated by the proposed character center distance calculation, and combining the preset segmentation blocks with the connected domains overcomes the error-prone character segmentation of conventional license plate character recognition methods, thereby improving the accuracy of license plate character recognition.

Description

License plate character recognition method and system
Technical Field
The invention relates to the technical field of license plate recognition, in particular to a license plate character recognition method and a license plate character recognition system.
Background
With the rapid development of road construction and the continuous increase in the number of automobiles in many countries, traffic management tasks have become heavy, and automatic detection and identification of license plates by computer-based license plate recognition technology plays a very important role in modern traffic monitoring. License plate detection and recognition is one of the important research subjects of digital image processing and pattern recognition in the intelligent transportation field; it plays an important role in promoting the development of intelligent transportation systems and the traffic industry, and has a broad market prospect.
Existing license plate recognition techniques include license plate positioning, license plate correction, character segmentation and character recognition. Character segmentation is an important component of license plate detection and recognition. The connected domain method in conventional character segmentation takes a target pixel on a horizontal line of the image as a starting point and extracts, by region growing, every connected domain in the image that contains such a starting point. Since the letters and digits on a license plate are written in a single stroke, i.e., each forms only one connected component, each connected domain corresponds to one character, and the rest of the image is removed as noise. This method places high demands on noise removal, and adhesion between characters and the license plate border is very common, so that several characters may be extracted as one character and the segmentation fails. In addition, many Chinese characters consist of several connected domains after binarization, and strokes of digits and letters may break after binarization, so the conventional connected domain method loses part of the character information and is prone to segmentation errors.
In summary, conventional license plate recognition methods still suffer from loss of partial character information and error-prone character segmentation, which leads to inaccurate license plate recognition.
Disclosure of Invention
In view of the above, the invention provides a license plate character recognition method and system, which solve the problem of inaccurate license plate recognition in conventional license plate recognition methods.
In order to solve the above problems, the technical scheme of the invention adopts a license plate character recognition method, which comprises the following steps: S1: acquiring a license plate image and generating an adaptive binarized image; S2: filtering interference contours contained in the adaptive binarized image and locating the precise boundary of the adaptive binarized image; S3: generating a character center distance projection based on the precisely positioned adaptive binarized image; S4: generating segmented character information based on the character center distance projection and the connected domains in the adaptive binarized image; S5: generating license plate information based on the character information.
Optionally, S3 includes: acquiring the width W and the height H of the precisely positioned adaptive binarized image; traversing the center distance of each column of pixels, wherein the center distance of a column is the accumulation of the center distances of all pixel points contained in that column; performing a weighted average, according to a first preset proportion, over the columns with the largest center distances to generate an average maximum center distance, and taking a second preset proportion of the average maximum center distance as the separation center distance; traversing all columns whose center distance is smaller than the separation center distance to generate a plurality of preset segmentation blocks; and traversing the preset segmentation blocks, taking the preset segmentation block with the largest width and generating a geometric positioning point.
Optionally, S4 includes: S41: taking the geometric positioning point as the initial point, searching in a first direction for the first connected domain of the binarized image; S42: when the width of the connected domain is larger than a first threshold, traversing the preset segmentation blocks contained in the connected domain and taking the widest one as the character segmentation point, and if the connected domain contains no preset segmentation block, regarding the connected domain as a single character; when the width of the connected domain is smaller than a second threshold, searching along the first direction, within the interval from the abscissa of the starting point of the connected domain to that abscissa plus the first threshold, for the widest preset segmentation block as the character segmentation point, wherein if the found preset segmentation block lies within the connected domain the character is merely small, and otherwise the character is broken; S43: taking the character segmentation point as the new initial point and repeating the above steps until all characters on the first-direction side of the geometric positioning point are segmented.
Optionally, S4 further includes: taking the geometric positioning point as the initial point, searching in a second direction for the first connected domain of the binarized image and generating a city character segmentation point according to the segmentation method contained in S42; and taking the city character segmentation point as the initial point, searching in the second direction for the first connected domain of the binarized image and generating a provincial character segmentation point according to the segmentation method contained in S42.
Optionally, when the provincial character segmentation point is generated, if the interval width between the city character segmentation point and the provincial character segmentation point is smaller than a third threshold, the coordinate point obtained by extending the city character segmentation point by the third threshold length in the second direction is used as the updated provincial character segmentation point.
Optionally, S1 includes: acquiring a license plate image and graying it to generate a gray image of unified size; determining the size radius of a fuzzy block based on the size of the gray image, traversing the gray image with the fuzzy block to generate a Gaussian-weighted image, and comparing the gray value of each pixel of the Gaussian-weighted image with the gray value of the corresponding pixel of the gray image to generate an adaptive binarized image, wherein if the gray value of a pixel of the Gaussian-weighted image is larger than the gray value of the corresponding pixel of the gray image, that pixel is updated to 0, and if it is smaller, that pixel is updated to 255.
Optionally, S2 includes: performing contour searching on the adaptive binarized image, and generating an adaptive binarized image with interference contours filtered out by filtering contours whose height is smaller than 0.4 times the height of the binarized image and/or whose width is smaller than 5 pixels; and counting the gray value jump times of the adaptive binarized image with interference contours filtered out, thereby generating a precisely positioned adaptive binarized image.
Optionally, S5 includes: acquiring sample data; setting up a neural network model and generating a prediction model by training on the sample data; and inputting the segmented character information into the neural network to generate the license plate information.
Correspondingly, the invention provides a license plate character recognition system, comprising: an image acquisition unit for acquiring license plate images; a data processing unit for receiving the license plate image and generating an adaptive binarized image, filtering interference contours contained in the adaptive binarized image, locating the precise boundary of the adaptive binarized image, generating a character center distance projection based on the precisely positioned adaptive binarized image, and generating segmented character information based on the character center distance projection and the connected domains in the adaptive binarized image; and a neural network unit capable of generating license plate information based on the segmented character information.
The primary improvement of the invention is to provide a license plate character recognition method which, by improving the image binarization method, can accurately separate the foreground and background of a license plate image under uneven illumination, thereby generating an accurate adaptive binarized image and avoiding the character adhesion and missing strokes of conventional binarized images. By introducing the character center distance projection, the contour information of the characters is fully utilized and the anti-interference ability is enhanced. Finally, preset segmentation blocks are generated from the character center distances, and their effective use overcomes the error-prone character segmentation of conventional license plate character recognition methods, improving the accuracy of license plate character recognition.
Drawings
FIG. 1 is a simplified flow chart of a license plate character recognition method of the present invention;
FIG. 2 is a simplified schematic diagram of an adaptive binarized image of the present invention;
FIG. 3 is a simplified schematic illustration of a center distance projection image of the present invention;
FIG. 4 is a first simplified schematic illustration of a character segmentation image of the present invention;
FIG. 5 is a second simplified schematic illustration of a character segmentation image of the present invention; and
fig. 6 is a simplified block diagram of a license plate character recognition system of the present invention.
Detailed Description
In order to make the technical solution of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, a license plate character recognition method includes:
S1: acquiring a license plate image and generating an adaptive binarized image. Specifically, a license plate image is acquired and grayed to generate a gray image of unified size; the size radius of a fuzzy block is determined based on the size of the gray image, the gray image is traversed with the fuzzy block to generate a Gaussian-weighted image, and the gray value of each pixel of the Gaussian-weighted image is compared with the gray value of the corresponding pixel of the gray image to generate an adaptive binarized image, wherein if the gray value of a pixel of the Gaussian-weighted image is larger than the gray value of the corresponding pixel of the gray image, that pixel is updated to 0, and if it is smaller, that pixel is updated to 255.
Further, GrayValue = 0.3×R + 0.59×G + 0.11×B; the unified size may be 272 pixels by 72 pixels; the size radius of the fuzzy block is 10% of the height of the gray image. The specific steps for generating the Gaussian-weighted image are as follows: place the center point of the fuzzy block on the first target point of the gray image, perform standard Gaussian weighting on all pixel points of the gray image covered by the fuzzy block, and update the gray values of these pixel points; repeat this step until the gray values of all pixel points in the gray image have been updated.
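For illustration only, a minimal Python/OpenCV sketch of this binarization step is given below. The function name is the editor's, and using cv2.GaussianBlur to realize the "standard Gaussian weighting" over the fuzzy block is an assumption; the 272×72 unified size, the gray formula and the 10% radius follow the embodiment.

```python
import cv2
import numpy as np

def adaptive_binarize(bgr_plate):
    """Sketch of S1: gray conversion, Gaussian-weighted local threshold, 0/255 output."""
    # Unified size (272 x 72) and the gray formula from the embodiment.
    img = cv2.resize(bgr_plate, (272, 72)).astype(np.float32)
    b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    gray = (0.3 * r + 0.59 * g + 0.11 * b).astype(np.uint8)

    # Fuzzy-block size radius = 10% of the image height; the kernel size must be odd.
    radius = max(1, int(round(0.1 * gray.shape[0])))
    weighted = cv2.GaussianBlur(gray, (2 * radius + 1, 2 * radius + 1), 0)

    # Weighted value > gray value -> 0; otherwise -> 255 (ties treated as foreground here).
    return np.where(weighted > gray, 0, 255).astype(np.uint8)
```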
Compared with the conventional global-threshold binarization method, which selects a single fixed threshold to separate the picture into foreground and background, the adaptive binarization method can select a threshold for each point according to all the elements within the fuzzy block, thereby achieving high-accuracy separation of the license plate foreground and background and effectively avoiding the character adhesion, missing strokes and similar problems of conventional binarized images. As shown in fig. 2, it can be clearly seen that, compared with the conventional binarization method, the adaptive binarized image separates the characters accurately even under uneven illumination and avoids adhesion and loss of character parts.
S2: filtering interference contours contained in the adaptive binarized image and locating the precise boundary of the adaptive binarized image. Specifically, contour searching is performed on the adaptive binarized image, and an adaptive binarized image with interference contours filtered out is generated by filtering contours whose height is smaller than 0.4 times the height of the binarized image and/or whose width is smaller than 5 pixels, i.e., the gray values of the interference contours are updated to 0. The gray value jump times of the filtered image are then counted to generate a precisely positioned adaptive binarized image: the number of pixel-value jumps is counted along each horizontal line of the filtered image; scanning from top to bottom, the first line whose jump count is less than 10 is set as the upper boundary of the license plate characters, and scanning from bottom to top, the first line whose jump count is less than 10 is set as the lower boundary; the image between the upper and lower boundaries is the precisely positioned license plate.
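The contour filtering and jump-count boundary location can be sketched as follows; the function name is the editor's, OpenCV 4 return conventions are assumed, and the boundary rule is implemented exactly as stated above.

```python
import cv2
import numpy as np

def filter_and_crop(binary):
    """Sketch of S2: erase interference contours, then crop to the character band."""
    h, _ = binary.shape
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        _, _, cw, ch = cv2.boundingRect(c)
        if ch < 0.4 * h or cw < 5:                      # interference contour
            cv2.drawContours(binary, [c], -1, 0, thickness=cv2.FILLED)

    # Number of 0 <-> 255 jumps along each horizontal line of the filtered image.
    jumps = np.count_nonzero(np.diff(binary.astype(np.int16), axis=1) != 0, axis=1)

    # Boundary rule as stated in the text: first line (top-down / bottom-up)
    # whose jump count is less than 10.
    top = next((y for y in range(h) if jumps[y] < 10), 0)
    bottom = next((y for y in range(h - 1, -1, -1) if jumps[y] < 10), h - 1)
    return binary[top:bottom + 1, :]
```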
S3: and generating character center distance projection based on the finely positioned self-adaptive binarized image.
S4: and generating segmented character information based on the character center distance projection and a connected domain in the self-adaptive binary image.
S5: and generating license plate information based on the character information.
According to the invention, by improving the image binarization method, the image foreground and background can be accurately separated when the license plate image is unevenly illuminated, thereby generating an accurate adaptive binarized image and avoiding the character adhesion and missing strokes of conventional binarized images. By introducing the character center distance projection, the contour information of the characters is fully utilized and the anti-interference ability is enhanced. Finally, preset segmentation blocks are generated from the character center distances, and their effective use overcomes the error-prone character segmentation of conventional license plate character recognition methods, improving the accuracy of license plate character recognition.
Further, step S3 includes: acquiring the width W and the height H of the precisely positioned adaptive binarized image; traversing the center distance of each column of pixels, wherein the center distance of a column is the accumulation of the center distances of all pixel points contained in that column; performing a weighted average, according to a first preset proportion, over the columns with the largest center distances to generate an average maximum center distance, and taking a second preset proportion of the average maximum center distance as the separation center distance; traversing all columns whose center distance is smaller than the separation center distance to generate a plurality of preset segmentation blocks; and traversing the preset segmentation blocks, taking the preset segmentation block with the largest width and generating a geometric positioning point. Specifically, the formula for generating the center distance can be
M = Σ_{i=0}^{H-1} P_i, where P_i = |i - H/2| when val_i > 0 and P_i = 0 when val_i = 0,
where M is the center distance of the column and P_i is the contribution of pixel point i to the center distance; it can be seen that a point contributes to the center distance only when its pixel value val is greater than 0. The first preset proportion may be 41% and can be adjusted flexibly when the unified image size changes; the second preset proportion may be 30% and can be adjusted flexibly according to the accuracy of the image capture device and the shooting environment, i.e., the quality of the initial license plate image. In order to avoid interference from narrow preset segmentation blocks caused by flaws in the initial license plate image, preset segmentation blocks whose width is smaller than 5 pixels are removed when the geometric positioning point is calculated.
Because error factors such as a poor shooting device or the influence of ambient light may exist in actual operation, in order to prevent errors in the determined geometric positioning point, as shown in fig. 5, the generation of the geometric positioning point can be optimized according to the actual situation: the widest preset segmentation block lying between 0.2 times and 0.5 times the image width from the initial point is traversed and taken as the geometric positioning point. Here the initial point is at the end of the image close to the province code; in fig. 5, the horizontal dividing line is the separation center distance, and the vertical dividing lines mark the positions 0.2 and 0.5 times the image width from the initial point.
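A compact NumPy sketch of this projection and positioning-point computation follows. The helper names, the choice of the block midpoint as the positioning point, and the use of a plain mean in place of the unspecified weighting are the editor's assumptions; the 41%, 30% and 5-pixel values follow the embodiment.

```python
import numpy as np

def center_distance_projection(binary):
    """Sketch of S3: per-column center distance M = sum of |i - H/2| over foreground pixels."""
    h = binary.shape[0]
    dist = np.abs(np.arange(h) - h / 2.0).reshape(-1, 1)
    return np.where(binary > 0, dist, 0.0).sum(axis=0)       # one value M per column

def geometric_point(m, first_ratio=0.41, second_ratio=0.30, min_block_w=5):
    """Separation distance, preset segmentation blocks and the geometric positioning point."""
    w = m.size
    top_k = max(1, int(first_ratio * w))
    avg_max = np.sort(m)[::-1][:top_k].mean()                # plain mean of the top 41%
    sep = second_ratio * avg_max

    # Preset segmentation blocks: runs of columns with M < sep, at least 5 px wide.
    blocks, start = [], None
    for x in range(w + 1):
        low = x < w and m[x] < sep
        if low and start is None:
            start = x
        elif not low and start is not None:
            if x - start >= min_block_w:
                blocks.append((start, x - 1))
            start = None
    if not blocks:
        return None, blocks

    # Widest block between 0.2*W and 0.5*W from the initial point (fig. 5 variant),
    # falling back to the widest block overall.
    window = [b for b in blocks if b[0] >= 0.2 * w and b[1] <= 0.5 * w] or blocks
    x0, x1 = max(window, key=lambda b: b[1] - b[0])
    return (x0 + x1) // 2, blocks
```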
To facilitate understanding of how the character information is segmented, S4 includes: S41: taking the geometric positioning point as the initial point, searching in a first direction for the first connected domain of the binarized image; S42: when the width of the connected domain is larger than a first threshold, traversing the preset segmentation blocks contained in the connected domain and taking the widest one as the character segmentation point, and if the connected domain contains no preset segmentation block, regarding the connected domain as a single character; when the width of the connected domain is smaller than a second threshold, searching along the first direction, within the interval from the abscissa of the starting point of the connected domain to that abscissa plus the first threshold, for the widest preset segmentation block as the character segmentation point, wherein if the found preset segmentation block lies within the connected domain the character is merely small, and otherwise the character is broken; S43: taking the character segmentation point as the new initial point and repeating the above steps until all characters on the first-direction side of the geometric positioning point are segmented. Then, taking the geometric positioning point as the initial point, the first connected domain of the binarized image is searched in a second direction and a city character segmentation point is generated according to the segmentation method contained in S42; taking the city character segmentation point as the initial point, the first connected domain of the binarized image is searched in the second direction and a provincial character segmentation point is generated according to the segmentation method contained in S42. Specifically, when the provincial character segmentation point is generated, if the interval width between the city character segmentation point and the provincial character segmentation point is smaller than a third threshold, the coordinate point obtained by extending the city character segmentation point by the third threshold length in the second direction is used as the updated provincial character segmentation point. Specifically, the first direction is defined as the horizontal direction from the province character toward the number characters; the second direction is defined as the horizontal direction from the number characters toward the province character; the first threshold is defined as a width of 60 pixels and characterizes the maximum character width; the second threshold is defined as a width of 20 pixels and characterizes the minimum character width; the third threshold is defined as the average width of all segmented characters and characterizes the minimum width of the province code. In order to avoid interference from flaws in the initial image during character segmentation, a second filtering of interfering preset segmentation blocks is performed, i.e., preset segmentation blocks whose width is smaller than 5 pixels are removed.
By improving the character segmentation method, the invention can, based on the geometric positioning points and the preset segmentation blocks, effectively distinguish and handle problems that conventional license plate character recognition methods cannot solve, such as character adhesion, small character width and character defects; furthermore, by improving the generation of the provincial character segmentation point and setting a minimum width for the province code, the problem that non-connected characters such as '川' (Chuan) are segmented by mistake is effectively avoided, as shown in fig. 4 and 5.
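The S42 decision rule can be illustrated with a short, hypothetical helper. The thresholds of 60 and 20 pixels follow the embodiment; the exact cut coordinates returned and the handling of the normal-width case are the editor's reading of the text rather than the patent's wording.

```python
def s42_split_point(x_start, width, preset_blocks,
                    first_threshold=60, second_threshold=20):
    """Hypothetical helper summarising the S42 rule for one connected domain.

    preset_blocks: list of (x0, x1) column intervals with low center distance.
    Returns (cut_x, note); cut_x is the chosen character segmentation point.
    """
    def widest_in(lo, hi):
        inside = [b for b in preset_blocks if lo <= b[0] and b[1] <= hi]
        return max(inside, key=lambda b: b[1] - b[0]) if inside else None

    if width > first_threshold:                       # wider than the maximum character width
        block = widest_in(x_start, x_start + width)
        if block is None:
            return x_start + width, "single character"
        return (block[0] + block[1]) // 2, "adhered characters split"

    if width < second_threshold:                      # narrower than the minimum character width
        block = widest_in(x_start, x_start + first_threshold)
        if block is None:
            return x_start + width, "no preset block found"
        if block[1] <= x_start + width:               # block lies inside the domain
            return (block[0] + block[1]) // 2, "small character"
        return (block[0] + block[1]) // 2, "broken character merged"

    return x_start + width, "normal character"
```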
Further, step S5 includes: acquiring sample data. Specifically, 10000 pictures are randomly generated for each possible character according to the license plate character table, where random generation means that initial characters (white characters on a black background) are written in multiple fonts and character sizes and then augmented through a series of operations such as perspective transformation, affine transformation, image blurring and morphological operations, thereby obtaining the initial sample data. A multi-class neural network model is then set up and a prediction model is generated by training on the sample data; the segmented character information is input into the neural network to generate the license plate information.
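As an illustration of the multi-class neural network in S5, a minimal PyTorch sketch is given below; the framework, architecture, 32x32 input size and 65-class count are the editor's assumptions, not details taken from the patent.

```python
import torch
import torch.nn as nn

# Illustrative class count: province abbreviations plus letters and digits.
NUM_CLASSES = 65

class PlateCharNet(nn.Module):
    """Small multi-class CNN for single segmented character images (assumed 1x32x32 input)."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Training would minimise cross-entropy over the randomly generated, augmented samples.
model = PlateCharNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
logits = model(torch.randn(4, 1, 32, 32))   # e.g. a batch of 4 normalised character crops
```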
Correspondingly, as shown in fig. 6, a license plate character recognition system includes: an image acquisition unit for acquiring license plate images; a data processing unit for receiving the license plate image and generating an adaptive binarized image, filtering interference contours contained in the adaptive binarized image, locating the precise boundary of the adaptive binarized image, generating a character center distance projection based on the precisely positioned adaptive binarized image, and generating segmented character information based on the character center distance projection and the connected domains in the adaptive binarized image; and a neural network unit capable of generating license plate information based on the segmented character information.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that the above-mentioned preferred embodiment should not be construed as limiting the invention, and the scope of the invention should be defined by the appended claims. It will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the spirit and scope of the invention, and such modifications and adaptations are intended to be comprehended within the scope of the invention.

Claims (7)

1. A license plate character recognition method, comprising:
S1: acquiring a license plate image and generating an adaptive binarized image;
S2: filtering interference contours contained in the adaptive binarized image and locating the precise boundary of the adaptive binarized image;
S3: generating a character center distance projection based on the precisely positioned adaptive binarized image, comprising: acquiring the width W and the height H of the precisely positioned adaptive binarized image; traversing the center distance of each column of pixels, wherein the center distance of a column is obtained by accumulating the center distances of all pixel points contained in that column, and the center distance of a column is generated by the formula
M = Σ_{i=0}^{H-1} P_i, where P_i = |i - H/2| when val_i > 0 and P_i = 0 when val_i = 0,
wherein M is the center distance of the column, H is the image height, i is a pixel point, P_i is the contribution of pixel point i to the center distance, and val_i is the pixel value of pixel point i; performing a weighted average, according to a first preset proportion, over the columns with the largest center distances to generate an average maximum center distance, and taking a second preset proportion of the average maximum center distance as the separation center distance; traversing all columns whose center distance is smaller than the separation center distance to generate a plurality of preset segmentation blocks; and traversing the preset segmentation blocks, taking the preset segmentation block with the largest width and generating a geometric positioning point;
S4: generating segmented character information based on the character center distance projection and the connected domains in the adaptive binarized image, comprising: S41: taking the geometric positioning point as the initial point, searching in a first direction for the first connected domain of the binarized image; S42: when the width of the connected domain is larger than a first threshold, traversing the preset segmentation blocks contained in the connected domain and taking the widest one as the character segmentation point, and if the connected domain contains no preset segmentation block, regarding the connected domain as a single character; when the width of the connected domain is smaller than a second threshold, searching along the first direction, within the interval from the abscissa of the starting point of the connected domain to that abscissa plus the first threshold, for the widest preset segmentation block as the character segmentation point, wherein if the found preset segmentation block lies within the connected domain the character is merely small, and otherwise the character is broken; S43: taking the character segmentation point as the new initial point and repeating the above steps until all characters on the first-direction side of the geometric positioning point are segmented;
S5: generating license plate information based on the character information.
2. The character recognition method according to claim 1, wherein the S4 further comprises:
taking the geometric positioning point as the initial point, searching in a second direction for the first connected domain of the binarized image, and generating a city character segmentation point according to the segmentation method contained in S42;
and taking the city character segmentation point as the initial point, searching in the second direction for the first connected domain of the binarized image, and generating a provincial character segmentation point according to the segmentation method contained in S42.
3. The character recognition method according to claim 2, wherein, when the provincial character segmentation point is generated, if the interval width between the city character segmentation point and the provincial character segmentation point is smaller than a third threshold, the coordinate point obtained by extending the city character segmentation point by the third threshold length in the second direction is taken as the updated provincial character segmentation point.
4. The character recognition method according to claim 1, wherein the S1 includes:
acquiring a license plate image and graying the license plate image to generate a gray image with unified size;
determining a fuzzy block size radius based on the size of the gray image, traversing the gray image based on the fuzzy block and generating a Gaussian weighted image, comparing the gray value of the pixel point of the Gaussian weighted image with the gray value of the pixel point corresponding to the gray image, generating an adaptive binarization image, wherein,
updating the gray value of the pixel point of the Gaussian weighted image to 0 when the gray value of the pixel point of the Gaussian weighted image is larger than the gray value of the pixel point corresponding to the gray image,
and updating the gray value of the pixel point of the Gaussian weighted image to 255 when the gray value of the pixel point of the Gaussian weighted image is smaller than the gray value of the pixel point corresponding to the gray image.
5. The character recognition method according to claim 1, wherein the S2 includes:
performing contour searching based on the self-adaptive binary image, and generating the self-adaptive binary image for filtering interference contours by filtering contours with the height being smaller than 0.4 times of the height of the binary image and/or the width being smaller than 5 pixels;
and counting the gray value jump times of the self-adaptive binary image for filtering the interference profile and generating a precisely positioned self-adaptive binary image.
6. The character recognition method according to claim 1, wherein the S5 includes:
acquiring sample data;
setting a neural network model and generating a prediction model by training the sample data;
and inputting the segmented character information into the neural network to generate the license plate information.
7. A license plate character recognition system, comprising:
the image acquisition unit is used for acquiring license plate images;
the data processing unit is used for receiving the license plate image and generating an adaptive binarized image, filtering interference contours contained in the adaptive binarized image and locating the precise boundary of the adaptive binarized image, acquiring the width W and the height H of the precisely positioned adaptive binarized image, and traversing the center distance of each column of pixels, wherein the center distance of a column is obtained by accumulating the center distances of all pixel points contained in that column, and the center distance of a column is generated by the formula
M = Σ_{i=0}^{H-1} P_i, where P_i = |i - H/2| when val_i > 0 and P_i = 0 when val_i = 0,
wherein M is the center distance of the column, H is the image height, i is a pixel point, P_i is the contribution of pixel point i to the center distance, and val_i is the pixel value of pixel point i; performing a weighted average, according to a first preset proportion, over the columns with the largest center distances to generate an average maximum center distance, and taking a second preset proportion of the average maximum center distance as the separation center distance; traversing all columns whose center distance is smaller than the separation center distance to generate a plurality of preset segmentation blocks; traversing the preset segmentation blocks, taking the preset segmentation block with the largest width and generating a geometric positioning point; taking the geometric positioning point as the initial point, searching in a first direction for the first connected domain of the binarized image; when the width of the connected domain is larger than a first threshold, traversing the preset segmentation blocks contained in the connected domain and taking the widest one as the character segmentation point, and if the connected domain contains no preset segmentation block, regarding the connected domain as a single character; when the width of the connected domain is smaller than a second threshold, searching along the first direction, within the interval from the abscissa of the starting point of the connected domain to that abscissa plus the first threshold, for the widest preset segmentation block as the character segmentation point, wherein if the found preset segmentation block lies within the connected domain the character is merely small, and otherwise the character is broken; and taking the character segmentation point as the new initial point and repeating the above steps until all characters on the first-direction side of the geometric positioning point are segmented;
and the neural network unit can generate license plate information based on the segmented character information.
CN202010612343.9A 2020-06-30 2020-06-30 License plate character recognition method and system Active CN111598104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010612343.9A CN111598104B (en) 2020-06-30 2020-06-30 License plate character recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010612343.9A CN111598104B (en) 2020-06-30 2020-06-30 License plate character recognition method and system

Publications (2)

Publication Number Publication Date
CN111598104A CN111598104A (en) 2020-08-28
CN111598104B true CN111598104B (en) 2023-05-12

Family

ID=72192562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010612343.9A Active CN111598104B (en) 2020-06-30 2020-06-30 License plate character recognition method and system

Country Status (1)

Country Link
CN (1) CN111598104B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508024A (en) * 2020-11-11 2021-03-16 广西电网有限责任公司南宁供电局 Intelligent identification method for embossed seal font of electrical nameplate of transformer
CN113095327B (en) * 2021-03-16 2022-10-14 深圳市雄帝科技股份有限公司 Method and system for positioning optical character recognition area and storage medium thereof
CN113378847B (en) * 2021-06-28 2022-10-25 华南理工大学 Character segmentation method, system, computer device and storage medium
CN115258865A (en) * 2022-08-08 2022-11-01 成都鹏业软件股份有限公司 Identification method and device for elevator door

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01234985A (en) * 1988-03-16 1989-09-20 Toshiba Corp Character segmenting device for character reader
CN102184399A (en) * 2011-03-31 2011-09-14 上海名图信息技术有限公司 Character segmenting method based on horizontal projection and connected domain analysis
CN102496019A (en) * 2011-12-08 2012-06-13 银江股份有限公司 License plate character segmenting method
CN103324930A (en) * 2013-06-28 2013-09-25 浙江大学苏州工业技术研究院 License plate character segmentation method based on grey level histogram binaryzation
CN109034157A (en) * 2017-06-08 2018-12-18 北京君正集成电路股份有限公司 Licence plate recognition method and device
CN110059695A (en) * 2019-04-23 2019-07-26 厦门商集网络科技有限责任公司 A kind of character segmentation method and terminal based on upright projection

Also Published As

Publication number Publication date
CN111598104A (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CN111598104B (en) License plate character recognition method and system
CN106960208B (en) Method and system for automatically segmenting and identifying instrument liquid crystal number
CN102375982B (en) Multi-character characteristic fused license plate positioning method
CN107273896A (en) A kind of car plate detection recognition methods based on image recognition
CN110210477B (en) Digital instrument reading identification method
CN103310211B (en) A kind ofly fill in mark recognition method based on image procossing
CN107423735B (en) License plate positioning method utilizing horizontal gradient and saturation
CN101122953A (en) Picture words segmentation method
CN116071763B (en) Teaching book intelligent correction system based on character recognition
CN109543753B (en) License plate recognition method based on self-adaptive fuzzy repair mechanism
CN108509950B (en) Railway contact net support number plate detection and identification method based on probability feature weighted fusion
CN109815762B (en) Method and storage medium for remotely identifying two-dimensional code
CN110598566A (en) Image processing method, device, terminal and computer readable storage medium
CN112560538B (en) Method for quickly positioning damaged QR (quick response) code according to image redundant information
CN109190625A (en) A kind of container number identification method of wide-angle perspective distortion
CN1702684A (en) Strong noise image characteristic points automatic extraction method
CN114387591A (en) License plate recognition method, system, equipment and storage medium
CN105447489A (en) Character and background adhesion noise elimination method for image OCR system
CN116630813A (en) Highway road surface construction quality intelligent detection system
CN111476804A (en) Method, device and equipment for efficiently segmenting carrier roller image and storage medium
CN114359538A (en) Water meter reading positioning and identifying method
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN113837119A (en) Method and equipment for recognizing confusable characters based on gray level images
CN110490885B (en) Improved adaptive threshold value binarization method and VIN code character segmentation method
CN114627463A (en) Non-contact power distribution data identification method based on machine identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant