CN110533019B - License plate positioning method and device and storage medium - Google Patents

License plate positioning method and device and storage medium

Info

Publication number: CN110533019B
Application number: CN201810502241.4A
Authority: CN (China)
Prior art keywords: point, points, effective, global feature, license plate
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN110533019A
Inventor: 林翠翠 (Lin Cuicui)
Current and original assignee: Hangzhou Hikvision Digital Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Events: application filed by Hangzhou Hikvision Digital Technology Co., Ltd.; priority to CN201810502241.4A; publication of CN110533019A; application granted; publication of CN110533019B; anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/62: Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63: Scene text, e.g. street names
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10: Character recognition
    • G06V30/14: Image acquisition
    • G06V30/148: Segmentation of character regions
    • G06V30/158: Segmentation of character regions using character size, text spacings or pitch estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a license plate positioning method and device and a storage medium, and belongs to the technical field of image processing. The method comprises the following steps: acquiring a shot image that includes a license plate to be positioned, and performing grayscale processing on the shot image to obtain a grayscale image; determining L effective segments in the grayscale image based on the gray values of K rows of pixel points included in the grayscale image, where each of the L effective segments indicates a position where the license plate may exist in the shot image, K and L are positive integers, and L is smaller than or equal to K; and positioning the license plate from the shot image based on the L effective segments in the grayscale image. Therefore, when the characters of the license plate are irregularly distributed, the inaccurate positioning that results from locating the license plate by connected-domain detection is avoided, and positioning accuracy is improved.

Description

License plate positioning method and device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a license plate positioning method and apparatus, and a storage medium.
Background
In daily life, a license plate serves as the identity card of a vehicle and is widely used in scenarios such as parking lots and electronic toll stations. In practical application scenarios, the license plate can be photographed and automatically recognized by a device such as a computer to obtain the information on the license plate. The recognition process mainly includes license plate positioning, character segmentation and character recognition; license plate positioning is therefore one of the key links in license plate recognition.
In the related art, a connected-domain detection method is usually used to locate the license plate. The implementation process is mainly as follows: binarization processing is performed on the captured image to obtain a binary image; based on the characteristic that the characters and the background in the binary image are white and black respectively, a series of operations such as morphological dilation and erosion are performed on the binary image to determine the connected domain corresponding to each character. Then, the distances between the connected domains of every two adjacent characters are detected in sequence, following the order of the characters in the captured image. If the distance between two adjacent connected domains is smaller than a set distance, the detected connected domains are retained and detection continues; detection stops once the distance between two adjacent connected domains is larger than the set distance, and the region formed by all retained connected domains is determined as the region where the license plate is located, thereby achieving license plate positioning.
In the process of implementing the present application, the inventor found that the prior art has at least the following problem: the characters of license plates in some countries are distributed irregularly. For example, the first character and the second character may be separated by a large distance; in that case, the distance between the two connected domains corresponding to the first and second characters is detected to be larger than the set distance, and detection stops. However, the connected domain of the second character and those of the following characters also belong to the region where the license plate is located, so locating the license plate by connected-domain detection easily reduces positioning accuracy.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present application provide a license plate positioning method, a license plate positioning device, and a storage medium. The technical scheme is as follows:
in one aspect, a license plate positioning method is provided, and the method includes:
acquiring a shot image and carrying out gray processing on the shot image to obtain a gray image, wherein the shot image comprises a license plate to be positioned;
determining L effective sections in the gray image based on gray values of K rows of pixel points included in the gray image, wherein the L effective sections are all used for indicating possible positions of the license plate in the shot image, the K and the L are positive integers, and the L is smaller than or equal to the K;
and positioning the license plate from the shot image based on the L effective sections in the gray-scale image.
Optionally, the determining L valid segments in the grayscale image based on the grayscale values of K rows of pixel points included in the grayscale image includes:
marking local feature points in the gray image based on the gray value of each pixel point in the K rows of pixel points, wherein the local feature points comprise peak points or valley points;
filtering the marked local feature points to obtain global feature points;
and determining the L effective sections in the gray level image based on the global feature points obtained after filtering.
Optionally, the marking the local feature point in the grayscale image based on the grayscale value of each pixel point in the K rows of pixel points includes:
for a target pixel point, determining the gray values of the preceding and following pixel points that belong to the same row as the target pixel point and are adjacent to it, where the target pixel point is any pixel point, other than the first and the last pixel point, in any one of the K rows;
if the gray value of the target pixel point is smaller than the gray values of both the preceding and the following pixel point, marking the target pixel point as a valley point; and if the gray value of the target pixel point is larger than the gray values of both, marking the target pixel point as a peak point.
Optionally, the filtering the marked local feature points includes:
selecting one local feature point from the marked local feature points, and executing the following processing on the selected local feature point until all the local feature points in the marked local feature points are processed:
determining the gray value of the next local characteristic point which belongs to the same row with the selected local characteristic point and is adjacent to the selected local characteristic point;
determining the difference between the gray value of the selected local feature point and the gray value of the next local feature point;
and when the determined gray value difference is smaller than a preset gray value difference, filtering out the selected local characteristic point and the next local characteristic point.
Optionally, the determining the L valid segments in the grayscale image based on the global feature points obtained after filtering includes:
obtaining the coordinates of each global feature point obtained after filtering;
determining effective points in the global feature points obtained after filtering based on the obtained coordinates, wherein the effective points refer to points belonging to an effective segment;
and determining the line segment formed by connecting the starting effective point and the ending effective point among the effective points belonging to the same row as an effective segment, so as to obtain the L effective segments.
Optionally, the determining, based on the obtained coordinates, a valid point in the global feature points obtained after filtering includes:
selecting a global feature point from the global feature points obtained after filtering, and executing the following processing on the selected global feature point until all the global feature points in the global feature points obtained after filtering are processed:
determining a designated distance based on the coordinates of the selected global feature point and the coordinates of two adjacent global feature points which belong to the same row with the selected global feature point and are adjacent to the selected global feature point, wherein the designated distance is the sum of the distances between the selected global feature point and the two adjacent global feature points;
and when the specified distance is smaller than a preset distance, determining the selected global feature point as an effective point.
Optionally, after determining the specified distance based on the coordinates of the selected global feature point and the coordinates of two preceding and following global feature points that belong to the same row as the selected global feature point and are adjacent to the selected global feature point, the method further includes:
when the designated distance is larger than the preset distance and smaller than N times the preset distance, respectively determining the numbers S and T of designated pixel points between the selected global feature point and the preceding and following global feature points, where a designated pixel point is a pixel point whose gray value lies in the same preset gray range as the gray value of the selected global feature point;
when the S is larger than a preset numerical value, the T is not larger than the preset numerical value, and the difference value between the specified distance and the coordinate length corresponding to the S specified pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
when the T is larger than a preset value, the S is not larger than the preset value, and the difference value between the designated distance and the coordinate lengths corresponding to the T designated pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
and when the S and the T are both larger than the preset numerical value, determining the sum of the coordinate lengths corresponding to the S designated pixel points and the coordinate lengths corresponding to the T designated pixel points, and if the difference value between the designated distance and the sum of the coordinate lengths is smaller than the preset distance, determining the selected global feature point as an effective point.
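Under the stated description, the three cases above collapse into one test: subtract the coordinate length of whichever runs of designated pixel points exceed the preset count, then re-compare the remainder against the preset distance. A minimal Python sketch of that reduction follows; the function and parameter names (count_thresh, px_len, which converts a pixel count to a coordinate length) are assumptions of this sketch, not part of the claimed method:

```python
def is_valid_with_gaps(designated: float, preset: float, n: float,
                       s: int, t: int, count_thresh: int,
                       px_len: float = 1.0) -> bool:
    # Applies only when the designated distance lies between the preset
    # distance and n times the preset distance.
    if not (preset < designated < n * preset):
        return False
    # Coordinate length contributed by runs of designated pixel points
    # (pixels whose gray value lies in the same preset gray range as the
    # selected global feature point) before (s) and after (t) the point.
    gap = 0.0
    if s > count_thresh:
        gap += s * px_len
    if t > count_thresh:
        gap += t * px_len
    # At least one run must exceed the count threshold, and removing the
    # runs must bring the remaining distance under the preset distance.
    return gap > 0.0 and (designated - gap) < preset
```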
Optionally, the locating the license plate from the captured image based on the L valid segments in the grayscale image includes:
scanning the L effective sections one by one according to the sequence from top to bottom;
if the overlap length of every two adjacent effective sections is greater than a preset length, retaining the scanned effective sections and continuing the scanning operation, and stopping the scanning operation once the overlap length of two adjacent effective sections is smaller than the preset length;
and determining a left boundary, a right boundary, an upper boundary and a lower boundary of the license plate based on all the reserved effective sections, and positioning the license plate from the shot image according to the determined left boundary, right boundary, upper boundary and lower boundary.
Optionally, the determining a left boundary, a right boundary, an upper boundary and a lower boundary of the license plate based on all the retained valid segments includes:
adding the coordinates of the initial effective points of all the reserved effective sections, then averaging to obtain an average initial coordinate, adding the coordinates of the terminal effective points of all the reserved effective sections, then averaging to obtain an average terminal coordinate, determining the vertical line of the average initial coordinate as the left boundary of the license plate, and determining the vertical line of the average terminal coordinate as the right boundary of the license plate;
and determining the positions of a row of pixel points where the effective segments scanned for the first time in all the reserved effective segments are located as the upper boundary of the license plate, and determining the positions of a row of pixel points where the effective segments scanned for the last time in all the reserved effective segments are located as the lower boundary of the license plate.
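As a non-limiting illustration of the scanning rule above, the following Python sketch keeps consecutive effective segments while each adjacent pair overlaps horizontally by more than the preset length; the (row_y, x_start, x_end) tuple layout and the name min_overlap are assumptions of this sketch:

```python
def merge_valid_segments(segments, min_overlap):
    """Scan valid segments from top to bottom, keeping consecutive
    segments while each adjacent pair overlaps horizontally by more than
    min_overlap, and stopping at the first pair that does not.
    segments: list of (row_y, x_start, x_end), ordered top to bottom."""
    if not segments:
        return []
    kept = [segments[0]]
    for prev, cur in zip(segments, segments[1:]):
        # Horizontal overlap of two segments on adjacent rows.
        overlap = min(prev[2], cur[2]) - max(prev[1], cur[1])
        if overlap < min_overlap:
            break                # stop scanning at the first weak overlap
        kept.append(cur)
    return kept
```

The retained segments feed the boundary computation sketched later in this document.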
Optionally, before determining L effective segments in the grayscale image based on the grayscale values of the K rows of pixel points included in the grayscale image, the method further includes:
and correspondingly adding the gray values of the pixel points belonging to the same column in every M adjacent rows of pixel points and then averaging, so as to merge every M adjacent rows of pixel points into one row of pixel points, where M is smaller than the total number of rows of pixel points included in the shot image and divides that total evenly.
In another aspect, a license plate location device is provided, the device comprising:
the image processing module is used for acquiring a shot image and carrying out gray processing on the shot image to obtain a gray image, wherein the shot image comprises a license plate to be positioned;
an effective segment determining module, configured to determine L effective segments in the grayscale image based on grayscale values of K rows of pixel points included in the grayscale image, where the L effective segments are all used to indicate positions where the license plate may exist in the captured image, the K and the L are positive integers, and the L is less than or equal to the K;
and the positioning module is used for positioning the license plate from the shot image based on the L effective sections in the gray image.
Optionally, the valid segment determining module includes:
the marking unit is used for marking local feature points in the gray image based on the gray value of each pixel point in the K rows of pixel points, wherein the local feature points comprise peak points or valley points;
the filtering unit is used for filtering the marked local feature points to obtain global feature points;
and the determining unit is used for determining the L effective sections in the gray level image based on the global feature points obtained after filtering.
Optionally, the marking unit is configured to:
for a target pixel point, determining the gray values of the preceding and following pixel points that belong to the same row as the target pixel point and are adjacent to it, where the target pixel point is any pixel point, other than the first and the last pixel point, in any one of the K rows;
if the gray value of the target pixel point is smaller than the gray values of both the preceding and the following pixel point, marking the target pixel point as a valley point; and if the gray value of the target pixel point is larger than the gray values of both, marking the target pixel point as a peak point.
Optionally, the filter unit is configured to:
selecting one local feature point from the marked local feature points, and executing the following processing on the selected local feature point until all the local feature points in the marked local feature points are processed:
determining the gray value of the next local characteristic point which belongs to the same row with the selected local characteristic point and is adjacent to the selected local characteristic point;
determining the difference between the gray value of the selected local feature point and the gray value of the next local feature point;
and when the determined gray value difference is smaller than a preset gray value difference, filtering out the selected local characteristic point and the next local characteristic point.
Optionally, the determining unit is configured to:
obtaining the coordinates of each global feature point obtained after filtering;
determining effective points in the global feature points obtained after filtering based on the obtained coordinates, wherein the effective points refer to points belonging to an effective segment;
and determining the line segment formed by connecting the starting effective point and the ending effective point among the effective points belonging to the same row as an effective segment, so as to obtain the L effective segments.
Optionally, the determining unit is configured to:
selecting a global feature point from the global feature points obtained after filtering, and executing the following processing on the selected global feature point until all the global feature points in the global feature points obtained after filtering are processed:
determining a designated distance based on the coordinates of the selected global feature point and the coordinates of two adjacent global feature points which belong to the same row with the selected global feature point and are adjacent to the selected global feature point, wherein the designated distance is the sum of the distances between the selected global feature point and the two adjacent global feature points;
and when the specified distance is smaller than a preset distance, determining the selected global feature point as an effective point.
Optionally, the determining unit is further configured to:
when the designated distance is larger than the preset distance and smaller than N times the preset distance, respectively determining the numbers S and T of designated pixel points between the selected global feature point and the preceding and following global feature points, where a designated pixel point is a pixel point whose gray value lies in the same preset gray range as the gray value of the selected global feature point;
when the S is larger than a preset numerical value, the T is not larger than the preset numerical value, and the difference value between the specified distance and the coordinate length corresponding to the S specified pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
when the T is larger than a preset value, the S is not larger than the preset value, and the difference value between the designated distance and the coordinate lengths corresponding to the T designated pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
and when the S and the T are both larger than the preset numerical value, determining the sum of the coordinate lengths corresponding to the S designated pixel points and the coordinate lengths corresponding to the T designated pixel points, and if the difference value between the designated distance and the sum of the coordinate lengths is smaller than the preset distance, determining the selected global feature point as an effective point.
Optionally, the positioning module comprises:
the scanning unit is used for scanning the L effective sections one by one according to the sequence from top to bottom;
if the overlap length of every two adjacent effective sections is greater than a preset length, retaining the scanned effective sections and continuing the scanning operation, and stopping the scanning operation once the overlap length of two adjacent effective sections is smaller than the preset length;
and the positioning unit is used for determining the left boundary, the right boundary, the upper boundary and the lower boundary of the license plate based on all the reserved effective sections, and positioning the license plate from the shot image according to the determined left boundary, right boundary, upper boundary and lower boundary.
Optionally, the positioning unit is configured to:
adding the coordinates of the initial effective points of all the reserved effective sections, then averaging to obtain an average initial coordinate, adding the coordinates of the terminal effective points of all the reserved effective sections, then averaging to obtain an average terminal coordinate, determining a vertical line where the average initial coordinate is located as the left boundary of the license plate, and determining a vertical line where the average terminal coordinate is located as the right boundary of the license plate;
and determining the position of the row of pixel points where the first-scanned effective segment among all the retained effective segments is located as the upper boundary of the license plate, and determining the position of the row of pixel points where the last-scanned effective segment among all the retained effective segments is located as the lower boundary of the license plate.
Optionally, the apparatus further comprises:
and the merging module is used for correspondingly adding the gray values of the pixel points belonging to the same column in every M adjacent rows of pixel points and then averaging, so as to merge every M adjacent rows of pixel points into one row of pixel points, where M is smaller than the total number of rows of pixel points included in the shot image and divides that total evenly.
In another aspect, a computer-readable storage medium is provided, having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of any of the methods described above.
The technical scheme provided by the embodiment of the application has the following beneficial effects: and acquiring a shot image including a license plate to be positioned, and performing gray processing on the shot image to obtain a gray image. And determining L effective sections in the gray image based on the gray values of K rows of pixel points included in the gray image. Because the determined L effective sections are all used for indicating the possible positions of the license plate in the shot image, the license plate can be positioned from the shot image based on the L effective sections in the gray image. Therefore, when the characters of the license plate are irregularly distributed, the problem that the license plate is inaccurately positioned due to the fact that the license plate needs to be positioned in a connected domain detection mode is avoided, and the positioning accuracy is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic illustration of a license plate according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating a license plate location method according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating a license plate location method in accordance with an exemplary embodiment;
FIG. 4 is a flow chart illustrating a license plate location method according to another exemplary embodiment;
FIG. 5 is a schematic illustration of a captured image shown in accordance with an exemplary embodiment;
FIG. 6 is a schematic illustration of another captured image shown in accordance with an exemplary embodiment;
FIG. 7 is a schematic diagram illustrating an active segment corresponding to two adjacent rows of pixel points in accordance with an exemplary embodiment;
FIG. 8 is a schematic diagram illustrating a located license plate according to an exemplary embodiment;
FIG. 9 is a schematic diagram illustrating a license plate location device according to an exemplary embodiment;
FIG. 10 is a schematic diagram illustrating another license plate location device in accordance with an exemplary embodiment;
fig. 11 shows a block diagram of a terminal 400 according to an exemplary embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before describing the license plate locating method in detail, the application scenarios, terms and implementation environments related to the present application will be briefly described.
First, a brief description is given of an application scenario related to the present application.
In daily life, with the wide spread of vehicles, automatic license plate recognition has become a global research hotspot. In practical implementations, an image of the area containing the license plate is generally captured, and the information on the license plate is then recognized using image processing technology. To recognize this information efficiently, the region corresponding to the license plate generally needs to be located in the captured image first. Since the information on a license plate is generally composed of a plurality of characters, the related art usually locates the license plate by connected-domain detection based on this characteristic. However, in many countries and regions the distribution of characters in the license plate is irregular; in such cases the distribution characteristics of the connected domains become weak, resulting in poor license plate location. For example, as shown in fig. 1, the characters (including numbers, letters and other symbols) of license plates in the European/Asia-Pacific and Middle East regions are distributed irregularly, and existing license plate recognition technology has difficulty recognizing them accurately.
Therefore, the present application provides a license plate positioning method that exploits the inherently strong contrast between the foreground and the background of a license plate. The inventive concept is as follows: the selected inherent attribute of the license plate is that its characters contrast strongly with its background, so the license plate is positioned using the peak-valley characteristics of its gray curve. The gray curve corresponding to the license plate region effectively reflects the contrast between the foreground and the background of the license plate, and the license plate region can be separated from the image according to these characteristics. The method generalizes well, requires no prior information about the license plate, is suitable for various overseas license plate types and various imaging environments, and achieves a high positioning rate (more than 99 percent). Meanwhile, by using several characteristics of the gray curve corresponding to the license plate character-string region, interference regions are filtered out and characters at equal or unequal intervals are positioned, so license plates with both equally and unequally spaced characters can be handled at the same time. That is, the method provided by the application can position the license plate regardless of whether the character distribution in the license plate is regular, which avoids the inaccurate positioning caused by connected-domain detection when the character distribution is irregular, thereby improving positioning accuracy. For specific implementations, refer to the embodiments shown in fig. 2, fig. 3 or fig. 4.
Next, terms related to the present application will be described.
Effective segment: represents a position in one row of the image where the license plate may exist; an effective segment can record information such as its left and right boundaries, average gray value, curve level difference, and number of feature points.
Adaboost algorithm: adaboost is an iterative algorithm, and the core idea thereof is to train different classifiers (weak classifiers) aiming at the same training set, and then to assemble the weak classifiers to form a stronger final classifier (strong classifier).
CNN: convolutional Neural Network, Convolutional Neural Network.
Finally, a brief introduction is made to the implementation environment to which the present application relates.
The license plate positioning method can be executed by a terminal. In practical implementations, the terminal may be a device such as a smartphone, a tablet computer or a desktop computer. Generally, the terminal is installed at an application site such as a parking lot or an electronic toll station and has an image capture function for photographing the area where the license plate is located.
Further, in a specific implementation, the terminal may be configured with a camera and realize a shooting function through the camera configured with the terminal, or the terminal may also be connected with an external camera through a data line and realize a shooting function through the external camera connected to the terminal, which is not limited in this application.
After the application scenarios and the implementation environments related to the present application are introduced, the license plate location method related to the present application is described in detail below with reference to the accompanying drawings.
Fig. 2 illustrates a flowchart of a license plate positioning method provided in an embodiment of the present application. The method positions the license plate on a grayscale image; both color captures and infrared captures can be converted to grayscale and used for positioning. The license plate positioning process generally includes three steps: marking the feature points of the gray curve, selecting license plate line segments, and merging the license plate line segments.
Specifically, as shown in fig. 2, preprocessing is performed on the grayscale image: the image is first down-sampled vertically, with every M adjacent rows of pixel points averaged and merged into one row, and the merged image is then smoothed horizontally to reduce interference from high-frequency details such as noise and texture. Next, local feature points (mainly H-type peak points and L-type valley points) are obtained from the image data of each row, and these local feature points are filtered so that only global feature points representative of the whole remain; this completes the marking of the gray-curve feature points. The global feature points effectively show the trend of the gray curve, and this trend reflects the distribution characteristics of the foreground and the background of the license plate.
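For illustration, the horizontal smoothing mentioned above might look like the following Python/numpy sketch; the patent does not specify a particular filter, so the box filter and the function name smooth_rows are assumptions of this sketch:

```python
import numpy as np

def smooth_rows(gray: np.ndarray, k: int = 5) -> np.ndarray:
    """Horizontally smooth each row to suppress noise and fine texture
    before feature-point marking. A k-tap box filter is one simple
    choice; the document leaves the filter unspecified."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, gray)
```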
Then, the license plate line segments are selected. The global feature points that represent the trend of the gray curve in the license plate region jump with consistent amplitude, and, apart from the large gaps between characters, the horizontal intervals between the remaining global feature points are uniform. The gray curve can therefore be divided into effective segments according to feature information such as the distance between each global feature point and its adjacent global feature points, the gray value, and the gray difference, and the effective segments corresponding to the license plate, which may be called license plate line segments, are obtained. For a global feature point that has a large horizontal interval to an adjacent global feature point but is separated from it by a continuous run of similar gray values, an attribute judgment of the left and right global feature points can be added; the specific implementation is described below.
Finally, the license plate line segments are merged to obtain a positioning block. Merging the effective segments involves two tasks: 1. merging vertically adjacent license plate line segments to form a candidate license plate region; 2. confirming the upper, lower, left and right boundaries of the candidate license plate region to obtain the positioning block. In practical implementations, the license plate line segments can be merged according to the overlap regions of vertically adjacent license plate line segments. In addition, the four boundaries of the license plate region can be determined from the rows containing the uppermost and lowermost license plate line segments in the candidate region, together with the minimum left boundary and the maximum right boundary of those line segments, so that the license plate region can be located in the grayscale image based on these four boundaries.
Fig. 3 is a flowchart illustrating a license plate location method according to an exemplary embodiment, where the license plate location method is applied to the terminal, and the method may include the following implementation steps:
step 101: and acquiring a shot image and carrying out gray processing on the shot image to obtain a gray image, wherein the shot image comprises a license plate to be positioned.
Step 102: determining L effective sections in the gray image based on the gray values of K rows of pixel points included in the gray image, wherein the L effective sections are all used for indicating the possible positions of the license plate in the shot image, the K and the L are positive integers, and the L is smaller than or equal to the K.
Step 103: and based on the L effective sections in the gray level image, positioning the license plate from the shot image.
In the embodiment of the application, a shot image including a license plate to be positioned is obtained, and gray processing is performed on the shot image to obtain a gray image. And determining L effective sections in the gray image based on the gray values of K rows of pixel points included in the gray image. Because the determined L effective sections are all used for indicating the possible positions of the license plate in the shot image, the license plate can be positioned from the shot image based on the L effective sections in the gray image. Therefore, when the characters of the license plate are irregularly distributed, the problem that the license plate is inaccurately positioned due to the fact that the license plate needs to be positioned in a connected domain detection mode is avoided, and the positioning accuracy is improved.
Optionally, determining L valid segments in the grayscale image based on the grayscale values of the K rows of pixel points included in the grayscale image includes:
marking local feature points in the gray image based on the gray value of each pixel point in the K rows of pixel points, wherein the local feature points comprise peak points or valley points;
filtering the marked local feature points to obtain global feature points;
and determining the L effective sections in the gray-scale image based on the global feature points obtained after filtering.
Optionally, marking local feature points in the grayscale image based on the grayscale value of each pixel point in the K rows of pixel points, including:
for a target pixel point, determining the gray values of the preceding and following pixel points that belong to the same row as the target pixel point and are adjacent to it, where the target pixel point is any pixel point, other than the first and the last pixel point, in any one of the K rows of pixel points;
if the gray value of the target pixel point is smaller than the gray values of both the preceding and the following pixel point, marking the target pixel point as a valley point; and if the gray value of the target pixel point is larger than the gray values of both, marking the target pixel point as a peak point.
Optionally, filtering the marked local feature points includes:
selecting one local feature point from the marked local feature points, and executing the following processing on the selected local feature point until all the local feature points in the marked local feature points are processed:
determining the gray value of the next local characteristic point which belongs to the same row with the selected local characteristic point and is adjacent to the selected local characteristic point;
determining the difference between the gray value of the selected local characteristic point and the gray value of the next local characteristic point;
and when the determined gray value difference is smaller than the preset gray value difference, filtering out the selected local characteristic point and the next local characteristic point.
Optionally, determining the L valid segments in the grayscale image based on the global feature points obtained after filtering includes:
obtaining the coordinates of each global feature point obtained after filtering;
determining effective points in the global feature points obtained after filtering based on the obtained coordinates, wherein the effective points refer to points belonging to an effective segment;
and determining the line segment formed by connecting the starting effective point and the ending effective point among the effective points belonging to the same row as an effective segment, so as to obtain the L effective segments.
Optionally, the determining the effective point in the filtered global feature points based on the obtained coordinates includes:
selecting a global feature point from the global feature points obtained after filtering, and executing the following processing on the selected global feature point until all the global feature points in the global feature points obtained after filtering are processed:
determining a designated distance based on the coordinates of the selected global feature point and the coordinates of two adjacent global feature points which belong to the same row with the selected global feature point and are adjacent to the selected global feature point, wherein the designated distance is the sum of the distances between the selected global feature point and the two adjacent global feature points;
and when the designated distance is smaller than the preset distance, determining the selected global feature point as a valid point.
Optionally, after determining the specified distance based on the coordinates of the selected global feature point and the coordinates of two preceding and following global feature points that belong to the same row as the selected global feature point and are adjacent to the selected global feature point, the method further includes:
when the designated distance is larger than the preset distance and smaller than N times the preset distance, respectively determining the numbers S and T of designated pixel points between the selected global feature point and the preceding and following global feature points, where a designated pixel point is a pixel point whose gray value lies in the same preset gray range as the gray value of the selected global feature point;
when the S is larger than a preset value, the T is not larger than the preset value, and the difference value between the specified distance and the coordinate length corresponding to the S specified pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
when the T is larger than a preset value, the S is not larger than the preset value, and the difference value between the specified distance and the coordinate length corresponding to the T specified pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
and when the S and the T are both larger than the preset numerical value, determining the sum of the coordinate lengths corresponding to the S designated pixel points and the coordinate lengths corresponding to the T designated pixel points, and if the difference value between the designated distance and the sum of the coordinate lengths is smaller than the preset distance, determining the selected global feature point as an effective point.
Optionally, locating the license plate from the captured image based on the L valid segments in the grayscale image includes:
scanning the L effective sections one by one according to the sequence from top to bottom;
if the overlap length of every two adjacent effective sections is greater than a preset length, retaining the scanned effective sections and continuing the scanning operation, and stopping the scanning operation once the overlap length of two adjacent effective sections is smaller than the preset length;
and determining a left boundary, a right boundary, an upper boundary and a lower boundary of the license plate based on all the reserved effective sections, and positioning the license plate from the shot image according to the determined left boundary, right boundary, upper boundary and lower boundary.
Optionally, determining a left boundary, a right boundary, an upper boundary and a lower boundary of the license plate based on all the remaining valid segments includes:
adding the coordinates of the initial effective points of all the reserved effective sections, then averaging to obtain an average initial coordinate, adding the coordinates of the terminal effective points of all the reserved effective sections, then averaging to obtain an average terminal coordinate, determining a vertical line where the average initial coordinate is located as the left boundary of the license plate, and determining a vertical line where the average terminal coordinate is located as the right boundary of the license plate;
and determining the position of a row of pixel points where the effective section scanned for the first time in all the reserved effective sections is located as the upper boundary of the license plate, and determining the position of a row of pixel points where the effective section scanned for the last time in all the reserved effective sections is located as the lower boundary of the license plate.
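Combining the two rules above, the boundary computation can be sketched as follows, reusing the hypothetical (row_y, x_start, x_end) segment representation from the earlier sketch: the averaged start and end x-coordinates give the left and right boundaries, and the first- and last-scanned segments give the upper and lower boundaries.

```python
def plate_boundaries(segments):
    """Derive the four license plate boundaries from the retained valid
    segments. segments: non-empty list of (row_y, x_start, x_end),
    ordered from the first-scanned (top) to the last-scanned (bottom)."""
    left = sum(seg[1] for seg in segments) / len(segments)   # avg start x
    right = sum(seg[2] for seg in segments) / len(segments)  # avg end x
    top = segments[0][0]      # row of the first-scanned segment
    bottom = segments[-1][0]  # row of the last-scanned segment
    return left, right, top, bottom
```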
Optionally, before determining L effective segments in the grayscale image based on the grayscale values of K rows of pixel points included in the grayscale image, the method further includes:
and correspondingly adding the gray values of the pixel points belonging to the same column in every M adjacent rows of pixel points and then averaging, so as to merge every M adjacent rows of pixel points into one row of pixel points, where M is smaller than the total number of rows of pixel points included in the shot image and divides that total evenly.
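A minimal numpy sketch of this row-merging step, under the stated constraint that M divides the total row count evenly (the function name merge_rows is an assumption of this sketch):

```python
import numpy as np

def merge_rows(gray: np.ndarray, m: int) -> np.ndarray:
    """Column-wise average every m adjacent rows so that each group of m
    rows collapses into a single row."""
    h, w = gray.shape
    assert h % m == 0, "m must divide the number of rows evenly"
    return gray.reshape(h // m, m, w).mean(axis=1)
```

With m = 2, for instance, a 480-row image becomes 240 rows, which both speeds up the subsequent row-wise scanning and suppresses vertical noise.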
All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present application, and the present application embodiment is not described in detail again.
Fig. 4 is a flowchart illustrating a license plate location method according to another exemplary embodiment, where this embodiment illustrates that the license plate location method is applied to the terminal, the license plate location method may include the following implementation steps:
step 201: and acquiring a shot image and carrying out gray processing on the shot image to obtain a gray image, wherein the shot image comprises a license plate to be positioned.
In daily life, in order to achieve license plate positioning, a user may install the terminal in an application scenario such as a parking lot or an electronic toll station according to actual needs, and adjust the shooting range of the terminal to photograph the area where the license plate is located, so as to obtain a shot image including the license plate to be positioned (for example, the shot image labeled 1 in fig. 5). The terminal then performs grayscale processing on the shot image.
It should be noted that, here, the obtained captured image is taken by the terminal only for example, and in another embodiment, the obtained captured image may also be taken by another terminal and then sent to the terminal, which is not limited in this application.
It is worth mentioning that in the embodiment of the application, since the terminal performs license plate positioning based on the gray level image after gray level processing, the license plate positioning method is not affected by various license plate colors, and compared with other license plate positioning methods which need to use color features, the license plate positioning method improves the effectiveness of license plate positioning.
It should be noted that, for a specific implementation process of the terminal performing the gray processing on the captured image, reference may be made to related technologies, which are not described in detail in this application.
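To make step 201 concrete, here is a minimal grayscale-conversion sketch in Python with numpy. The patent defers to conventional grayscale processing, so the BT.601 luminance weights and the function name to_gray are illustrative assumptions, not the patent's prescribed method:

```python
import numpy as np

def to_gray(bgr: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 BGR capture to an H x W grayscale image
    using the standard BT.601 luminance weights (one common choice)."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)
```

An infrared capture that is already single-channel can skip this conversion.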
Step 202: and marking local feature points in the gray image based on the gray value of each pixel point in the K rows of pixel points, wherein the local feature points comprise peak points and valley points.
It will be appreciated that the grayscale image typically includes a plurality of rows of pixel points; for ease of understanding and description, they are referred to here as K rows of pixel points. Each pixel point corresponds to a gray value in the range [0, 255]. Based on the gray values of the K rows of pixel points included in the grayscale image, the terminal can determine L effective segments indicating positions where the license plate may exist; this specifically includes the implementation processes of steps 202 to 204.
It should be noted that K and L are positive integers, and since the L valid segments are all used to indicate a position where the license plate may exist in the captured image, and the size of the area occupied by the license plate in the captured image is usually smaller than or equal to the size of the captured image, L is smaller than or equal to K.
In practical implementation, the terminal processes the K rows of pixels according to the gray scale value of each pixel in the K rows of pixels to mark a peak point and a valley point in the gray scale image, where in a general case, the peak point may also be referred to as an H-type peak point, and the valley point may also be referred to as an L-type valley point. In a specific implementation, marking the peak and the valley points in the grayscale image may include the following (1) to (2) implementation processes:
(1) and determining the gray values of a front pixel point and a rear pixel point which belong to the same row as the target pixel point and are adjacent to the target pixel point according to the target pixel point, wherein the target pixel point is any one of the pixel points in any row including the K rows except the first pixel point and the last pixel point.
For example, assuming that the target pixel point is the second pixel point in the nth row of the K rows of pixel points, the terminal determines the gray value of the first pixel point and the gray value of the third pixel point in the nth row, and then compares the gray value of the second pixel point with the gray value of the first pixel point and with the gray value of the third pixel point.
(2) If the gray value of the target pixel point is smaller than the gray values of the front and the rear pixel points, marking the target pixel point as a valley point; and if the gray value of the target pixel point is greater than the gray values of the front and the rear pixel points, marking the target pixel point as a peak point.
Continuing the above example, if the gray value of the second pixel point is smaller than the gray value of the first pixel point, and the gray value of the second pixel point is also smaller than the gray value of the third pixel point, the second pixel point is marked as the valley point in the local feature point of the nth row. And if the gray value of the second pixel point is greater than that of the first pixel point and the gray value of the second pixel point is also greater than that of the third pixel point, marking the second pixel point as a peak point in the local feature point of the nth row.
Of course, if the gray value of the target pixel point is not smaller than the gray values of the front and rear pixel points at the same time, and the gray value of the target pixel point is not larger than the gray values of the front and rear pixel points at the same time, the target pixel point is not marked, that is, the target pixel point at this time does not belong to the peak point nor the valley point.
According to the above comparison and marking process, all the peak points and valley points in the grayscale image can be marked. Then, taking the horizontal axis as the pixel position and the vertical axis as the gray value, a gray curve formed by all the peak and valley points in any row of the grayscale image can be drawn based on their gray values. For example, the gray curve formed by all the peak and valley points of a certain row is shown in fig. 5; the trend of this curve reflects the distribution characteristics of the foreground and the background of the license plate.
It is worth mentioning that, for the target pixel point, the peak point and the valley point of each line are marked by comparing the gray values of the adjacent front and rear pixel points, so that the points which cannot reflect the trend of the gray curve in each line are filtered, and the gray curve is effectively expressed.
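As an illustration of the marking rule in step 202, the following Python sketch scans one row of gray values and collects peak and valley indices; the function name and list representation are assumptions for illustration only:

```python
import numpy as np

def mark_local_features(row: np.ndarray):
    """Mark peak (H-type) and valley (L-type) points in one row of gray
    values; the first and last pixels are skipped, as the method requires."""
    peaks, valleys = [], []
    for i in range(1, len(row) - 1):
        if row[i] > row[i - 1] and row[i] > row[i + 1]:
            peaks.append(i)      # strictly greater than both neighbors
        elif row[i] < row[i - 1] and row[i] < row[i + 1]:
            valleys.append(i)    # strictly smaller than both neighbors
    return peaks, valleys
```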
Step 203: and filtering the marked local feature points to obtain global feature points.
As can be seen from fig. 5, some points cannot clearly represent the trend characteristics of the gray scale curve, and therefore, in order to fully highlight the trend characteristics of the gray scale curve and reduce the subsequent processing amount of the feature points, the marked local feature points are usually required to be filtered to remove those local feature points which cannot clearly represent the trend of the curve.
In a specific implementation, the process of filtering the marked local feature points may include: selecting a local feature point from the marked local feature points, and executing the following processing on the selected local feature point until all the local feature points in each marked local feature point are processed:
determining the gray value of the next local feature point which belongs to the same row with the selected local feature point and is adjacent to the selected local feature point, determining the difference between the gray value of the selected local feature point and the gray value of the next local feature point, and filtering the selected local feature point and the next local feature point when the difference between the determined gray values is smaller than the preset gray difference.
The preset gray level difference can be set by a user according to actual needs in a self-defined mode, and can also be set by the terminal in a default mode, which is not limited in the application.
For example, referring to fig. 5, a local feature point a is selected from the local feature points in the nth row, where the local feature point a is a peak point. And the terminal determines the difference between the gray values of the local characteristic point A and the adjacent next local characteristic point B, and the local characteristic point B is a valley point without difficult understanding. If the difference between the gray value of the local feature point a and the next adjacent local feature point B is smaller than the preset gray difference, it indicates that neither the local feature point a nor the next local feature point B can highlight the trend of the gray curve, and therefore, both the local feature point a and the next local feature point B can be filtered.
According to the implementation process, the marked local feature points can be filtered to obtain global feature points. For example, as shown in fig. 6, the gray scale curves formed by the global feature points belonging to the same row in the filtered global feature points can effectively represent the trend of the curve feature.
Therefore, some points which can not obviously embody the trend characteristics of the gray scale curve are filtered, the trend characteristics of the gray scale curve can be fully highlighted, the subsequent processing amount of the characteristic points is reduced, and the processing efficiency is improved.
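The filtering in step 203 can be sketched as below, assuming feature points are represented by their column indices within one row; min_diff stands in for the preset gray difference, whose actual value the text leaves to the user or a terminal default:

```python
import numpy as np

def filter_feature_points(indices, row: np.ndarray, min_diff: int = 20):
    """Repeatedly drop adjacent pairs of local feature points whose
    gray-value difference is below min_diff; the survivors are the
    global feature points."""
    kept = sorted(indices)
    changed = True
    while changed:
        changed = False
        for i in range(len(kept) - 1):
            # Cast to int to avoid uint8 wraparound when subtracting.
            if abs(int(row[kept[i]]) - int(row[kept[i + 1]])) < min_diff:
                del kept[i:i + 2]  # filter out both points of the pair
                changed = True
                break              # re-scan after the deletion
    return kept
```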
Step 204: and determining L effective sections in the gray level image based on the global feature points obtained after filtering.
As can be seen from fig. 6, the amplitude variation of the global feature points on the gray curve corresponding to the license plate region has a certain regularity: except for the large gaps between characters, the horizontal intervals of the remaining global feature points are relatively uniform, and every two adjacent global feature points are relatively close to each other. Therefore, in the embodiment of the application, the terminal may determine the L valid segments for indicating the possible positions of the license plate according to the distance features between global feature points belonging to the same row among the filtered global feature points. Specifically, determining the L valid segments in the grayscale image based on the filtered global feature points may include the following (3) to (5):
(3) and acquiring the coordinates of each global feature point obtained after filtering.
It should be noted that, in practical implementation, a rectangular coordinate system may be established in the grayscale image, for example, the rectangular coordinate system may be established with the lower left corner of the grayscale image as an origin, and the application does not limit the establishment process of the rectangular coordinate system.
Therefore, each global feature point corresponds to its own coordinate, that is, the terminal can obtain the coordinate of each global feature point obtained after filtering. Further, the terminal may determine a distance between two adjacent global feature points belonging to the same row according to coordinates of the two adjacent global feature points.
(4) And determining effective points in the global feature points obtained after filtering based on the obtained coordinates, wherein the effective points refer to points belonging to the effective segments so as to obtain the L effective segments.
In a specific implementation, one global feature point is selected from the global feature points obtained after filtering, and the following processing is performed on the selected global feature point until all the global feature points in the global feature points obtained after filtering are processed:
and determining a designated distance based on the coordinates of the selected global feature point and the coordinates of two adjacent global feature points which belong to the same row with the selected global feature point and are adjacent to the selected global feature point, wherein the designated distance is the sum of the distances between the selected global feature point and the two adjacent global feature points, and when the designated distance is smaller than a preset distance, determining the selected global feature point as an effective point.
The preset distance can be set by a user according to actual needs in a self-defined mode, and can also be set by the terminal in a default mode.
For example, referring to fig. 6, assuming that the selected global feature point is the global feature point D, and the two previous and next global feature points which belong to the same row and are adjacent to the global feature point D are C and E, respectively, then based on the coordinates of the global feature points C, D and E, the distance between the global feature point C and the global feature point D is determined to be x1, and the distance between the global feature point D and the global feature point E is determined to be x2, so that the designated distance is determined to be x1 + x2.
As described above, since the horizontal intervals on the gray curve corresponding to the license plate region are relatively uniform and two adjacent global feature points belonging to the same row are relatively close to each other, when the sum x1 + x2 of the distances between the selected global feature point D and the two adjacent global feature points is smaller than the preset distance, the selected global feature point is a point belonging to the region where the license plate is located, and at this time it may be determined to be an effective point.
Further, when the designated distance is greater than the preset distance and less than N times the preset distance, the numbers S and T of designated pixel points located between the selected global feature point and each of the two adjacent global feature points are determined, where a designated pixel point is a pixel point whose gray value lies in the same preset gray range as the gray value of the selected global feature point. When S is greater than a preset value, T is not greater than the preset value, and the difference between the designated distance and the coordinate length corresponding to the S designated pixel points is smaller than the preset distance, the selected global feature point is determined to be an effective point. When T is greater than the preset value, S is not greater than the preset value, and the difference between the designated distance and the coordinate length corresponding to the T designated pixel points is smaller than the preset distance, the selected global feature point is determined to be an effective point. When both S and T are greater than the preset value, the sum of the coordinate lengths corresponding to the S designated pixel points and the T designated pixel points is determined, and if the difference between the designated distance and this sum is smaller than the preset distance, the selected global feature point is determined to be an effective point.
Here, N is an integer greater than 1 and may generally be set to 2. In addition, the preset value may be customized by the user according to actual requirements, or set by default by the terminal, which is not limited in the embodiment of the present application.
In a practical application scenario, the characters in the license plate may be distributed irregularly, in which case a large gap may exist between characters, for example the gap labeled 21 in fig. 6. If the selected global feature point lies beside such a large gap, the terminal may detect that the sum of the distances between the selected global feature point and the two adjacent global feature points is greater than the preset distance but less than N times the preset distance. In fact, a selected global feature point beside a large gap is a valid point; therefore, in order to avoid missing valid points, when the sum of the distances is greater than the preset distance and less than N times the preset distance, a further judgment on the selected global feature point is needed.
In this embodiment of the present application, the length of the large gap may be subtracted, which is equivalent to folding the large gap up along the horizontal direction; then, the remaining sum of the distances between the selected global feature point and the two adjacent global feature points is compared with the preset distance to determine whether the selected global feature point is a valid point.

In a specific implementation, the numbers of designated pixel points located between the selected global feature point and each of the two adjacent global feature points may be determined and denoted S and T, respectively.
When S is greater than the preset value and T is not, a large gap exists between the selected global feature point and the previous of the two adjacent global feature points. At this time, the difference between the designated distance and the coordinate length corresponding to the S designated pixel points is determined, so that the length of the large gap is subtracted, i.e., the gap is folded up along the horizontal direction. If the difference is smaller than the preset distance, the selected global feature point is a point belonging to the area where the license plate is located, and it may be determined to be an effective point.

When T is greater than the preset value and S is not, a large gap exists between the selected global feature point and the next of the two adjacent global feature points. At this time, the difference between the designated distance and the coordinate length corresponding to the T designated pixel points is determined so as to subtract the length of the large gap; if the difference is smaller than the preset distance, the selected global feature point is a point belonging to the area where the license plate is located, and it may be determined to be an effective point.

When both S and T are greater than the preset value, large gaps exist between the selected global feature point and both adjacent global feature points. At this time, the sum of the coordinate length corresponding to the S designated pixel points and that corresponding to the T designated pixel points is determined, and the difference between the designated distance and this sum is computed, so that the lengths of both large gaps are subtracted, i.e., both gaps are folded up along the horizontal direction. If the difference is smaller than the preset distance, the selected global feature point is a point belonging to the area where the license plate is located, and it may be determined to be an effective point.
It is worth mentioning that subtracting the length of the large gap before comparing the sum of the distances between the selected global feature point and the two adjacent global feature points with the preset distance avoids missing effective points, and thus improves the accuracy of the effective-point judgment.
For example, with continued reference to fig. 6, assume that the selected global feature point is the global feature point F and the preset distance is 2; the distance between the global feature point F and the previous global feature point E is x3, and the distance between the global feature point F and the next global feature point G is x4. The sum of the distances between the selected global feature point and the two adjacent global feature points is then x3 + x4.
If x3 + x4 is greater than the preset distance and less than 2 times the preset distance, the terminal determines the numbers of designated pixel points, i.e., pixel points whose gray values lie in the same preset gray range as that of the global feature point F, located between the global feature point F and each of the two adjacent global feature points E and G, and records them as S and T, respectively.
In this example it is found that T is greater than the preset value while S is not, so the terminal determines the coordinate length corresponding to the T pixel points; denoting this length l, the terminal subtracts it from the designated distance. If the difference x3 + x4 - l is smaller than the preset distance, the global feature point F is determined to be a valid point; otherwise, it may be determined not to be a valid point.
In addition, it should be noted that when the sum of the distances between the selected global feature point and the two adjacent global feature points is greater than N times the preset distance, it may be determined that the selected global feature point is not a valid point, that is, it does not belong to any valid segment.
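Putting (4) together, the validity test for one global feature point might be sketched as below. All parameter values are hypothetical, the coordinate length of a gap is approximated by its pixel count, and a point falling between the preset distance and N times the preset distance without a qualifying gap is conservatively treated as invalid, since the description declares validity only in the three listed cases:

```python
def is_valid_point(points, grays, k, preset_dist=14, n=2,
                   preset_count=3, gray_tol=10):
    """points: ordered x-coordinates of one row's global feature points.
    grays: sequence mapping x-coordinate -> gray value of that pixel.
    k: index of the selected point. All thresholds are hypothetical."""
    if k == 0 or k == len(points) - 1:
        return False                          # needs a neighbour on each side
    prev_x, cur_x, next_x = points[k - 1], points[k], points[k + 1]
    spec = (cur_x - prev_x) + (next_x - cur_x)   # the designated distance

    if spec < preset_dist:                    # simple case: close neighbours
        return True
    if spec >= n * preset_dist:               # too far apart: not a valid point
        return False

    def designated(a, b):
        # pixels strictly between a and b whose gray value lies in the same
        # preset gray range as the selected point (here: within gray_tol)
        return [x for x in range(a + 1, b)
                if abs(grays[x] - grays[cur_x]) <= gray_tol]

    s_pixels = designated(prev_x, cur_x)      # left-side designated pixels (S)
    t_pixels = designated(cur_x, next_x)      # right-side designated pixels (T)
    fold = 0                                  # total gap length to fold away
    if len(s_pixels) > preset_count:
        fold += len(s_pixels)
    if len(t_pixels) > preset_count:
        fold += len(t_pixels)
    return fold > 0 and (spec - fold) < preset_dist
```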
(5) And determining a line segment formed by connecting the starting effective point and the end effective point belonging to the same row of effective points as an effective segment.
According to the above implementation process, the terminal may determine all the valid points in the captured image. The valid points belonging to one row include a start valid point and an end valid point; the terminal connects the start valid point and the end valid point of the valid points belonging to the same row, and the resulting line segment is determined as a valid segment, thereby obtaining the L valid segments. For example, with continued reference to fig. 6, the valid segment corresponding to a certain row of pixel points is shown as 22.
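In Python terms, step (5) reduces to taking the extreme valid points of each row; the dictionary layout is illustrative:

```python
def valid_segments(valid_points_by_row):
    """valid_points_by_row: dict mapping row index -> list of x-coordinates
    of that row's valid points. Returns row -> (x_start, x_end) segments."""
    segments = {}
    for row, xs in valid_points_by_row.items():
        if xs:                  # rows without valid points yield no segment
            segments[row] = (min(xs), max(xs))
    return segments
```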
It should be noted that, the foregoing steps 202 to 204 are used to implement a step of determining L effective segments in the grayscale image based on the grayscale values of K rows of pixel points included in the grayscale image. Wherein, the L effective sections are all used for indicating the possible positions of the license plate in the shot image. Therefore, based on the L effective sections, the position of the license plate can be located in the gray level image, the license plate does not need to be located in a connected domain detection mode, and the locating accuracy is improved.
Further, in order to increase the subsequent processing speed, before determining the L valid segments in the grayscale image based on the gray values of the K rows of pixel points, the terminal may first merge groups of adjacent rows of pixel points into single rows, thereby obtaining a grayscale image with fewer rows.
In a specific implementation, the row-merging process in the grayscale image may include: correspondingly adding the gray values of the pixel points belonging to the same column within every M adjacent rows and then averaging, so as to merge every M adjacent rows of pixel points into one row, where M is smaller than the total number of rows of pixel points included in the captured image and divides that total evenly.
M may be customized by the user according to actual requirements; generally, it can be chosen according to the height of the captured image. Specifically, the total number of rows of pixel points in the captured image is determined by the image height, and M is then set, according to actual requirements, to a value that divides this total evenly.
For example, the total number of rows of pixel points of the captured image is 40, and M may be set to 4. At this time, the terminal adds the gray values of the pixels located in the same column in every four adjacent rows and then averages the gray values, so that every four adjacent rows of pixels are combined into one row of pixels, and row combination processing is achieved, namely the gray value of each pixel in each combined row is actually the average value of the gray values of the four pixels on the corresponding column in the four adjacent rows.
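With NumPy, the row merging can be sketched in one reshape; the function name is illustrative:

```python
import numpy as np

def merge_rows(gray, m):
    """gray: H x W array of gray values; m must divide H evenly, as required
    above. Each group of m adjacent rows is averaged column-wise into one
    row, so a 40 x W image with m = 4 becomes a 10 x W image."""
    h, w = gray.shape
    assert h % m == 0, "m must divide the total number of rows"
    return gray.reshape(h // m, m, w).mean(axis=1)
```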
At this time, in the execution process of determining L effective segments in the grayscale image based on the grayscale values of the K rows of pixel points included in the grayscale image, the terminal may perform processing based on the grayscale image after the row merging processing, so that the processing amount of the rows may be reduced, and the operation efficiency of the terminal may be improved.
Further, after the lines in the grayscale image are merged, the grayscale image may be subjected to horizontal smoothing to reduce interference of high-frequency details such as noise and texture in the grayscale image. The specific implementation of performing the horizontal smoothing processing on the grayscale image may refer to related technologies, which are not described in detail in this embodiment of the present application.
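The patent leaves the horizontal smoothing to related technologies; one common choice is a per-row moving average, sketched here with a hypothetical window width:

```python
import numpy as np

def smooth_rows(gray, k=5):
    """Horizontally smooth each row with a length-k moving average
    (k is a hypothetical window width) to suppress noise and fine texture."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='same'), 1, gray)
```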
Step 205: and positioning the license plate from the shot image based on the L effective sections in the gray image.
In a specific implementation, the locating the license plate from the captured image based on the L valid segments in the grayscale image may include: and scanning the L effective sections one by one according to the sequence from top to bottom, if the overlapping length of every two adjacent effective sections is greater than the preset length, keeping the scanned effective sections and continuing to execute the scanning operation until the overlapping length of the two adjacent effective sections is less than the preset length, and stopping the scanning operation. And determining a left boundary, a right boundary, an upper boundary and a lower boundary of the license plate based on all the reserved effective sections, and positioning the license plate from the shot image according to the determined left boundary, right boundary, upper boundary and lower boundary.
It is worth mentioning that the license plate is positioned by scanning the L effective sections one by one, and the L effective sections are all used for indicating the possible positions of the license plate in the shot image, so that the license plate can be accurately positioned from the shot image even if the characters of the license plate are irregularly distributed, and the accuracy of license plate positioning is improved.
In a specific implementation, the terminal may number the row in which each effective segment is located; for example, the first row containing an effective segment is numbered 001, the second 002, and so on. Then, the L effective segments are scanned one by one in order from top to bottom to determine the overlapping length of every two adjacent effective segments; for example, referring to fig. 7, the overlapping length of rows 001 and 002 is shown as 23 in the figure.
If the overlapping length is larger than the preset length, the two rows where the two scanned effective sections are located may belong to the area where the license plate is located, and therefore the two scanned effective sections are reserved and scanning is continued. When the overlapping length of two adjacent effective sections is smaller than the preset length, the scanned line of the next effective section does not belong to the area where the license plate is located, and therefore scanning can be stopped.
The preset length may be customized by the user according to actual needs, or set by default by the terminal, which is not limited in the embodiment of the present application.
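A simplified reading of this scan in Python; the overlap of two segments is the length of the intersection of their x-ranges, and preset_len is a hypothetical threshold:

```python
def overlap(a, b):
    """Horizontal overlap length of two (x_start, x_end) segments."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def scan_segments(segments, preset_len=30):
    """segments: (x_start, x_end) pairs ordered top to bottom. Keeps the run
    of segments whose pairwise overlap exceeds preset_len, stopping at the
    first adjacent pair whose overlap falls below it."""
    kept = []
    for cur in segments:
        if kept and overlap(kept[-1], cur) < preset_len:
            break                    # the license plate region has ended
        kept.append(cur)
    return kept
```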
Then, the terminal determines the left boundary, the right boundary, the upper boundary and the lower boundary of the license plate based on all the retained effective sections. In a specific implementation, this may include: adding the coordinates of the start effective points of all the retained effective sections and averaging to obtain an average start coordinate; adding the coordinates of the end effective points of all the retained effective sections and averaging to obtain an average end coordinate; determining the vertical line where the average start coordinate is located as the left boundary of the license plate, and the vertical line where the average end coordinate is located as the right boundary; and determining the position of the row of pixel points where the first scanned effective section among all the retained effective sections is located as the upper boundary of the license plate, and the position of the row of pixel points where the last scanned effective section is located as the lower boundary.
According to the above description, each valid segment includes a start valid point and an end valid point. In a specific implementation, the coordinates of the start valid points of all retained valid segments may be added and averaged to obtain an average start coordinate, and the vertical line on which the average start coordinate lies may be determined as the left boundary of the license plate. Similarly, the terminal can determine the right boundary of the license plate based on the coordinates of the end valid points of all the valid segments. Since the left and right boundaries are determined from the start and end valid points of every valid segment, the accuracy of locating the left and right sides of the license plate is guaranteed.
In addition, in the process of determining the upper and lower boundaries, all the valid segments that were scanned and retained belong to the license plate area, and the terminal scans the image from top to bottom. Therefore, the position of the row of pixel points where the first scanned valid segment among all the retained valid segments is located can be determined as the upper boundary of the license plate, and the position of the row of pixel points where the last scanned valid segment is located can be determined as the lower boundary. Since the upper and lower boundaries are determined from the scanned valid segments, the accuracy of the upper and lower sides of the license plate is ensured.
It should be noted that, if the gray-scale image is subjected to line merging in the foregoing execution steps, the actually determined upper boundary is the position of the first line of pixel points where the first scanned effective segment is located before merging, and the lower boundary is the position of the last line of pixel points where the last scanned effective segment is located before merging.
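A minimal sketch of the boundary computation under these rules; the tuple layout is illustrative, and the row-merging comment reflects the note above:

```python
def plate_boundaries(kept, m=1):
    """kept: (row, x_start, x_end) triples of the retained valid segments,
    in scan order; m is the row-merging factor (1 if no merging was done).
    Returns (left, right, top, bottom) in original-image coordinates."""
    left = sum(s[1] for s in kept) / len(kept)    # mean start x -> left edge
    right = sum(s[2] for s in kept) / len(kept)   # mean end x   -> right edge
    # If rows were merged by a factor m, map merged row indices back to the
    # original image: first row of the first group, last row of the last group.
    top = kept[0][0] * m
    bottom = kept[-1][0] * m + (m - 1)
    return left, right, top, bottom
```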
It is understood that after the four boundaries of the license plate are located, since the four boundaries can uniquely locate one area block, the terminal can locate the license plate from the shot image according to the four boundaries, for example, the located license plate is as shown in fig. 8.
In the embodiment of the application, a shot image including a license plate to be positioned is obtained, and gray processing is performed on the shot image to obtain a gray image. And determining L effective sections in the gray image based on the gray values of K rows of pixel points included in the gray image. Because the determined L effective sections are all used for indicating the possible positions of the license plate in the shot image, the license plate can be positioned from the shot image based on the L effective sections in the gray image. Therefore, when the characters of the license plate are irregularly distributed, the problem that the license plate is inaccurately positioned due to the fact that the license plate needs to be positioned in a connected domain detection mode is avoided, and the positioning accuracy is improved.
Fig. 9 is a schematic structural diagram illustrating a license plate location device according to an exemplary embodiment, where the license plate location device may be implemented by software, hardware, or a combination of the two. The license plate positioning device can comprise:
the image processing module 310 is configured to obtain a captured image and perform gray processing on the captured image to obtain a gray image, where the captured image includes a license plate to be positioned;
an effective segment determining module 320, configured to determine L effective segments in the grayscale image based on grayscale values of K rows of pixel points included in the grayscale image, where the L effective segments are all used to indicate positions where the license plate may exist in the captured image, the K and the L are positive integers, and the L is less than or equal to the K;
the positioning module 330 is configured to position the license plate from the captured image based on the L valid segments in the grayscale image.
Optionally, the valid segment determining module 320 includes:
the marking unit is used for marking local feature points in the gray image based on the gray value of each pixel point in the K rows of pixel points, wherein the local feature points comprise peak points or valley points;
the filtering unit is used for filtering the marked local feature points to obtain global feature points;
and the determining unit is used for determining the L effective sections in the gray level image based on the global feature points obtained after filtering.
Optionally, the marking unit is configured to:
aiming at a target pixel point, determining the gray values of a front pixel point and a rear pixel point which belong to the same row as the target pixel point and are adjacent to the target pixel point, wherein the target pixel point is any one of the pixel points in any row, except the first pixel point and the last pixel point, of the K rows of pixel points;
if the gray value of the target pixel point is smaller than the gray values of the front and the rear pixel points, marking the target pixel point as a valley point; and if the gray value of the target pixel point is greater than the gray values of the front and the rear pixel points, marking the target pixel point as a peak point.
Optionally, the filtering unit is configured to:
selecting one local feature point from the marked local feature points, and executing the following processing on the selected local feature point until all the local feature points in the marked local feature points are processed:
determining the gray value of the next local characteristic point which belongs to the same row with the selected local characteristic point and is adjacent to the selected local characteristic point;
determining the difference between the gray value of the selected local characteristic point and the gray value of the next local characteristic point;
and when the determined gray value difference is smaller than the preset gray value difference, filtering out the selected local characteristic point and the next local characteristic point.
Optionally, the determining unit is configured to:
obtaining the coordinates of each global feature point obtained after filtering;
determining effective points in the global feature points obtained after filtering based on the obtained coordinates, wherein the effective points refer to points belonging to an effective segment;
and determining a line segment formed by connecting the starting effective point and the end effective point belonging to the same row of effective points as the effective segment to obtain the L effective segments.
Optionally, the determining unit is configured to:
selecting a global feature point from the global feature points obtained after filtering, and executing the following processing on the selected global feature point until all the global feature points in the global feature points obtained after filtering are processed:
determining a designated distance based on the coordinates of the selected global feature point and the coordinates of two adjacent global feature points which belong to the same row with the selected global feature point and are adjacent to the selected global feature point, wherein the designated distance is the sum of the distances between the selected global feature point and the two adjacent global feature points;
and when the designated distance is smaller than the preset distance, determining the selected global feature point as a valid point.
Optionally, the determining unit is further configured to:
when the designated distance is greater than a preset distance and less than N times the preset distance, respectively determining the numbers S and T of designated pixel points located between the selected global feature point and the two adjacent global feature points, wherein a designated pixel point is a pixel point whose gray value lies in the same preset gray range as the gray value of the selected global feature point;
when the S is larger than a preset value, the T is not larger than the preset value, and the difference value between the specified distance and the coordinate length corresponding to the S specified pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
when the T is larger than a preset value, the S is not larger than the preset value, and the difference value between the specified distance and the coordinate length corresponding to the T specified pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
and when the S and the T are both larger than the preset numerical value, determining the sum of the coordinate lengths corresponding to the S designated pixel points and the coordinate lengths corresponding to the T designated pixel points, and if the difference value between the designated distance and the sum of the coordinate lengths is smaller than the preset distance, determining the selected global feature point as an effective point.
Optionally, the positioning module 330 includes:
the scanning unit is used for scanning the L effective sections one by one according to the sequence from top to bottom;
if the overlapping length of every two adjacent effective sections is greater than the preset length, the scanned effective sections are reserved and the scanning operation is continuously executed until the overlapping length of the two adjacent effective sections is less than the preset length after scanning, and the scanning operation is stopped;
and the positioning unit is used for determining the left boundary, the right boundary, the upper boundary and the lower boundary of the license plate based on all the reserved effective sections, and positioning the license plate from the shot image according to the determined left boundary, right boundary, upper boundary and lower boundary.
Optionally, the positioning unit is configured to:
adding the coordinates of the initial effective points of all the reserved effective sections, then averaging to obtain an average initial coordinate, adding the coordinates of the terminal effective points of all the reserved effective sections, then averaging to obtain an average terminal coordinate, determining a vertical line where the average initial coordinate is located as the left boundary of the license plate, and determining a vertical line where the average terminal coordinate is located as the right boundary of the license plate;
and determining the position of the row of pixel points where the first scanned effective section among all the retained effective sections is located as the upper boundary of the license plate, and determining the position of the row of pixel points where the last scanned effective section is located as the lower boundary of the license plate.
Optionally, referring to fig. 10, the apparatus further includes:
the merging module 340 is configured to correspondingly add the gray values of pixel points belonging to the same column within every M adjacent rows and then average them, so as to merge every M adjacent rows of pixel points into one row, where M is smaller than the total number of rows of pixel points included in the captured image and divides that total evenly.
In the embodiment of the application, a shot image including a license plate to be positioned is obtained, and gray processing is performed on the shot image to obtain a gray image. And determining L effective sections in the gray image based on the gray values of K rows of pixel points included in the gray image. Because the determined L effective sections are all used for indicating the possible positions of the license plate in the shot image, the license plate can be positioned from the shot image based on the L effective sections in the gray image. Therefore, when the characters of the license plate are distributed irregularly, the problem that the license plate is not accurately positioned due to the fact that the license plate needs to be positioned in a connected domain detection mode is avoided, and the positioning accuracy is improved.
It should be noted that: in the method for positioning a license plate provided in the above embodiment, only the division of each functional module is used for illustration, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the license plate positioning device and the license plate positioning method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 11 shows a block diagram of a terminal 400 according to an exemplary embodiment of the present invention. The terminal 400 may be: a smartphone, a tablet, a laptop, or a desktop computer. The terminal 400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
Generally, the terminal 400 includes: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement the license plate location method provided by the method embodiments herein.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over the surface of the display screen 405. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 405 may be one, providing the front panel of the terminal 400; in other embodiments, the display screen 405 may be at least two, respectively disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display 405 may be a flexible display disposed on a curved surface or a folded surface of the terminal 400. Even further, the display screen 405 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The display screen 405 may be an LCD (Liquid Crystal Display) screen, an OLED (Organic Light-Emitting Diode) screen, or the like.
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic position of the terminal 400 for navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of Europe.
The power supply 409 is used to supply power to the various components in the terminal 400. The power source 409 may be alternating current, direct current, disposable or rechargeable. When the power source 409 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the terminal 400, and the gyro sensor 412 may cooperate with the acceleration sensor 411 to acquire a 3D motion of the terminal 400 by the user. From the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side bezel of the terminal 400 and/or a lower layer of the touch display screen 405. When the pressure sensor 413 is disposed on the side frame of the terminal 400, a user's holding signal to the terminal 400 can be detected, and the processor 401 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed at the lower layer of the touch display screen 405, the processor 401 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 414 is used for collecting a fingerprint of the user, and the processor 401 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 401 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 414 may be disposed on the front, back, or side of the terminal 400. When a physical key or vendor Logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical key or vendor Logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 405 is turned down. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
A proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the terminal 400. The proximity sensor 416 is used to collect the distance between the user and the front surface of the terminal 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 gradually decreases, the processor 401 controls the touch display screen 405 to switch from the bright screen state to the dark screen state; when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 gradually becomes larger, the processor 401 controls the touch display screen 405 to switch from the dark screen state back to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 11 is not intended to be limiting of terminal 400 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
An embodiment of the present application further provides a non-transitory computer-readable storage medium, and when an instruction in the storage medium is executed by a processor of a mobile terminal, the mobile terminal is enabled to execute the license plate location method provided in the embodiment shown in fig. 2, fig. 3, or fig. 4.
The embodiment of the present application further provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the license plate location method provided in the embodiment shown in fig. 2, fig. 3, or fig. 4.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A license plate positioning method is characterized by comprising the following steps:
acquiring a shot image and carrying out gray processing on the shot image to obtain a gray image, wherein the shot image comprises a license plate to be positioned;
marking local feature points in the gray image based on the gray value of each pixel point in K rows of pixel points included in the gray image, wherein the local feature points comprise peak points or valley points; filtering the marked local feature points to obtain global feature points; obtaining the coordinates of each global feature point obtained after filtering; selecting a global feature point from the global feature points obtained after filtering, and executing the following processing on the selected global feature point until all the global feature points in the global feature points obtained after filtering are processed:
determining a designated distance based on the coordinates of the selected global feature point and the coordinates of two global feature points which belong to the same row with the selected global feature point and are adjacent to the selected global feature point, wherein the designated distance is the sum of the distances between the selected global feature point and the two global feature points;
when the designated distance is larger than a preset distance and smaller than N times of the preset distance, respectively determining the number S and T of designated pixel points between the selected global feature point and the front and rear two global feature points, wherein the designated pixel points are pixel points of which the gray values and the gray values of the selected global feature points are in the same preset gray scale range;
when the S is larger than a preset numerical value, the T is not larger than the preset numerical value, and the difference value between the specified distance and the coordinate length corresponding to the S specified pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
when the T is larger than a preset value, the S is not larger than the preset value, and the difference value between the designated distance and the coordinate lengths corresponding to the T designated pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
when the S and the T are both larger than a preset value, determining the sum of the coordinate lengths corresponding to the S designated pixel points and the coordinate lengths corresponding to the T designated pixel points, and if the difference value between the designated distance and the sum of the coordinate lengths is smaller than the preset distance, determining the selected global feature point as an effective point; the effective point refers to a point belonging to an effective segment;
determining a line segment formed by connecting a starting effective point and an end effective point in the same row of effective points as effective segments to obtain L effective segments in the gray image, wherein the L effective segments are used for indicating the possible positions of the license plate in the shot image, K and L are positive integers, and L is less than or equal to K;
and positioning the license plate from the shot image based on the L effective sections in the gray-scale image.
2. The method of claim 1, wherein said marking local feature points in the grayscale image based on the grayscale value of each of the K rows of pixel points included in the grayscale image comprises:
aiming at a target pixel point, determining the gray values of a front pixel point and a rear pixel point which belong to the same row as the target pixel point and are adjacent to the target pixel point, wherein the target pixel point is any one of the pixel points, except the first pixel point and the last pixel point, in any row of the K rows of pixel points;
if the gray value of the target pixel point is smaller than the gray values of both the front and rear pixel points, marking the target pixel point as a valley point; and if the gray value of the target pixel point is greater than the gray values of both the front and rear pixel points, marking the target pixel point as a peak point.
3. The method of claim 1, wherein filtering the marked local feature points comprises:
selecting one local feature point from the marked local feature points, and executing the following processing on the selected local feature point until all the local feature points in the marked local feature points are processed:
determining the gray value of the next local characteristic point which belongs to the same row with the selected local characteristic point and is adjacent to the selected local characteristic point;
determining the difference between the gray value of the selected local feature point and the gray value of the next local feature point;
and when the determined gray value difference is smaller than a preset gray value difference, filtering out the selected local characteristic point and the next local characteristic point.
4. The method of claim 1, wherein after determining the specified distance based on the coordinates of the selected global feature point and the coordinates of the two preceding and following global feature points that belong to the same row as the selected global feature point and are adjacent to the selected global feature point, further comprising:
and when the specified distance is smaller than a preset distance, determining the selected global feature point as an effective point.
5. The method of claim 1, wherein said locating the license plate from the captured image based on the L valid segments in the grayscale image comprises:
scanning the L effective sections one by one according to the sequence from top to bottom;
if the overlapping length of every two adjacent effective sections is greater than the preset length, the scanned effective sections are reserved and the scanning operation is continuously executed until the overlapping length of the two adjacent effective sections is less than the preset length after scanning, and the scanning operation is stopped;
and determining a left boundary, a right boundary, an upper boundary and a lower boundary of the license plate based on all the reserved effective sections, and positioning the license plate from the shot image according to the determined left boundary, right boundary, upper boundary and lower boundary.
6. The method of claim 5, wherein determining a left boundary, a right boundary, an upper boundary, and a lower boundary of the license plate based on all valid segments retained comprises:
adding the coordinates of the initial effective points of all the reserved effective sections, then averaging to obtain an average initial coordinate, adding the coordinates of the terminal effective points of all the reserved effective sections, then averaging to obtain an average terminal coordinate, determining a vertical line where the average initial coordinate is located as the left boundary of the license plate, and determining a vertical line where the average terminal coordinate is located as the right boundary of the license plate;
and determining the positions of a row of pixel points where the effective segments scanned for the first time in all the reserved effective segments are located as the upper boundary of the license plate, and determining the positions of a row of pixel points where the effective segments scanned for the last time in all the reserved effective segments are located as the lower boundary of the license plate.
7. The method of claim 1, wherein before marking the local feature points in the gray-scale image based on the gray-scale value of each of the K rows of pixel points included in the gray-scale image, further comprising:
and correspondingly adding the gray values of the pixel points belonging to the same column within every M adjacent rows and then averaging, so as to merge every M adjacent rows of pixel points into one row, wherein M is smaller than the total number of rows of pixel points included in the captured image and divides that total evenly.
8. A license plate positioning device, the device comprising:
an image processing module, configured to acquire a captured image and perform grayscale processing on it to obtain a grayscale image, wherein the captured image includes a license plate to be located;
an effective segment determining module, configured to determine L effective segments in the grayscale image based on the gray values of K rows of pixel points included in the grayscale image, wherein the L effective segments each indicate a position where the license plate may exist in the captured image, K and L are positive integers, and L is less than or equal to K; and
a positioning module, configured to locate the license plate in the captured image based on the L effective segments in the grayscale image;
wherein the effective segment determining module comprises:
a marking unit, configured to mark local feature points in the grayscale image based on the gray value of each pixel point in the K rows of pixel points, wherein the local feature points include peak points or valley points;
a filtering unit, configured to filter the marked local feature points to obtain global feature points; and
a determining unit, configured to determine the L effective segments in the grayscale image based on the global feature points obtained after filtering;
wherein the determining unit is configured to: obtain the coordinates of each global feature point obtained after filtering; and select a global feature point from the global feature points obtained after filtering and perform the following processing on it, until all of the global feature points obtained after filtering have been processed:
determining a specified distance based on the coordinates of the selected global feature point and the coordinates of the two global feature points that belong to the same row as the selected global feature point and are adjacent to it on either side, wherein the specified distance is the sum of the distances between the selected global feature point and those two global feature points;
when the specified distance is greater than a preset distance and smaller than N times the preset distance, respectively determining the numbers S and T of designated pixel points between the selected global feature point and the preceding and following global feature points, wherein a designated pixel point is a pixel point whose gray value lies in the same preset gray scale range as the gray value of the selected global feature point;
when S is greater than a preset value, T is not greater than the preset value, and the difference between the specified distance and the coordinate length corresponding to the S designated pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
when T is greater than the preset value, S is not greater than the preset value, and the difference between the specified distance and the coordinate length corresponding to the T designated pixel points is smaller than the preset distance, determining the selected global feature point as an effective point;
when S and T are both greater than the preset value, determining the sum of the coordinate length corresponding to the S designated pixel points and the coordinate length corresponding to the T designated pixel points, and if the difference between the specified distance and that sum is smaller than the preset distance, determining the selected global feature point as an effective point, wherein an effective point is a point belonging to an effective segment; and
determining the line segment formed by connecting the starting effective point and the ending effective point of each row of effective points as an effective segment, thereby obtaining the L effective segments.
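The case analysis performed by the determining unit (together with the short-distance case of claim 4) can be sketched as below. All four keyword constants are illustrative stand-ins for the patent's "preset distance", multiplier N, "preset value", and "preset gray scale range", and the sketch approximates the "coordinate length" of the designated pixel points by their count, which holds when those pixels are contiguous along the row:

```python
def is_effective_point(row_gray, feats, i,
                       d0=30, n=3, count_thresh=5, gray_band=30):
    """Decide whether the i-th global feature point of a row is an
    effective point, following the cases of claims 4 and 8.

    row_gray: gray values of the row; feats: sorted column indices of
    the row's global feature points; the keyword arguments are assumed
    stand-ins for the patent's preset quantities.
    """
    if i == 0 or i == len(feats) - 1:
        return False                 # needs a neighbor on each side
    p, c, q = feats[i - 1], feats[i], feats[i + 1]
    dist = (c - p) + (q - c)         # sum of distances to both neighbors
    if dist < d0:
        return True                  # claim 4: close enough on its own
    if not (d0 < dist < n * d0):
        return False

    def designated(cols):
        # Count pixels whose gray value lies in the same band as the
        # selected point's gray value ("designated pixel points").
        g = int(row_gray[c])
        return sum(1 for x in cols if abs(int(row_gray[x]) - g) <= gray_band)

    s = designated(range(p + 1, c))  # S: between previous point and c
    t = designated(range(c + 1, q))  # T: between c and next point
    if s > count_thresh and t <= count_thresh:
        return dist - s < d0         # discount the S-pixel span
    if t > count_thresh and s <= count_thresh:
        return dist - t < d0
    if s > count_thresh and t > count_thresh:
        return dist - (s + t) < d0
    return False
```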
9. The apparatus of claim 8, wherein the marking unit is configured to:
for a target pixel point, determine the gray values of the two pixel points that belong to the same row as the target pixel point and are adjacent to it on either side, wherein the target pixel point is any pixel point, other than the first and the last, in any of the K rows; and
if the gray value of the target pixel point is smaller than the gray values of both adjacent pixel points, mark the target pixel point as a valley point; if the gray value of the target pixel point is greater than the gray values of both adjacent pixel points, mark the target pixel point as a peak point.
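A minimal sketch of the marking unit's peak/valley test (claim 9), applied to one row of gray values; the first and last pixels are skipped exactly as the claim requires:

```python
def mark_local_features(row_gray):
    """Mark peaks and valleys in one row (claim 9).

    A pixel strictly brighter than both horizontal neighbors is a peak;
    strictly darker than both is a valley.
    Returns (peak_columns, valley_columns).
    """
    peaks, valleys = [], []
    for x in range(1, len(row_gray) - 1):
        left, cur, right = row_gray[x - 1], row_gray[x], row_gray[x + 1]
        if cur > left and cur > right:
            peaks.append(x)
        elif cur < left and cur < right:
            valleys.append(x)
    return peaks, valleys
```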
10. The apparatus of claim 8, wherein the filtering unit is configured to:
select a local feature point from the marked local feature points and perform the following processing on it, until all of the marked local feature points have been processed:
determining the gray value of the next local feature point that belongs to the same row as the selected local feature point and is adjacent to it;
determining the difference between the gray value of the selected local feature point and the gray value of the next local feature point; and
when the determined gray value difference is smaller than a preset gray value difference, filtering out both the selected local feature point and the next local feature point.
11. The apparatus of claim 8, wherein the determining unit is further configured to:
when the specified distance is smaller than the preset distance, determine the selected global feature point as an effective point.
12. The apparatus of claim 8, wherein the positioning module comprises:
a scanning unit, configured to scan the L effective segments one by one from top to bottom, and, while the overlap length of each two adjacent effective segments is greater than a preset length, retain the scanned effective segments and continue the scanning operation, stopping the scanning operation once the overlap length of two adjacent scanned effective segments is smaller than the preset length; and
a positioning unit, configured to determine a left boundary, a right boundary, an upper boundary, and a lower boundary of the license plate based on all the retained effective segments, and locate the license plate in the captured image according to the determined left, right, upper, and lower boundaries.
13. The apparatus of claim 12, wherein the positioning unit is configured to:
add up the coordinates of the starting effective points of all the retained effective segments and average them to obtain an average starting coordinate, add up the coordinates of the ending effective points of all the retained effective segments and average them to obtain an average ending coordinate, determine the vertical line through the average starting coordinate as the left boundary of the license plate, and determine the vertical line through the average ending coordinate as the right boundary of the license plate; and
determine the position of the row of pixel points containing the first-scanned of all the retained effective segments as the upper boundary of the license plate, and determine the position of the row of pixel points containing the last-scanned of all the retained effective segments as the lower boundary of the license plate.
14. The apparatus of claim 8, further comprising:
a merging module, configured to add up the gray values of the pixel points belonging to the same column within every M adjacent rows and average them, so as to merge every M adjacent rows of pixel points into a single row, wherein M is smaller than the total number of rows of pixel points included in the captured image and evenly divides that total.
15. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 7.
CN201810502241.4A 2018-05-23 2018-05-23 License plate positioning method and device and storage medium Active CN110533019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810502241.4A CN110533019B (en) 2018-05-23 2018-05-23 License plate positioning method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810502241.4A CN110533019B (en) 2018-05-23 2018-05-23 License plate positioning method and device and storage medium

Publications (2)

Publication Number Publication Date
CN110533019A CN110533019A (en) 2019-12-03
CN110533019B (en) 2022-08-12

Family

ID=68656543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810502241.4A Active CN110533019B (en) 2018-05-23 2018-05-23 License plate positioning method and device and storage medium

Country Status (1)

Country Link
CN (1) CN110533019B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723863B * 2020-06-19 2023-06-02 Agricultural Information Institute, Chinese Academy of Agricultural Sciences Fruit tree flower identification and position acquisition method and device, computer equipment and storage medium
CN112990197A * 2021-03-17 2021-06-18 Zhejiang SenseTime Technology Development Co., Ltd. License plate recognition method and device, electronic equipment and storage medium
CN113129305B * 2021-05-18 2023-06-16 Zhejiang Dahua Technology Co., Ltd. Method and device for determining state of silk spindle, storage medium and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4817166A (en) * 1986-05-05 1989-03-28 Perceptics Corporation Apparatus for reading a license plate
US5081685A (en) * 1988-11-29 1992-01-14 Westinghouse Electric Corp. Apparatus and method for reading a license plate
CN102693431A * 2012-05-31 2012-09-26 Xinzhen Electronic Technology (Beijing) Co., Ltd. Method and device for identifying type of white number plate
CN103455815A * 2013-08-27 2013-12-18 University of Electronic Science and Technology of China Self-adaptive license plate character segmentation method in complex scene
CN103870803A * 2013-10-21 2014-06-18 Beijing University of Posts and Telecommunications Vehicle license plate recognition method and system based on coarse positioning and fine positioning fusion
CN106709530A * 2017-01-17 2017-05-24 Shanghai Advanced Research Institute, Chinese Academy of Sciences License plate recognition method based on video
CN107679534A * 2017-10-11 2018-02-09 Zhengzhou Yunhai Information Technology Co., Ltd. License plate locating method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877126B * 2009-11-19 2012-12-19 Neusoft Corporation Method and device for splitting license plate candidate area
CN102375982B * 2011-10-18 2013-01-02 Huazhong University of Science and Technology Multi-character characteristic fused license plate positioning method
US9122953B2 * 2013-04-15 2015-09-01 Xerox Corporation Methods and systems for character segmentation in automated license plate recognition applications
CN103488978B * 2013-09-26 2017-08-01 Zhejiang University of Technology License plate positioning method based on gray level jump and character projection interval mode
CN107729899B * 2016-08-11 2019-12-20 Hangzhou Hikvision Digital Technology Co Ltd License plate number recognition method and device
CN106886777B * 2017-04-11 2020-06-09 Shenzhen Yihua Computer Co., Ltd. Character boundary determining method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
License Plate Detection of Myanmar Vehicle Images Captured from the Dissimilar Environmental Conditions; Ohnmar Khin et al.; 2017 International Conference on Advanced Computing and Applications (ACOMP); 2017-12-31; full text *
Adaptive license plate correction and extraction based on character feature constraints (in Chinese); Fei Jiyou; Chinese Journal of Scientific Instrument; 2016-03-31; Vol. 37, No. 3; full text *
License plate location method based on gray distribution and character compactness features (in Chinese); Ge Erzhuang et al.; Microelectronics & Computer; 2011-10-31; Vol. 28, No. 10; full text *

Also Published As

Publication number Publication date
CN110533019A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN109829456B (en) Image identification method and device and terminal
CN111126182B (en) Lane line detection method, lane line detection device, electronic device, and storage medium
CN110490179B (en) License plate recognition method and device and storage medium
CN110795019B (en) Key recognition method and device for soft keyboard and storage medium
CN109302632B (en) Method, device, terminal and storage medium for acquiring live video picture
CN110490186B (en) License plate recognition method and device and storage medium
CN110533019B (en) License plate positioning method and device and storage medium
CN113723136B (en) Bar code correction method, device, equipment and storage medium
CN111754386B (en) Image area shielding method, device, equipment and storage medium
CN113706576A (en) Detection tracking method, device, equipment and medium
CN112396076A (en) License plate image generation method and device and computer storage medium
CN110503159B (en) Character recognition method, device, equipment and medium
CN111325701A (en) Image processing method, device and storage medium
CN110738185B (en) Form object identification method, form object identification device and storage medium
CN112749590B (en) Object detection method, device, computer equipment and computer readable storage medium
CN112052701B (en) Article taking and placing detection system, method and device
CN111586279B (en) Method, device and equipment for determining shooting state and storage medium
CN111127541B (en) Method and device for determining vehicle size and storage medium
CN111444749B (en) Method and device for identifying road surface guide mark and storage medium
CN110163192B (en) Character recognition method, device and readable medium
CN111860064A (en) Target detection method, device and equipment based on video and storage medium
CN111563402B (en) License plate recognition method, license plate recognition device, terminal and storage medium
CN110728275B (en) License plate recognition method, license plate recognition device and storage medium
CN111723615A (en) Method and device for carrying out detection object matching judgment on detection object image
CN116681746B (en) Depth image determining method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant