WO2019066642A2 - A system and method for detecting license plate - Google Patents

A system and method for detecting license plate

Info

Publication number
WO2019066642A2
WO2019066642A2 (PCT/MY2018/050064)
Authority
WO
WIPO (PCT)
Prior art keywords
window
pixels
image
mask
white
Prior art date
Application number
PCT/MY2018/050064
Other languages
French (fr)
Other versions
WO2019066642A3 (en)
Inventor
Hamam MOKAYED
Hock Woon Hon
Yan Chai HUM
Kelvin Lo Yir SIANG
Che Yon CHOO
Original Assignee
Mimos Berhad
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mimos Berhad filed Critical Mimos Berhad
Publication of WO2019066642A2 publication Critical patent/WO2019066642A2/en
Publication of WO2019066642A3 publication Critical patent/WO2019066642A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a system and method for detecting license plate. The system (100) comprises a plate detection module (20) configured to detect location of vehicle license plate by filtering noise using multi-masking technique, dynamic dilation and group-based filtering, wherein multi-masking technique generates a plurality of mask images to remove noise; wherein the dynamic dilation enhances white pixels based on strongly connected primary vertical, primary diagonal and secondary diagonal white pixels; and wherein the group-based filtering filters similar blobs based on compactness, ratio and white pixel density rules.

Description

A SYSTEM AND METHOD FOR DETECTING LICENSE PLATE
FIELD OF INVENTION
The present invention relates to a system and method for detecting license plate. More particularly, the present invention relates to a system and method for detecting license plate using multi-masking technique, dynamic dilation and group-based filtering.
BACKGROUND OF THE INVENTION
License plate detection is widely used for facilitating surveillance, law enforcement, access control and intelligent transportation monitoring with minimal human intervention. Automatic license plate recognition systems are used as a means for detecting and recognising motor vehicles. Detecting the right object from a binarised image is one of the most challenging tasks in computer vision and digital image processing, as issues such as varying illumination, camera settings, large gaps among license plate contents, additional borders, nearby noise and the number of detected plates may degrade both the accuracy and performance of the whole license plate recognition system.
An example of license plate detection is disclosed in United States Patent No. US 8509486 B2, which relates to a system and method for recognising a license plate. The system comprises a plate detection module, a character partition module, a character recognition module, and a character recombination module. The system performs license plate recognition by converting an image of a vehicle into a gray-level image, detecting the text area of said vehicle license plate image, recognising a plurality of characters from binarised character images, recombining the plurality of characters into a character string of a vehicle registration plate, outputting the character string, and receiving a next image of said vehicle captured at a different time point.
Another example of a method for detecting license plate is disclosed in the United States Patent No. US 8290213 B2 which relates to a method for locating license plate of a moving vehicle. The license plate of a moving vehicle is located by transforming a colour image of the moving vehicle into a first gray level image, performing edge detection on the first gray level image using a Sobel operator to generate a second gray level image, determining a first intermediate gradient from the plurality of gradients of the first gray level image as a threshold value, processing the second gray level image according to the threshold value to generate a third gray level image, performing a morphological operation on the third gray level image to generate a fourth gray level image, determining an edge density of the fourth gray level image, comparing the edge density with a critical edge density to confirm the existence of a license plate image on the fourth gray level image, and finally locating the fourth gray level image to locate the license plate image and display the license plate image on a screen.
Although there are many methods for detecting license plate, there is still a need for a system and method to detect license plate regardless of surrounding edges, noise, gaps among the content of the license plate, number of region of interest and number of detected cars.
SUMMARY OF INVENTION
The present invention relates to a system and method for detecting license plate. The system (100) comprises a plate detection module (20) to detect the location of a vehicle license plate by filtering noise using a multi-masking technique, dynamic dilation and group-based filtering. The multi-masking technique is a method of generating a plurality of mask images in order to remove surrounding noise from detected license plate images. The system (100) further enhances white pixels from the plurality of mask images based on strongly connected primary vertical, primary diagonal and secondary diagonal white pixels using a dynamic dilation technique. The dynamic dilation refers to a process that enhances white pixels based on strongly connected primary vertical, primary diagonal and secondary diagonal white pixels. The system (100) then groups and filters similar blobs comprising the license plate image based on compactness, ratio and white pixel density rules to exclude the surrounding noise connected to the license plate and connect all the different parts related to the same license plate. The filtered blobs of the vehicle license plate are converted into text format.
The method for detecting license plate is characterised by the steps of extracting a plurality of frames from video streaming by an image acquisition module (10); converting frames to grayscale by the plate detection module (20); performing Sobel edge detection method to obtain an original mask by the plate detection module (20); determining a strong edge mask image by the plate detection module (20); determining an edge mask image from the original mask and the strong edge mask by the plate detection module (20); performing dynamic dilation by the plate detection module (20); applying group-based filtering on output blobs by the plate detection module (20); determining a final mask by the plate detection module (20); performing blob-based dilation by the plate detection module (20); and removing non-plate images from a final mask image based on compactness, ratio and white pixel density rules by the plate detection module (20).
The method for determining the edge mask image from the original mask and the strong edge mask by the plate detection module (20) further comprises the steps of convoluting the original mask by a predetermined window size having an array of i rows by j columns of pixels, wherein i equals j; computing the density of a window; determining whether the density of the window is more or less than a predetermined value; scanning i minus one divided by two of the upper and lower pixels of a primary vertical row of the window if the density of the window is more than the predetermined threshold value; determining if all i minus one divided by two of the upper and lower pixels of the primary vertical row of the window are white; scanning i minus one pixels of a primary diagonal of the window if not all i minus one divided by two pixels of the upper and lower of the primary vertical row of the window are white; determining if all i minus one pixels of the primary diagonal of the window are white; scanning i minus one pixels of a secondary diagonal of the window if not all i minus one pixels of the primary diagonal of the window are white; determining if all i minus one pixels of the secondary diagonal of the window are white; and converting the window to a black image if not all i minus one pixels of the secondary diagonal of the window are white.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 illustrates a block diagram of a system (100) for detecting license plate according to an embodiment of the present invention.
FIG. 2 illustrates an example of a 7x7 window with primary vertical, primary diagonal and secondary diagonal.
FIG. 3 illustrates a flowchart of a method for detecting license plate according to an embodiment of the present invention.
FIG. 4 illustrates a flowchart of sub-steps for determining a strong edge mask image of step 1400 of the method of FIG. 3.
FIG. 5 illustrates a flowchart of sub-steps for obtaining an edge mask of step 1500 of the method of FIG. 3.
FIG. 6 illustrates a flowchart of sub-steps for performing dynamic dilation of step 1600 of the method of FIG. 3.
FIG. 7 illustrates a flowchart of sub-steps for performing group-based filtration of step 1700 of the method of FIG. 3.
FIG. 8 illustrates a flowchart of sub-steps for determining a final mask of step 1800 of the method of FIG. 3.
DESCRIPTION OF THE PREFERRED EMBODIMENT
A preferred embodiment of the present invention will be described herein below with reference to the accompanying drawings. In the following description, well known functions or constructions are not described in detail since they would obscure the description with unnecessary detail.
Reference is initially made to FIG. 1 which illustrates a block diagram of a system (100) for detecting license plate according to an embodiment of the present invention. The system (100) comprises an image acquisition module (10), a plate detection module (20), a plate segmentation module (30), a plate recognition module (40), a plate post-analyser module (50) and a display module (60). Generally, the system (100) analyses multiple frames of moving vehicles across a monitoring area. The system (100) detects more than one license plate in the same frame using a multi-masking technique. The multi-masking technique is a method of generating a plurality of mask images in order to remove surrounding noise from detected license plate images. The system (100) further enhances white pixels from the plurality of mask images based on strongly connected primary vertical, primary diagonal and secondary diagonal white pixels using a dynamic dilation technique. The strongly connected white pixels are determined by counting the white pixels in each of the four directions and comparing the counts with a predetermined threshold value. The primary vertical refers to the centre vertical pixel array of a window. The primary diagonal relates to pixels that lie on a diagonal extending from the top left to the bottom right of the window, whereas the secondary diagonal relates to pixels that lie on a diagonal extending from the top right to the bottom left of the window. FIG. 2 illustrates an example of a 7x7 window with primary vertical, primary diagonal and secondary diagonal. The primary vertical pixels of the window are indicated with a shaded region. The system (100) groups and filters similar blobs comprising the license plate image based on compactness, ratio and white pixel density rules to exclude the surrounding noise connected to the license plate and connect all the different parts related to the same license plate. The filtered blobs of the vehicle license plate are converted into text format.
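As an illustration of these window conventions, the sketch below indexes the primary vertical, primary diagonal and secondary diagonal of a square window. It is a minimal interpretation assuming an odd window size i, white pixels encoded as 255 and black as 0; the function name and NumPy representation are this description's, not the patent's.

```python
import numpy as np

def window_lines(window):
    """Return the primary vertical, primary diagonal and secondary diagonal
    pixel arrays of a square i x i window (i assumed odd)."""
    i = window.shape[0]
    centre = i // 2
    primary_vertical = window[:, centre]                  # centre column, top to bottom
    primary_diagonal = np.diagonal(window)                # top left to bottom right
    secondary_diagonal = np.diagonal(np.fliplr(window))   # top right to bottom left
    return primary_vertical, primary_diagonal, secondary_diagonal

# Example: a 7x7 window whose primary vertical is fully white, as in FIG. 2
w = np.zeros((7, 7), dtype=np.uint8)
w[:, 3] = 255
pv, pd, sd = window_lines(w)
print(pv.tolist(), pd.tolist(), sd.tolist())
```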
The image acquisition unit (10) is configured to acquire a plurality of continuous frames from a plurality of image capturing means.
The plate detection module (20) is connected to the image acquisition module (10) and the plate segmentation module (30). The frames from the image acquisition unit (10) are processed by the plate detection module (20) to detect the location of a license plate. The plate detection module (20) detects the location of the license plates by applying the multi-masking technique, dynamic dilation and group-based filtering in order to filter out noise associated with a standard license plate. The dynamic dilation refers to a process that enhances white pixels based on strongly connected primary vertical, primary diagonal and secondary diagonal white pixels, whereas group-based filtering refers to a technique that filters similar blobs based on compactness, ratio and white pixel density rules. The detected license plates are sent to the plate recognition module (40) for the recognition stage. The plate segmentation module (30) is configured to segment characters in the detected license plate into an individual entity. The plate segmentation module (30) is connected to the plate detection module (20) and the plate recognition module (40).
The plate recognition module (40) is configured to recognise the individual entity of the license plate into a text format of alphabet and numeric. The plate recognition module (40) is connected to the plate segmentation module (30) and the plate post-analyser module (50), wherein the plate post-analyser module (50) is further connected to the display module (60).
The plate post-analyser module (50) is configured to analyse the recognised license plate from the plate recognition module (40) to derive a final text content of the license plate that relates to an individual vehicle. The final result of the recognised license plate is sent to the display module (60).
FIG. 3 illustrates a flowchart of a method for detecting license plate according to an embodiment of the present invention. Initially, frames from a video stream are obtained by the image acquisition module (10) as in step 1100. The video stream may be received from the plurality of image capturing means. Images captured by the image capturing means are usually red, green and blue (RGB) images. Once the RGB frames of the vehicles and license plates which make up the video are captured, the frames are converted into grayscale frames as in step 1200.
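A minimal sketch of steps 1100 and 1200 using OpenCV is shown below; the video source name and the frame loop are illustrative assumptions, not part of the disclosure.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")   # hypothetical video stream source
gray_frames = []
while True:
    ok, frame_bgr = cap.read()           # step 1100: extract a frame from the video stream
    if not ok:
        break
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # step 1200: convert to grayscale
    gray_frames.append(gray)
cap.release()
```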
Once the grayscale frames are obtained, a Sobel edge detection method is performed to obtain an original mask by the plate detection module (20) as in step 1300. The Sobel edge detection method detects changes in the grayscale image by applying a Sobel operator and setting pixel values of the image to zero and non-zero values. It finds edges using the Sobel approximation to the derivative and returns edges at points where the gradient of the image is maximum. Regions with high edge variance or change in brightness are considered potential license plate regions. Next, a strong edge mask image is determined by the plate detection module (20) as in step 1400. The strong edge mask image refers to an edge image with a high density of white pixels in any one of the primary vertical, primary diagonal and secondary diagonal of a window. The step of determining the strong edge mask image as in step 1400 is further explained in relation to FIG. 4. Thereon, the edge mask image is determined based on the outputs of the Sobel edge detection and the strong edge mask image by the plate detection module (20) as in step 1500. The edge mask image refers to an image layer that accentuates desired edges of an image by selectively sharpening the edges.
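One way to realise step 1300 is sketched below with OpenCV's Sobel operator followed by a simple binarisation; the gradient threshold is an assumption, since the patent does not fix a value.

```python
import cv2
import numpy as np

def sobel_original_mask(gray, thresh=60.0):
    """Step 1300 (sketch): Sobel gradient magnitude binarised into an original
    mask, with high-gradient pixels set to white (255) and the rest to black (0)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal derivative
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical derivative
    magnitude = cv2.magnitude(gx, gy)
    return np.where(magnitude > thresh, 255, 0).astype(np.uint8)
```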
After the edge mask image is determined, the dynamic dilation is performed by the plate detection module (20) as in step 1600. The dynamic dilation is performed in two directions: either the left or right direction, and either the up or down direction of an image. Next, a group-based filtering is applied to the dilated strong edge mask image, or output blobs, by the plate detection module (20) as in step 1700. The final mask is determined from the group-based filtering by the plate detection module (20) as in step 1800. Once the final mask is obtained, a blob-based dilation is performed by the plate detection module (20) as in step 1900. Blob-based dilation refers to dilation on the blobs which are close to each other but failed to be connected by the previous dynamic dilation process. Finally, non-plate images are removed from the final mask image based on compactness, ratio and white pixel density rules by the plate detection module (20) as in step 2000.
FIG. 4 illustrates a flowchart of sub-steps of determining the strong edge mask image of step 1400 of the method of FIG. 3. Initially, the original mask is convoluted by a predetermined window size to generate a window as in step 1404, whereby the window has an array of i x j pixels, wherein i = j. For example, in FIG. 2, the original mask is convoluted by an i x j window size, wherein i = j = 7. The convolution operation produces a total number of 18 white pixels, while the rest are black pixels. The white pixels are denoted with a pixel value of 255, whereas the black pixels are denoted with a pixel value of 0. Thereon, the density of the window is computed as in step 1406, whereby the density of the window refers to the total number of white pixels over the total number of pixels in the window. It is then determined whether the density of the window is more or less than a predetermined value as in decision 1408. If the density of the window is less than the predetermined value as in decision 1408, the window is converted into a black image as in step 1410 before the process ends.
However, if the density of the window is more than the predetermined value as in decision 1408, (i - 1)/2 upper pixels and (i - 1)/2 lower pixels of the primary vertical row of the window are scanned as in step 1412. For example, in the case of the 7x7 window shown in FIG. 2, three upper pixels and three lower pixels of the primary vertical row of the window are scanned. If all three pixels of the upper and lower primary vertical row of the window image are white as in decision 1414, the white pixels are kept as strong edges as in step 1416. Else, i - 1 pixels of the primary diagonal of the window are scanned as in step 1418. Therefore, in the example of the 7x7 window shown in FIG. 2, six pixels of the primary diagonal of the window are scanned. If all i - 1 pixels of the primary diagonal are white pixels as in decision 1420, the white pixels are kept as strong edges as in step 1416.
However, if some of the i - 1 pixels of the primary diagonal of the window are not white as in decision 1420, i - 1 pixels of the secondary diagonal of the window are scanned as in step 1422. If all i - 1 pixels of the secondary diagonal of the window are white as in decision 1424, the white pixels are kept as strong edges as in step 1416. However, if not all of the i - 1 pixels of the secondary diagonal are white as in decision 1424, the window is converted into a black image with no strong edges as in step 1410.
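The per-window test of FIG. 4 can be sketched as below. The density threshold is an assumption (the patent only calls it predetermined), and excluding the centre pixel when scanning the i - 1 diagonal pixels is an interpretation of the 7x7 example, in which six of the seven diagonal pixels are scanned.

```python
import numpy as np

def strong_edge_window(window, density_thresh=0.25):
    """FIG. 4 (sketch): decide whether an i x i window of the original mask keeps
    its white pixels as strong edges, or is converted into a black window."""
    i = window.shape[0]
    centre = i // 2
    white = window == 255

    # Steps 1406-1410: density = white pixels / total pixels in the window
    if white.sum() / white.size < density_thresh:
        return np.zeros_like(window)

    # Steps 1412-1416: (i-1)/2 upper and (i-1)/2 lower pixels of the primary vertical
    column = white[:, centre]
    if column[:centre].all() and column[centre + 1:].all():
        return window

    # Steps 1418-1420: i-1 pixels of the primary diagonal (centre pixel excluded)
    if np.delete(np.diagonal(white), centre).all():
        return window

    # Steps 1422-1424: i-1 pixels of the secondary diagonal (centre pixel excluded)
    if np.delete(np.diagonal(np.fliplr(white)), centre).all():
        return window

    return np.zeros_like(window)        # step 1410: no strong edges in this window
```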
FIG. 5 illustrates a flowchart of sub-steps for obtaining an edge mask image of step 1500 of the method of FIG. 3. Initially, the strong edge mask image is obtained as in step 1504. A predetermined window size is then created as in step 1506. The original mask and the strong edge mask image are convoluted based on the predetermined window size. Next, the number of white pixels of the window is computed as in step 1508. It is then determined whether the strong edge mask value is more or less than a predetermined original mask value as in decision 1510. If the strong edge mask value is less than the predetermined original mask value as in decision 1510, the strong edge mask image is converted into a black image as in step 1512. However, if the strong edge mask value is more than the predetermined original mask value as in decision 1510, the pixel value from the original mask is obtained in order to generate an edge mask image as in step 1514.
FIG. 6 illustrates a flowchart of sub-steps for performing dynamic dilation of step 1600 of the method of FIG. 3. Initially, the white pixel density of the edge mask image is computed in the four directions of left, right, up and down as in step 1602. Thereon, a valid direction for combining the pixels is determined as in step 1604. The valid direction for combining the pixels is determined based on a high value of pixel density in the different directions of the edge mask image. Once the valid direction for combining the pixels is determined, dilation values are computed as in step 1606. Finally, dilation is applied on the strong edge mask image as in step 1608. The dynamic dilation applied to the strong edge mask image produces output blobs. Dilation expands the boundary of a blob in the image in several directions, brightens the image and fills in any holes of the image.
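A sketch of steps 1500 to 1608 is given below. The window size, the keep ratio used for the FIG. 5 comparison, and the kernel reach and anchoring used to make the FIG. 6 dilation directional are all assumptions; the patent only states that the masks are compared per window and that dilation values are computed per direction.

```python
import cv2
import numpy as np

def combine_edge_mask(original_mask, strong_mask, win=7, keep_ratio=0.5):
    """FIG. 5 (sketch): per window, keep the original-mask pixels only where the
    strong edge mask retains at least keep_ratio of the original white pixels."""
    edge_mask = np.zeros_like(original_mask)
    for y in range(0, original_mask.shape[0] - win + 1, win):
        for x in range(0, original_mask.shape[1] - win + 1, win):
            orig_win = original_mask[y:y + win, x:x + win]
            strong_win = strong_mask[y:y + win, x:x + win]
            if (strong_win == 255).sum() >= keep_ratio * (orig_win == 255).sum():
                edge_mask[y:y + win, x:x + win] = orig_win   # step 1514
            # otherwise the window stays black                 (step 1512)
    return edge_mask

def dynamic_dilation(edge_mask, strong_mask, reach=9):
    """FIG. 6 (sketch): pick one horizontal and one vertical direction from the
    white pixel density of the edge mask, then dilate the strong edge mask only
    towards those directions using anchored line kernels."""
    h, w = edge_mask.shape
    white = edge_mask == 255
    density = {                                               # step 1602
        "left": white[:, : w // 2].mean(), "right": white[:, w // 2:].mean(),
        "up": white[: h // 2, :].mean(), "down": white[h // 2:, :].mean(),
    }
    horiz = "left" if density["left"] >= density["right"] else "right"   # step 1604
    vert = "up" if density["up"] >= density["down"] else "down"

    # Steps 1606-1608: anchoring the kernel at one end grows white pixels that way
    kx = np.ones((1, reach), np.uint8)
    ky = np.ones((reach, 1), np.uint8)
    ax = (0, 0) if horiz == "left" else (reach - 1, 0)
    ay = (0, 0) if vert == "up" else (0, reach - 1)
    out = cv2.dilate(strong_mask, kx, anchor=ax)
    return cv2.dilate(out, ky, anchor=ay)
```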
FIG. 7 illustrates a flowchart of sub-steps for performing group-based filtration of step 1700 of the method of FIG. 3. Initially, the output blobs are scanned as in step 1702. Horizontal and vertical spaces among the output blobs are then computed as in step 1704. The horizontal and vertical spaces among the output blobs are computed in order to group blobs that are close to each other into the same group. Thereon, the values of the horizontal and vertical spaces are compared with a predetermined threshold value as in step 1706. Based on the predetermined threshold value, the output blobs are classified into groups as in step 1708. Each individual blob is grouped based on common properties such as width, height and ratio. The process then continues to compute group features based on compactness, ratio and white pixel density rules as in step 1710. Once the group-based features of the output blobs have been computed, a valid blob is determined as in step 1712, wherein the valid blob is referred to as a filtered image. If a valid blob is not determined as in decision 1712, the blob is converted into a black image as in step 1714. However, if a valid blob is determined as in decision 1712, the process continues to step 1800 for determining a final mask.
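A sketch of the grouping and filtering of FIG. 7 follows. The gap threshold and the ratio and density limits are assumptions (the patent only names compactness, ratio and white pixel density rules), and connected-component analysis is used here as one way of scanning the output blobs.

```python
import cv2
import numpy as np

def group_based_filtering(blob_mask, gap=15, min_ratio=2.0, max_ratio=8.0, min_density=0.3):
    """FIG. 7 (sketch): group nearby output blobs and keep only the groups whose
    bounding box looks plate-like; invalid groups are blacked out."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(blob_mask)   # step 1702
    boxes = [list(stats[i, :4]) for i in range(1, n)]                   # (x, y, w, h)

    # Steps 1704-1708: merge blobs whose horizontal and vertical gaps are small
    groups = []
    for bx, by, bw, bh in boxes:
        for g in groups:
            gx, gy, gw, gh = g
            h_gap = max(gx, bx) - min(gx + gw, bx + bw)
            v_gap = max(gy, by) - min(gy + gh, by + bh)
            if h_gap < gap and v_gap < gap:                             # step 1706
                x1, y1 = min(gx, bx), min(gy, by)
                x2, y2 = max(gx + gw, bx + bw), max(gy + gh, by + bh)
                g[:] = [x1, y1, x2 - x1, y2 - y1]
                break
        else:
            groups.append([bx, by, bw, bh])

    # Steps 1710-1714: keep groups that satisfy the ratio and density rules
    filtered = np.zeros_like(blob_mask)
    for x, y, w, h in groups:
        roi = blob_mask[y:y + h, x:x + w]
        if h > 0 and min_ratio <= w / h <= max_ratio and (roi == 255).mean() >= min_density:
            filtered[y:y + h, x:x + w] = roi                            # valid blob (1712)
    return filtered
```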
FIG. 8 illustrates a flowchart of sub-steps for determining a final mask of step 1800 of the method of FIG. 3. Initially, the edge mask and the filtered image are obtained as in step 1802. Thereon, the blobs which have the same group for both the edge mask and the filtered image are determined as in step 1804. The blobs that have the same group for both the edge mask and the filtered image are considered as blobs that are related to the same license plate. The edge mask is then retrieved to the filtered image as in step 1806. Finally, a final mask image is determined as in step 1808. The final mask image comprises a bounding box which includes the text content of the license plate of a vehicle crossing a surveillance area. The bounding box indicating the detected license plate of the vehicle is displayed at the display module (60). While embodiments of the invention have been illustrated and described, it is not intended that these embodiments illustrate and describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and various changes may be made without departing from the scope of the invention.

Claims

1. A system (100) for license plate detection comprising:
a) an image acquisition module (10) configured to acquire a plurality of continuous frames from a plurality of image capturing means;
b) a plate detection module (20) configured to detect location of license plate from the plurality of continuous frames obtained by the image acquisition module (10);
c) a plate segmentation module (30) configured to segment characters of the detected license plate from the plate detection module (20) into an individual entity;
d) a plate recognition module (40) configured to recognise the individual entity of the license plate into a text format of alphabet and numeric; and
e) a plate post-analyser module (50) configured to analyse a recognised license plate from the plate recognition module (40) to derive a final text content of the license plate that relates to an individual vehicle; characterised in that the plate detection module (20) detects location of vehicle license plate by filtering noise using multi-masking technique, dynamic dilation and group-based filtering,
wherein the multi-masking technique generates a plurality of mask images to remove noise;
wherein the dynamic dilation enhances white pixels based on strongly connected primary vertical, primary diagonal and secondary diagonal white pixels; and
wherein the group-based filtering filters similar blobs based on compactness, ratio and white pixel density rules.
2. A method for detecting a license plate is characterised by the steps of:
a) obtaining a plurality of frames from video streaming by an image acquisition module (10);
b) converting frames to grayscale by a plate detection module (20);
c) performing Sobel edge detection method to obtain an original mask by the plate detection module (20);
d) determining a strong edge mask image by the plate detection module (20);
e) determining an edge mask image from the original mask and the strong edge mask image by the plate detection module (20);
f) performing dynamic dilation by the plate detection module (20), wherein the dynamic dilation relates to a dilation process on either left or right direction and on either up or down direction of an image;
g) applying group-based filtering on output blobs by the plate detection module (20);
h) determining a final mask by the plate detection module (20);
i) performing blob-based dilation by the plate detection module (20), wherein the blob-based dilation refers to dilation on the blobs which are close to each other but failed to be connected by the previous dynamic dilation process; and
j) removing non-plate images from a final mask image based on compactness, ratio and white pixel density rules by the plate detection module (20).
3. The method as claimed in claim 2, wherein the step of determining strong edge mask images includes:
a) convoluting the original mask by a predetermined window size having an array of i rows by j columns of pixels, wherein i equals j;
b) computing density of a window;
c) determining whether the density of the window is more or less than a predetermined value;
d) scanning i minus one divided by two of upper and lower pixels of a primary vertical row of the window if the density of the window is more than the predetermined threshold value;
e) determining if all i minus one divided by two of the upper and lower pixels of the primary vertical row of the window are white;
f) scanning i minus one pixels of a primary diagonal of the window if not all i minus one divided by two pixels of the upper and lower of the primary vertical row of the window are white;
g) determining if all i minus one pixels of the primary diagonal of the window are white;
h) scanning i minus one pixels of a secondary diagonal of the window if not all i minus one pixels of the primary diagonal of the window are white;
i) determining if all i minus one pixels of the secondary diagonal of the window are white; and
j) converting the window to a black image if not all i minus one pixels of the secondary diagonal of the window are white.
4. The method as claimed in claim 3, wherein if the density of the window is less than the predetermined value, the window is converted into a black image.
5. The method as claimed in claim 3, wherein if all i minus one divided by two pixels of the upper and lower of the primary vertical row, i minus one pixels of the primary diagonal and i minus one pixels of the secondary diagonal of the window are white, the white pixels are kept as strong edges.
6. The method as claimed in claim 2, wherein the step of determining an edge mask image includes:
a) obtaining the strong edge mask image;
b) creating a predetermined window size;
c) computing number of white pixels of the window;
d) determining if a strong edge mask value is more or less than a predetermined original mask value; and
e) obtaining pixel value of the original mask to generate an edge mask image if the strong edge mask value is more than the original edge mask value.
7. The method as claimed in claim 6, wherein if the strong edge mask value is less than the original mask value, the strong edge mask image is converted into a black image.
8. The method as claimed in claim 2, wherein the step of performing dynamic dilation includes:
a) computing white pixel density in four directions of left, right, up and down of the window;
b) determining a valid direction for combining the pixels;
c) computing dilation values; and
d) applying dilation on the strong edge mask image.
9. The method as claimed in claim 2, wherein the step of performing group-based filtration on output blobs includes:
a) scanning the output blobs, wherein the output blobs refer to blobs created after dynamic dilation process;
b) computing horizontal and vertical spaces among the output blobs;
c) comparing the horizontal and vertical spaces with a predetermined threshold value;
d) classifying the output blobs in groups based on common properties;
e) computing group-based features based on compactness, ratio and white pixel density rules;
f) determining whether the output blob is a valid blob; and
g) determining a final mask if the output blob is the valid blob.
10. The method as claimed in claim 9, wherein if the output blob is not a valid blob, the output blob is converted into a black image.
11. The method as claimed in claim 2, wherein the step of determining a final mask includes:
a) obtaining the edge mask and a filtered image;
b) determining blobs which fall into the same group for both edge mask and the filtered image;
c) retrieving the edge mask to the filtered image; and
d) determining a final mask.
PCT/MY2018/050064 2017-09-29 2018-09-28 A system and method for detecting license plate WO2019066642A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI2017001430 2017-09-29
MYPI2017001430 2017-09-29

Publications (2)

Publication Number Publication Date
WO2019066642A2 true WO2019066642A2 (en) 2019-04-04
WO2019066642A3 WO2019066642A3 (en) 2019-06-27

Family

ID=65861667

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2018/050064 WO2019066642A2 (en) 2017-09-29 2018-09-28 A system and method for detecting license plate

Country Status (1)

Country Link
WO (1) WO2019066642A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866881A (en) * 2019-11-15 2020-03-06 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8290213B2 (en) 2009-12-04 2012-10-16 Huper Laboratories Co., Ltd. Method of locating license plate of moving vehicle
US8509486B2 (en) 2010-10-29 2013-08-13 National Chiao Tung University Vehicle license plate recognition method and system thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130279758A1 (en) * 2012-04-23 2013-10-24 Xerox Corporation Method and system for robust tilt adjustment and cropping of license plate images
MY174684A (en) * 2015-11-27 2020-05-07 Mimos Berhad A system and method for detecting objects from image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8290213B2 (en) 2009-12-04 2012-10-16 Huper Laboratories Co., Ltd. Method of locating license plate of moving vehicle
US8509486B2 (en) 2010-10-29 2013-08-13 National Chiao Tung University Vehicle license plate recognition method and system thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866881A (en) * 2019-11-15 2020-03-06 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN110866881B (en) * 2019-11-15 2023-08-04 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2019066642A3 (en) 2019-06-27

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18859961

Country of ref document: EP

Kind code of ref document: A2