KR101563543B1 - Lpr system for recognition of compact car and two wheel vehicle - Google Patents


Info

Publication number
KR101563543B1
KR101563543B1 (application KR1020150057097A)
Authority
KR
South Korea
Prior art keywords
vehicle
image
original image
area
smear
Prior art date
Application number
KR1020150057097A
Other languages
Korean (ko)
Inventor
김태경
김수경
김태형
Original Assignee
주식회사 넥스파시스템
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 넥스파시스템 filed Critical 주식회사 넥스파시스템
Priority to KR1020150057097A priority Critical patent/KR101563543B1/en
Application granted granted Critical
Publication of KR101563543B1 publication Critical patent/KR101563543B1/en

Classifications

    • G06K9/3258
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/015Detecting movement of traffic to be counted or controlled with provision for distinguishing between two or more types of vehicles, e.g. between motor-cars and cycles
    • G06K2209/15

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an LPR system for the recognition of a compact vehicle and a two-wheel vehicle, which can classify a vehicle as a compact vehicle or a two-wheel vehicle according to the percentage of the area the vehicle occupies in an original image, so that a fee can easily be charged. The LPR system for the recognition of a compact vehicle and a two-wheel vehicle according to an embodiment of the present invention comprises: a photographing module to photograph an original image including a license plate of a vehicle; a vehicle recognition module to receive the original image photographed by the photographing module and determine whether the vehicle photographed in the original image belongs to a prescribed classification vehicle type; and a number recognition module to receive the original image photographed by the photographing module and recognize the characters of the license plate of the vehicle. The prescribed classification vehicle types include a compact vehicle and a two-wheel vehicle.

Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to an LPR system for recognizing a light vehicle and a motorcycle. More particularly, the present invention relates to an LPR system capable of easily charging a fee by distinguishing between a light vehicle and a two-wheeled vehicle according to the ratio of the area occupied by the vehicle in the original image.

Despite the rapid growth in demand for automobiles brought about by economic growth and rising incomes, the manpower available to manage road and traffic conditions is insufficient. Therefore, efforts are being made to overcome the limitations of the existing traffic management system with its limited personnel.

As part of this effort, the development of systems for automatic recognition of vehicles (including number recognition) is actively under way. Much research based on vehicle recognition or vehicle number (character) recognition has been carried out in various fields such as traffic enforcement, traffic volume surveys, detection of stolen vehicles, control of access vehicles, and parking facility management.

In particular, the process of recognizing characters on a license plate differs from general character recognition: robust processing methods must be considered because distortion occurs due to environmental influences such as camera noise, illumination changes, and weather. However, the contents of a vehicle license plate area are limited by its inherent characteristics, and its structure is simpler than that of general character (pattern) recognition. For these reasons, License Plate Recognition (LPR) has become the most common system for efficiently handling such environmental characteristics, the increasing demand for vehicles, manpower supply, and the management of parking space resources.

LPR, or number recognition technology, was first developed in the UK in 1976. Over the following decades, as the technology evolved and market demand grew, LPR systems steadily spread through Southeast Asia and other European countries, and the LPR system market is now growing significantly in North America. Strong motivation for effective crime prevention technologies has enabled LPR systems to become active in an even wider market.

Conventional license plate recognition (LPR) or automatic LPR (ALPR) systems read vehicle license plates by applying Optical Character Recognition (OCR) to images obtained from surveillance cameras. In recent years, such systems have been operating efficiently under the name of parking management systems. Currently, LPR systems solve manpower supply, labor cost, and fee leakage problems by adjusting the parking charge to the parking environment. As demand for LPR systems constantly increases, technological change and development continue.

However, when parking tickets are used, operational problems such as loss or damage of tickets, waste of resources, and situations in which a driver has no cash or only large bills can suddenly arise.

In addition, an unmanned automation system is required to solve the problems caused by the rapid increase in private vehicles that has accompanied improvements in the quality of life: parking (space) problems, inefficient operation management, and user inconvenience.

Generally, in an unmanned automation system (hereinafter referred to as an LPR system), a loop system is mainly used as the means of detecting a vehicle; however, a non-buried detector is preferable because burial construction inconveniences nearby citizens and complicates maintenance. To replace the loop, an ultrasonic sensor or a Doppler sensor is used to detect the vehicle or classify the vehicle type.

As such, the technology and coverage of these systems are gradually expanding around the three basic components of a typical LPR system.

The first component acquires the video source from a camera or video (image); the second, the core engine of the LPR system, extracts the vehicle number from the input camera or video (image) information; and the third handles the matching process for the recognized number characters or the integration with other systems.

The present invention proposes a method of classifying the type of vehicle photographed within the LPR system structure. Conventional LPR systems have focused on technology related to license plate recognition, but incorporating a vehicle-type classification function can be an important issue in an LPR system because the charge may vary depending on the type of vehicle.

However, a conventional system that merely uses a loop coil to detect the approach of a vehicle and then recognizes the license plate characters is limited in its ability to form an unmanned system that can charge fees systematically.

Accordingly, it is required to develop an LPR system capable of recognizing various types of vehicles and applying differential charging.

Korean Application No. 10-2014-0183200
Korean Patent No. 10-0756318
Korean Patent No. 10-0638829
Korean Application No. 10-2015-0050779

Disclosure of Invention Technical Problem: The present invention has been made in order to solve the above-mentioned problems, and it is an object of the present invention to provide an LPR system capable of easily charging a fee by distinguishing between a light vehicle and a two-wheeled vehicle.

In addition, it is an object of the present invention to provide the user with an LPR system that can accurately identify light vehicles and two-wheeled vehicles, systematically charge fees according to vehicle type, prevent accidents involving blocking bars, process vehicles quickly without high computational complexity, and reduce costs.

It is another object of the present invention to provide a user with an LPR system capable of greatly improving the number recognition performance by correcting distortion of an image generated in a low illuminance area and a high illuminance area.

It is to be understood that both the foregoing general description and the following detailed description of the present invention are exemplary and explanatory and are not intended to limit the invention to the precise form disclosed.

An LPR system related to an example of the present invention for solving the above-mentioned problems includes: an imaging module for photographing an original image including a license plate of a vehicle; a vehicle recognition module that receives the original image photographed by the photographing module and determines whether the vehicle photographed in the original image belongs to a predetermined classification category; and a number recognition module that receives the original image photographed by the photographing module and recognizes the characters of the license plate of the vehicle, wherein the predetermined classification categories are a light car and a motorcycle.

The vehicle recognition module may further include: an area detection unit that detects a vehicle area, i.e., the pixels of the original image in which the vehicle appears; and a vehicle type determination unit that determines whether the vehicle is a light vehicle or a motorcycle based on a predetermined determination factor, wherein the determination factor includes the ratio of the vehicle area to the original image.

The area detection unit may extract the license plate area of the vehicle and the headlight area of the vehicle from the detected vehicle area, and may measure the full width and the full length of the vehicle using the extracted license plate area and headlight area.

In addition, the determination factor may further include the full width and the full length of the vehicle as measured by the area detection unit.

The vehicle type determination unit may determine that the vehicle is a two-wheeled vehicle when the ratio occupied by the vehicle area in the original image falls within a first range, and that the vehicle is a light vehicle when the ratio falls within a second range, wherein the upper limit of the first range is smaller than the upper limit of the second range, and the lower limit of the first range is smaller than the lower limit of the second range.

When the upper limit of the first range is larger than the lower limit of the second range, so that the ratio occupied by the vehicle area in the original image can belong to the first range and the second range at the same time, the measured full width and full length of the vehicle can be used as the determination factor.
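The two-range decision with a width/length tie-break described above can be sketched as follows. This is a minimal illustration only: the range bounds, the width threshold, and the function name are assumptions for demonstration, since the patent does not disclose concrete values.

```python
# Hypothetical sketch of the vehicle-type decision. All numeric values
# are illustrative assumptions, not values from the patent.

TWO_WHEELER_RANGE = (0.05, 0.20)   # first range: share of image area
COMPACT_RANGE = (0.15, 0.45)       # second range: overlaps the first

# A two-wheeler is assumed to be narrower than a compact car.
MAX_TWO_WHEELER_WIDTH_M = 1.0

def classify_vehicle(area_ratio, full_width_m=None):
    """Classify by the share of the original image the vehicle occupies.

    area_ratio   -- vehicle pixels / total pixels of the original image
    full_width_m -- full width measured from the plate/headlight areas,
                    used only when both ranges match
    """
    in_first = TWO_WHEELER_RANGE[0] <= area_ratio <= TWO_WHEELER_RANGE[1]
    in_second = COMPACT_RANGE[0] <= area_ratio <= COMPACT_RANGE[1]

    if in_first and in_second:
        # Overlap case: fall back to the measured full width/length.
        if full_width_m is None:
            return "unknown"
        return ("two-wheeler" if full_width_m <= MAX_TWO_WHEELER_WIDTH_M
                else "compact")
    if in_first:
        return "two-wheeler"
    if in_second:
        return "compact"
    return "other"
```

The overlap branch mirrors the claim: the area ratio alone decides the unambiguous cases, and the measured dimensions break ties.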

In addition, the number recognition module may include: a discrimination unit that classifies the original image as a low-illuminance image, a high-illuminance image, or an unprocessed image based on a discrimination factor related to the original image; and an image processor that generates a corrected image from the original image. The image processor may include: a low-illuminance image processing unit that, when the original image is classified as a low-illuminance image by the discrimination unit, generates the corrected image from the original image using an advanced clipped histogram equalization method; and a high-illuminance image processing unit that, when the original image is classified as a high-illuminance image by the discrimination unit, generates the corrected image by removing the smear generated in the original image. The target image used for character recognition of the license plate of the vehicle is the corrected image when the original image is classified as a low-illuminance image or a high-illuminance image by the discrimination unit, and is the original image when the original image is classified as an unprocessed image.

The discrimination factor may be the variance of the light intensity of the original image after conversion to gray scale. If the variance of the light intensity of the original image is higher than a predetermined threshold value, the discrimination unit classifies the original image as an unprocessed image.

When the variance of the light intensity of the original image is lower than the threshold value, the discrimination unit classifies the original image as either the low-illuminance image or the high-illuminance image according to the intensity of the light of the original image.

Further, the advanced clipped histogram equalization method may include: determining an adaptive clipping ratio for the original image; generating a clipped histogram in which the upper region of the histogram of the original image is removed according to the determined adaptive clipping ratio; and generating the corrected image by cutting at least a part of the removed upper region and redistributing it to the clipped histogram.

Further, the adaptive cutoff ratio can be determined by the following equation.

Equation (1)

[Equation (1) and its symbol definitions appear only as images in the original (112015039572028-pat00001 through pat00004); the symbols denote the adaptive clipping ratio and gray values of the original image.]

The high-illuminance image processing unit may include: a detection unit that detects the position of a first column, among the columns constituting the original image, in which the smear is generated; and a removal unit that removes the smear from the original image based on the detected position information of the first column.

The detection unit may include an extraction unit that extracts a signal distribution curve for each column constituting the original image using the original image input to the number recognition module, and a conversion unit that converts the signal distribution curve into a normal distribution curve, wherein the signal distribution curve represents the sum of the gray values of the pixels constituting each column of the original image.

The detection unit may generate a binary pattern map by comparing the normal distribution curve with a preset threshold value: in a region where the normal distribution curve is smaller than the threshold value, the binary pattern map has a value of 0, and in a region where it is larger than the threshold value, the binary pattern map has a value of 1. The region having the value of 1 in the binary pattern map corresponds to the first column of the original image.

The image processing apparatus may further include a reconstruction unit that reconstructs the original image of the first column from which the smear is removed by using a predetermined interpolation method.

The present invention can provide a user with an LPR system capable of easily charging a fee by distinguishing between a light vehicle and a two-wheeled vehicle according to the ratio of the area of the vehicle in the original image.

In addition, the present invention can provide the user with an LPR system that can accurately identify light vehicles and two-wheeled vehicles, systematically charge fees according to vehicle type, prevent accidents involving blocking bars, process vehicles quickly without high computational complexity, and reduce costs.

In addition, the present invention can provide a user with an LPR system capable of greatly improving the number recognition performance by correcting distortion of an image generated in a low illuminance area and a high illuminance area.

It should be understood, however, that the effects obtained by the present invention are not limited to the above-mentioned effects, and that other effects not mentioned will be clearly understood by those skilled in the art to which the present invention belongs.

BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate preferred embodiments of the invention and, together with the description, serve to provide a further understanding of the technical idea of the invention; they should not be construed as limiting it.
FIG. 1 shows an embodiment of a typical LPR system associated with the present invention.
FIG. 2 shows an example of a block diagram of the LPR system of the present invention.
FIGS. 3A and 3B schematically illustrate histogram clipping of an image associated with the present invention.
FIG. 4 is a schematic diagram for explaining the advanced clipped histogram equalization that can be applied to the present invention.
FIG. 5 shows an example of a signal distribution curve for each column of an image obtained by the photographing module.
FIG. 6 shows a normal distribution curve obtained by taking the signal distribution curve of FIG. 5 as an input.
FIGS. 7A to 7C show an example of the result of judging the type of vehicle according to the present invention.
FIG. 8 is a flow chart of an adaptive-probability-based low-illuminance image enhancement and smear restoration processing method related to an example of the present invention.
FIG. 9 is a flowchart of a method for determining the type of vehicle in the LPR system related to an example of the present invention.
FIGS. 10A to 10C illustrate an example of a result of processing a low-illuminance image according to the present invention.
FIGS. 11A to 11C show an example of a result of restoring smear generated in a high-illuminance image according to the present invention.

Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings. It should be noted that the embodiments described below do not unduly limit the contents of the present invention described in the claims, and not every element described in these embodiments is essential as a means for solving the problems addressed by the present invention.

A general LPR system is used to collect fees according to an operating fee system using vehicle entry/exit information, or to collect entry/exit information on unspecified vehicles where no fee system is in operation. Furthermore, it is presented as an integrated means of observing the movement of vehicles across geographically dispersed organizations. In this regard, FIG. 1 shows one embodiment of a typical LPR system associated with the present invention.

As shown in FIG. 1, the LPR system detects the license plate area of a vehicle captured by a camera, recognizes the license plate characters of the vehicle using number and character detection methods, and transmits the license plate characters to a local PC or server for management and supervision.

The number information of these vehicles is used for collecting fees according to entry, exit, and the fare system, and for analyzing vehicle traffic, regional congestion, and vehicle access by time of day. Through this, the aim is to maximize smooth operation management and user convenience.

Hereinafter, an LPR system capable of classifying a vehicle type of a vehicle photographed in an image is proposed.

<LPR System Configuration>

Hereinafter, the configuration of the LPR system according to the present invention will be described in detail with reference to the drawings.

FIG. 2 shows an example of a block diagram of the LPR system of the present invention. As shown in FIG. 2, the LPR system 100 of the present invention includes an imaging module 10, a number recognition module 20, a determination unit 30, an image processing unit 40, a vehicle recognition module 80, 90, and the like.

However, the components shown in FIG. 2 are not essential, so an LPR system 100 having more or fewer components may be implemented. Further, the components shown in FIG. 2 are interdependently connected to each other, and each component may be implemented separately or integrally as shown in FIG. 2. Hereinafter, each component will be described.

The photographing module 10 is installed in the LPR system to photograph vehicles, and photographs a vehicle located in a predetermined section to generate an original image. The vehicle is captured in the original image generated by the photographing module 10; the original image is transmitted to the number recognition module 20, where it is used for license plate recognition of the vehicle, and to the vehicle recognition module 80, where it is used for recognizing the type of the vehicle.

The photographing module 10 has a tilting device capable of tilting in the x-, y-, and z-axis directions, respectively. The photographing module 10 has a configuration capable of photographing a zoom-in image or a zoom-out image with respect to the vehicle by rotating the camera at specified x, y coordinates.

The photographing module 10 may be implemented using a camera equipped with a fisheye lens. When a fisheye lens having a wide angle of view is used, it is possible to photograph an image in an omnidirectional (360°) region around the photographing module 10.

The photographing module 10 is equipped with a CCD sensor or a CMOS sensor, preferably a CCD sensor. The CCD type image sensor and the CMOS type image sensor both have a light receiving section that receives light and converts it into an electric signal. The CCD type image sensor transfers the electric signal through the CCD and converts it to a voltage at the last stage, whereas the CMOS image sensor converts the signal to a voltage at each pixel and transfers it to the outside. That is, the CCD type image sensor moves the electrons generated by the light directly to the output section using gate pulses, while the CMOS type image sensor converts the electrons generated by the light into a voltage in each pixel and outputs it through a switch.

In the CCD type image sensor, a smear phenomenon occurs due to this signal processing method. The smear phenomenon refers to vertical streaks appearing on the screen when strong reflected light from a light source or an illumination lamp is photographed. It is often seen when using a high-speed shutter and when shooting very bright objects such as light sources. The CCD type image sensor has a structure with only one photodiode per cell; when the charge that can be stored in one cell overflows due to reflection and interference between cells, the smear phenomenon occurs.

The smear phenomenon is easily generated in the buffer area used for storage or transfer in the image sensor, depending on the exposure to light under a high-speed shutter setting. The high-speed shutter of a CCD adjusts the exposure through the exposure time set by the shutter of the camera body and by directly controlling the CCD at a shutter speed higher than the synchronization speed. If the shutter of the camera body is open when acquiring an image using the electronic shutter of the CCD, light continues to be incident on the photodiode and charge overflows the storage space; when the charge of the CCD, which is composed of a vertical array, is then read out, a smear phenomenon is generated.

The smear phenomenon thus generated can distort the photographed image, prevent a system that detects or checks vehicles from correctly grasping the vehicle shape, and obstruct recognition of the vehicle number.

Meanwhile, the number recognition module 20 of the LPR system 100 of the present invention may include a determination unit 30, an image processing unit 40, and the like, as shown in FIG.

The discrimination unit 30 classifies the original image generated by the photographing module 10 as a low-illuminance image, a high-illuminance image, or an unprocessed image based on a discrimination factor related to the original image. Here, the discrimination factor related to the original image is the variance of the light intensity of the original image converted to gray scale.

Specifically, when the variance of the light intensity of the original image is higher than a predetermined threshold value, the discrimination unit 30 classifies the original image as an unprocessed image. When the variance of the light intensity of the original image is lower than the threshold value, the original image is classified as either a low-illuminance image or a high-illuminance image according to the intensity of the light of the original image. That is, if the intensity of light of the original image is less than a predetermined first value, the original image is classified as a low-illuminance image, and if the intensity of light of the original image is higher than the first value, the original image is classified as a high-illuminance image.
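This three-way discrimination can be sketched as follows, assuming the variance of the gray values acts as the discrimination factor. The threshold values and the function name are illustrative assumptions, not values taken from the patent.

```python
# Illustrative sketch of the discrimination step; thresholds are assumed.
from statistics import pvariance, mean

VARIANCE_THRESHOLD = 1500.0   # assumed: above this, no correction is needed
LOW_HIGH_BOUNDARY = 128.0     # assumed first value separating dark/bright

def discriminate(gray_pixels):
    """Classify a gray-scale image as 'unprocessed', 'low', or 'high'.

    gray_pixels -- flat iterable of gray values (0-255)
    """
    values = list(gray_pixels)
    if pvariance(values) > VARIANCE_THRESHOLD:
        return "unprocessed"          # intensities already well spread
    # Concentrated histogram: decide by where it is concentrated.
    return "low" if mean(values) < LOW_HIGH_BOUNDARY else "high"
```

An image whose histogram is already well spread is passed through untouched, while a histogram concentrated in the dark or bright end is routed to the low-illuminance or high-illuminance processing path, respectively.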

On the other hand, the image processing unit 40 improves original images classified as low-illuminance or high-illuminance images. The image processing unit 40 may include a low-illuminance image processing unit 50 for original images classified as low-illuminance images, a high-illuminance image processing unit 60 for original images classified as high-illuminance images, and an image restoring unit 70.

The low-illuminance image processing unit 50 generates a corrected image from the original image by using the Advanced Clipped Histogram Equalization (A_CHE) method when the original image is classified as a low-illuminance image.

Specifically, according to the advanced clipped histogram equalization method, an adaptive clipping ratio for the original image is determined, a clipped histogram is generated in which the upper region of the histogram of the original image is removed according to the determined adaptive clipping ratio, and at least a part of the removed region is redistributed to the clipped histogram.

In this regard, FIGS. 3A and 3B schematically illustrate histogram clipping of an image associated with the present invention. FIG. 3A schematically shows how the upper portion of the histogram of the original image is removed according to the conventional CHE scheme, and FIG. 3B schematically shows how the upper region of the histogram of the original image is removed according to the advanced clipped histogram equalization (A_CHE) scheme.

According to the conventional CHE scheme, as shown in FIG. 3A, the upper region of the histogram is removed according to a fixed clipping ratio. The CHE scheme disclosed in Korean Patent No. 10-0756318 (Patent Document 2) uses such a fixed clipping ratio.

However, according to the advanced clipped histogram equalization (A_CHE) method of the present invention, as shown in FIG. 3B, the clipping ratio is determined adaptively according to the original image. Here, the adaptive clipping ratio is determined by Equation (1) below.

[Equation (1) and its symbol definitions appear only as images in the original (112015039572028-pat00005 through pat00008); the symbols denote the adaptive clipping ratio and gray values of the original image.]

FIG. 4 is a schematic diagram for explaining the advanced clipped histogram equalization that can be applied to the present invention. The upper region removed according to the adaptive clipping ratio is redistributed to the clipped histogram over the low-light-intensity distribution region and the high-light-intensity distribution region. As shown in FIG. 4, the clipped portion of the upper region comprises a clipping range for the low-light-intensity distribution region and a clipping range for the high-light-intensity distribution region.

Here, the clipping amount for the low-light-intensity distribution region and the clipping amount for the high-light-intensity distribution region can be expressed by Equations (2) and (3), respectively, and the clipping ranges for the low-light-intensity and high-light-intensity distribution regions can be expressed by Equations (4) and (5), respectively.

[Equations (2) through (5) and their symbol definitions appear only as images in the original (112015039572028-pat00009 through pat00023). The symbols defined there denote: a value set arbitrarily to distinguish low light intensity from high light intensity; the global level of the original image; the clipped histogram; the low-illuminance distribution region in gray scale; the high-illuminance distribution region in gray scale; and the sum of the gray scales.]
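Since Equations (1) through (5) survive only as images, the following sketch shows only the general shape of clipped histogram equalization with an adaptive clip limit. The clip-ratio formula driven by the mean gray value is an assumption standing in for Equation (1), and the uniform redistribution of the excess is a simplification of the patent's region-wise redistribution.

```python
# Sketch of clipped histogram equalization with an assumed adaptive
# clip limit. The formulas are stand-ins, not the patent's equations.

def clipped_equalize(gray_pixels, levels=256):
    values = list(gray_pixels)
    n = len(values)
    hist = [0] * levels
    for v in values:
        hist[v] += 1

    # Assumed adaptive clip ratio: darker images get clipped harder.
    mean_gray = sum(values) / n
    clip_ratio = 0.5 + 0.5 * (mean_gray / (levels - 1))
    clip_limit = max(1, int(clip_ratio * max(hist)))

    # Remove the histogram's upper region and redistribute the excess.
    excess = sum(max(0, h - clip_limit) for h in hist)
    clipped = [min(h, clip_limit) for h in hist]
    bonus = excess // levels
    clipped = [h + bonus for h in clipped]

    # Standard histogram-equalization mapping on the clipped histogram.
    total = sum(clipped)
    cdf, running = [], 0
    for h in clipped:
        running += h
        cdf.append(running)
    lut = [round((levels - 1) * c / total) for c in cdf]
    return [lut[v] for v in values]
```

Clipping the dominant bins before equalizing limits the over-amplification of noise that plain histogram equalization causes in dark, low-contrast images.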

Referring back to FIG. 2, when smear is generated in an original image classified as a high-illuminance image, the high-illuminance image processing unit 60 removes the smear generated in the original image to generate a corrected image. The high-illuminance image processing unit 60 may further include a detection unit, a removal unit, a restoration unit, and the like.

The detection unit may determine whether or not smear is generated in the input original image, and may detect the position of the smear generated column (first column) when it is determined that the smear is generated.

The detecting unit may further include an extracting unit and a converting unit. The extracting unit extracts a signal distribution curve for each column constituting the original image using the original image. The converting unit converts the signal distribution curve generated by the extracting unit into a normal distribution curve.

In this regard, FIG. 5 shows an example of a signal distribution curve according to each column of the image obtained by the photographing module, and FIG. 6 shows a normal distribution curve with the signal distribution curve of FIG. 5 as an input.

As shown in FIG. 5, the extraction unit of the detection unit may extract from the input original image a signal distribution curve of the column-unit signals. The signal distribution curve represents the sum of the gray values of the pixels constituting each column of the original image.

Further, as shown in FIG. 6, the converting unit of the detecting unit can convert the signal distribution curve related to the input original image into a normal distribution curve.
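The detection step can be sketched as follows. The patent does not spell out the exact conversion to a normal distribution curve, so standard-score (z-score) normalization of the column-sum curve is used here as a stand-in, followed by the threshold comparison that yields the binary pattern map described earlier; the threshold value is an assumption.

```python
# Sketch of smear-column detection: sum each column's gray values,
# normalize the resulting curve, then threshold it into a binary
# pattern map. The threshold is an assumed value.

def detect_smear_columns(image, threshold=2.0):
    """image: 2-D list of gray values (rows x cols).
    Returns a 0/1 map; 1 marks columns suspected of containing smear."""
    cols = len(image[0])
    col_sums = [sum(row[c] for row in image) for c in range(cols)]

    # Standard-score normalization of the signal distribution curve.
    m = sum(col_sums) / cols
    var = sum((s - m) ** 2 for s in col_sums) / cols
    std = var ** 0.5 or 1.0
    normalized = [(s - m) / std for s in col_sums]

    # Binary pattern map: 1 where the curve exceeds the threshold.
    return [1 if z > threshold else 0 for z in normalized]
```

A smear column, being a bright vertical streak, contributes an outlying column sum, so it stands out sharply once the curve is normalized.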

Referring back to FIG. 2, the elimination unit may remove the smear generated in the original image based on the position information of the first column detected by the detection unit.

The restoration unit may restore the first column of the original image, from which the smear has been removed, using an interpolation method based on patch priority. Specifically, the restoration unit calculates a priority for each of the pixels in a patch, determines the highest-priority pixel among the calculated priorities, and performs the restoration by comparing the similarity between the highest-priority pixel and pixels that do not belong to the first column.
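The patent restores the removed columns with a patch-priority (exemplar-based) inpainting scheme; as a much simpler stand-in that illustrates the goal of the step, this sketch linearly interpolates each smear pixel from the nearest clean columns to its left and right.

```python
# Simplified stand-in for the patch-priority restoration: fill each
# smear column by linear interpolation from the nearest clean columns.

def restore_columns(image, smear_map):
    """image: 2-D list of gray values; smear_map: 0/1 per column."""
    cols = len(image[0])
    clean = [c for c in range(cols) if smear_map[c] == 0]
    out = [row[:] for row in image]
    for c in range(cols):
        if smear_map[c] == 0:
            continue
        left = max((k for k in clean if k < c), default=None)
        right = min((k for k in clean if k > c), default=None)
        for r, row in enumerate(image):
            if left is None:
                out[r][c] = row[right]      # no clean column to the left
            elif right is None:
                out[r][c] = row[left]       # no clean column to the right
            else:
                t = (c - left) / (right - left)
                out[r][c] = round((1 - t) * row[left] + t * row[right])
    return out
```

The exemplar-based method of the patent would instead copy whole best-matching patches, which preserves texture and edges far better than this per-pixel interpolation.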

On the other hand, the image restoring unit 70 generates a restored image by applying a focus-degradation improvement method to the target image. The image restoring unit 70 may include an image estimating unit, an image generating unit, an image restoring unit, and the like.

Here, the target image is either the original image or the corrected image. If the original image is classified into a low-illuminance image or a high-illuminance image by the discrimination unit 30, the corrected image becomes the target image; if the original image is classified as an unprocessed image, the original image itself becomes the target image.

The image estimating means is used when the high-resolution image generation method is used as the focus deterioration improvement method.

The image estimating unit may generate a super-resolution image by up-scaling a low-resolution deteriorated image according to an up-scale coefficient.

When focus deterioration occurs in the target image due to shaking or error, the image estimating means can predict the focused image from the target image.

In an image with focus deterioration, the edge portion of the subject is blurred, and various algorithms can be used to predict the actual edge information. Such algorithms are widely known to those skilled in the art, and a detailed description thereof is omitted.

The image estimating means obtains a super-resolution image (SR) from the low-resolution image with focus deterioration using the above algorithm, and estimates the focused image using the obtained super-resolution image (SR).

The image generating means is also used when the high-resolution image generation method is used as the focus deterioration improvement method.

When the image estimating means generates a super-resolution image from the input low-resolution image according to the up-scale coefficient, the image generating means removes at least part of the focus deterioration using the super-resolution image and calculates a high-resolution image by interpolation. Here, the interpolation preferably improves the focus-deteriorated image using bicubic interpolation.
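For concreteness, the cubic-convolution kernel commonly used for bicubic interpolation (the Keys kernel with a = -0.5) can be written as follows; this is a generic sketch of the standard technique, not code from the patent:

```python
def cubic_kernel(x, a=-0.5):
    """Keys cubic-convolution kernel used for bicubic interpolation."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x ** 3 - (a + 3) * x ** 2 + 1
    if x < 2:
        return a * x ** 3 - 5 * a * x ** 2 + 8 * a * x - 4 * a
    return 0.0

def interpolate_1d(samples, t):
    """Value at fractional offset t (0 <= t < 1) between samples[1] and
    samples[2], from the four surrounding samples; 2-D bicubic applies
    this separably along rows and then columns."""
    return sum(samples[i] * cubic_kernel(t + 1 - i) for i in range(4))
```

Because the four kernel weights sum to one, flat regions pass through unchanged while edges are reconstructed more smoothly than with bilinear interpolation.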

The relationship between the image with focus deterioration and the high-resolution image can be expressed by Equation (6) below.

Figure 112015039572028-pat00024

here,

Figure 112015039572028-pat00025
is the generated high-resolution image,
Figure 112015039572028-pat00026
is the input low-resolution image with focus deterioration, and
Figure 112015039572028-pat00027
is the super-resolution image interpolated from the low-resolution image according to the up-scale coefficient.

The image generating means may generate a high-resolution, focused image from the target image having focus deterioration according to Equation (6); the details of the generation are apparent to a person of ordinary skill in the art and are omitted.

The image restoration means is used when a deblurring method is used as the focus deterioration improvement method.

The image restoration means receives the high-resolution image obtained by focusing the image with focus deterioration and applies a deblurring method to improve its sharpness.

When the image generating means receiving the target image generates the high-resolution image, the image restoring means can remove a part of the focus deterioration from the target image by using the generated high-resolution image.

The removal of such focus deterioration can be performed multiple times. That is, the image generating means calculates the high-resolution image using the super-resolution image estimated by the image estimating means, and the image restoration means improves the sharpness by the deblurring method.

The image restored by the image restoration means is input to the image estimating means again, the super-resolution image is estimated again, and the high-resolution image is calculated again by the image generating means. This process can be repeated according to a set parameter value.

In addition, a direction-adaptive guided filter can be used as an example of a deblurring method applicable to the image restoration means. The guided filter is a local linear filter that, like a bilateral filter, smooths while preserving edge components. This property keeps the edges of the image from being blurred and maintains the base layer.

Although the image generating means improves the image quality, local smoothing and artifact defects may occur in the edge region, that is, near the subject or its characteristic information. To further improve this and obtain precise results, a clear, high-quality image can be obtained by using the direction-adaptive guided filter.

As a deblurring method applicable to the present invention, the guided filter performs the operation shown in Equation (7) below.

Figure 112015039572028-pat00028

In Equation (7),

Figure 112015039572028-pat00029
and
Figure 112015039572028-pat00030
represent pixel positions,
Figure 112015039572028-pat00031
represents the filter kernel,
Figure 112015039572028-pat00032
represents the input image, and
Figure 112015039572028-pat00033
represents the linear guidance image.

The filter kernel of Equation (7) can be expressed as Equation (8) below.

Figure 112015039572028-pat00034

In Equation (8),

Figure 112015039572028-pat00035
is the linearly transformed image,
Figure 112015039572028-pat00036
represents the filter kernel,
Figure 112015039572028-pat00037
is the kernel window,
Figure 112015039572028-pat00038
is a normalization parameter,
Figure 112015039572028-pat00039
is the average of
Figure 112015039572028-pat00040
over the window
Figure 112015039572028-pat00041
in the transformed image, and
Figure 112015039572028-pat00042
is the pixel location at the kernel center of
Figure 112015039572028-pat00043
.
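As a hedged illustration of the plain guided filter underlying Equations (7) and (8) — the direction-adaptive weighting is beyond this sketch — a self-guided NumPy version under the usual local-linear-model formulation:

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window with edge replication (slow, clear)."""
    p = np.pad(a, r, mode='edge')
    k = 2 * r + 1
    out = np.empty_like(a, dtype=float)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def guided_filter(I, p, r=1, eps=1e-3):
    """q = a*I + b, with a and b from a local linear fit of p against the
    guidance image I; eps trades edge preservation against smoothing."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)

flat = np.full((6, 6), 0.5)
out = guided_filter(flat, flat)  # a constant region passes through unchanged
```

In flat regions the local variance is near zero, so a ≈ 0 and the output falls back to the local mean; near strong edges a ≈ 1, which is what preserves edges while smoothing the base layer.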

Referring again to FIG. 2, the vehicle recognition module 80 of the LPR system 100 of the present invention may include an area detection unit 82 and a vehicle type determination unit 84.

The area detecting unit 82 detects a vehicle area, which is a pixel where the vehicle is photographed, from among the original images photographed by the photographing module 10.

Further, the area detecting unit 82 extracts the license plate area and the headlight area of the vehicle within the detected vehicle area, and estimates the full width and full length of the vehicle using the license plate area and the headlight area.

That is, the license plate area and its center of gravity can be detected using the histogram projection method, and the headlight area and its center of gravity can be detected using the blob method.

The histogram projection method sums, vertically and horizontally, the gray values of pixels whose coordinates share the same component. Since the characters of the plate lie at the center of the summed data, the plate region and its center of gravity can be detected by projecting the coordinates of that region onto the original image.
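A toy sketch of such a projection over a binary plate-candidate mask; the mask contents are invented for the example and are not the patent's actual data:

```python
def projections(mask):
    """Row (horizontal) and column (vertical) sums of a binary mask."""
    rows, cols = len(mask), len(mask[0])
    horiz = [sum(mask[r]) for r in range(rows)]
    vert = [sum(mask[r][c] for r in range(rows)) for c in range(cols)]
    return horiz, vert

def center_of_gravity(mask):
    """Centroid (row, col) of the set pixels."""
    total = rsum = csum = 0
    for r, row in enumerate(mask):
        for c, v in enumerate(row):
            total += v
            rsum += r * v
            csum += c * v
    return rsum / total, csum / total

# Plate-like blob occupying rows 1-2, columns 1-3.
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
horiz, vert = projections(mask)
cog = center_of_gravity(mask)
```

The dense band in the row and column projections localizes the plate, and the centroid gives its center of gravity.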

The blob method applied to the detection of the headlight area and its center of gravity is widely used for object segmentation in image processing. In addition to the blob method, differential and regional-extremum methods can additionally be applied to find regions that are brighter or darker than their surroundings. In this way, the headlight region and its center of gravity, a region where locally identical components gather and which is a characteristic feature of the vehicle, can be easily detected.

The vehicle type determination unit 84 determines whether the vehicle photographed on the original image belongs to the light vehicle or the two-wheeled vehicle based on the determination factor.

Judging the vehicle type based on the size of the vehicle photographed in the original image is considered the most accurate approach. Accordingly, the determination factor of the vehicle type determination unit 84 is basically the ratio of the vehicle area to the original image.

For example, the vehicle may be recognized as a two-wheeled vehicle when the ratio of the vehicle area is within the range of 30 to 50 percent, as a light vehicle when the ratio is within the range of 50 to 80 percent, and as a general vehicle when the ratio is larger.

The determination factor may further include the full width and full length of the vehicle measured by the area detecting unit. If the vehicle type determination unit 84 determines that the light vehicle and the two-wheeled vehicle cannot be distinguished by the vehicle area ratio alone, the full width and full length of the vehicle can be considered.
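A minimal sketch of the ratio-based decision; the band edges follow the example in the text, and the assignment of the 50-80 percent band to the light vehicle is an assumption here (the source wording is ambiguous), so treat the thresholds as configurable:

```python
def classify_vehicle(area_ratio):
    """Classify by the ratio of vehicle-area pixels to original-image pixels.
    Thresholds follow the text's example; the 50-80 % band is assumed to
    denote the light vehicle and larger ratios the general vehicle."""
    if 0.30 <= area_ratio < 0.50:
        return "two-wheeled vehicle"
    if 0.50 <= area_ratio < 0.80:
        return "light vehicle"
    if area_ratio >= 0.80:
        return "general vehicle"
    return "undetermined"
```

When the ratio falls near a band edge, the full width and full length estimated by the area detecting unit would serve as the tie-breaking factors described above.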

In this connection, FIGS. 7A to 7C show examples of vehicle type determination results according to the present invention. FIG. 7A shows detection of the vehicle area A for an ordinary vehicle, FIG. 7B for a light vehicle, and FIG. 7C for a two-wheeled vehicle. The area detecting unit 82 cuts the upper and lower parts of the original image for more accurate determination and detects the vehicle area from the cropped original image.

The vehicle processing unit 90 determines the charge imposed on the vehicle according to the type determined by the vehicle type determination unit 84. For example, when the vehicle is determined to be a light vehicle, the vehicle processing unit 90 may charge a fee with the light vehicle reduction applied. When the vehicle is determined to be a two-wheeled vehicle, the vehicle processing unit 90 can keep the breaker from operating even when the vehicle triggers the loop coil, thereby preventing a breaker accident.

<How the LPR system works>

Hereinafter, an operation method of the LPR system according to the present invention will be described in detail with reference to the drawings.

FIG. 8 is a flowchart of an adaptive probability-based low-illuminance image enhancement and smear reconstruction processing method related to an example of the present invention.

Referring to FIG. 8, the photographing module 10 equipped with a CCD sensor photographs an original image including the license plate of the vehicle, and the original image photographed by the photographing module 10 is input to the number recognition module 20 (S10). The vehicle is photographed in the original image, and in general the license plate of the vehicle is located in the lower part of the original image.

Next, the determination unit 30 determines whether there is a change in intensity of light on the original image (S20). In step S20, the original image is converted into a gray scale image, and a characteristic change is observed with respect to an ROI, which is a part of the entire original image.

Then, the determination unit 30 determines whether the light intensity of the original image is a low-illuminance component (S30). Based on the determination of step S30, the present invention can correct two main kinds of distortion information: first, an image enhancement method that expands the dynamic range of the low-illuminance and high-illuminance regions; second, a method for detecting and restoring the smear caused by excess light charge in the high-illuminance region.

Subsequently, the image processing unit 40 generates a corrected image using the original image. Specifically, when the original image is classified as a low-illuminance image, the low-illuminance image processing unit 50 generates a corrected image using the improved Clipped Histogram Equalization (S40); when the original image is classified as a high-illuminance image, the high-illuminance image processing unit 60 removes the smear generated in the original image to generate a corrected image (S42).

In step S40, an improved histogram equalization method, one of the histogram equalization methods, is used to improve the image quality of the original image.

In general, histogram equalization improves an image whose brightness distribution is shifted to one side or non-uniform by redistributing the brightness values uniformly; its ultimate goal is a histogram with a uniform distribution. However, since the brightness values can change significantly depending on the input image and unwanted noise can be amplified, the present method increases the contrast while maintaining the average brightness value.

Since histogram processing is a simple way to address degraded image quality, various methods exist; typical examples are Bi-Histogram Equalization, Recursive Mean-Separate Histogram Equalization, and Clipped Histogram Equalization.

Among them, the Clipped Histogram Equalization (CHE) method is the most effective: it maintains the amount of information in the image without introducing image distortion. The method limits the maximum value of the histogram by setting an arbitrary ceiling, cutting off the portion of the histogram exceeding it, and redistributing that portion over the entire range. The threshold should be set to have the minimum range after the histogram conversion, and a dynamic threshold that follows changes in image features can be set by assigning the initial threshold according to the image. Because the clipped upper part is redistributed over the whole range, the method is robust to noise, but for general images the contrast improvement is inefficient compared with other methods.

Therefore, in the present invention, instead of redistributing the clipped upper part of the histogram over the entire range, the histogram is divided into several sections and the biased distribution is spread evenly over the neighboring sections in proportion to distance. We propose this improved version of CHE, the A_CHE method, as a way to improve the image contrast.

As a result, the low-illuminance region is improved in terms of dynamic range, and furthermore the high-illuminance region can be processed into a robustly improved image.
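A plain CHE sketch (uniform redistribution of the clipped excess) for reference; the A_CHE variant described above instead spreads the excess over neighboring sections by distance ratio, which is omitted here, and the clip parameter is illustrative:

```python
import numpy as np

def clipped_histogram_equalization(img, clip_ratio=0.02):
    """Clip histogram bins at a ceiling, redistribute the excess uniformly,
    then equalize with the clipped histogram (8-bit grayscale input)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    ceiling = clip_ratio * img.size            # per-bin cap
    excess = np.maximum(hist - ceiling, 0.0).sum()
    hist = np.minimum(hist, ceiling) + excess / 256.0
    cdf = np.cumsum(hist) / hist.sum()
    return np.round(255.0 * cdf).astype(np.uint8)[img]

# A dim ramp (values 0..63) is stretched toward the full dynamic range.
img = np.tile(np.arange(64, dtype=np.uint8), (4, 1))
out = clipped_histogram_equalization(img)
```

Clipping bounds the slope of the cumulative distribution, which is what limits noise amplification relative to plain histogram equalization.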

Meanwhile, in step S42, the smear is detected and removed from the original image using image processing.

After the original image is received from the photographing module 10, the input original image is statistically analyzed. As shown in FIG. 5, the extracting unit of the detecting unit may extract a signal distribution curve from the input original image on a column-by-column basis. The signal distribution curve represents, for each column of the original image, the sum of the gray values of the pixels constituting that column.

Also, as shown in FIG. 6, the converting unit of the detecting unit can convert the signal distribution curve of the input original image into a normal distribution curve. That is, when smear is generated at a specific location by sunlight or by passive light reflected from the vehicle, it can generally be expressed as a normal distribution.

After the original image is expressed as a normal distribution, the presence or absence of smear is determined. Owing to its characteristics, smear arises along the columns of the image, occurring particularly in column regions with white and bright shapes.

Thus, the sum of the gray values is examined along the distribution in the smear section and the other sections of the signal distribution curve, together with the maximum estimate of the column sums. When a portion of the normal distribution curve has a markedly higher frequency than the other portions, it can be determined that smear has been generated in the original image.

After the presence of smear in the original image is determined, the position of the smear is determined. The portion of the signal distribution curve of the original image having a markedly higher frequency than the other portions can be judged to be the region in which the smear occurs.

Once the presence of the smear region and its position are determined, the smear is removed and a binary pattern map (alpha map) for restoration is generated.

The smear intensity and the exact background intensity are estimated to remove the smear. A binary pattern map is generated by applying an average filter to each column of the original image.

In this case, a column of the binary pattern map has a value of 1 when its signal intensity on the normal distribution is larger than a predetermined threshold value, and a value of 0 when it is smaller.

After the binary pattern map is generated, the smear position is rearranged using the binary pattern map. When each pixel column is analyzed, it consists of vehicle, noise, background, and smear components. The intensity of the smear signal is estimated by reconstructing the smear-region search size so that the gray values of the pixels in the column are sorted with the applied filter, and the accurate position is determined and rearranged.

After rearranging the smear position, the smear is removed. Using the determined area and intensity of the smear, smear can be removed from the entire image.

After the smear is removed, the original image is restored. Various restoration methods exist, but the present invention applies inpainting. In particular, although the original image could be restored with an interpolation method, interpolation is not suitable for a large region, so the original image is restored using patches of a certain size taken from the periphery.

Then, the image restoring unit generates a restored image by applying to the target image a focus deterioration improvement method that combines a high-resolution image generation method and a deblurring method (S50).

Specifically, in step S50, the target image with focus deterioration is up-scaled according to an up-scale coefficient to generate a super-resolution image, and a high-resolution image is calculated by applying bicubic interpolation to the generated super-resolution image. The process may be repeated according to the value of the predetermined coefficients, preferably until the focus deterioration is no longer improved.

After the above process is performed repeatedly, a high-resolution image in which part of the focus deterioration has been removed can be obtained. To restore the high-resolution image thus produced to a clear image with good visual quality, a deblurring method is applied.

However, step S50 is not essential; the process may proceed to step S60 with step S50 omitted.

Then, the number recognition module 20 recognizes the characters of the number plate of the vehicle using the target image (S60).

For character recognition, one of three license plate localization methods can be executed first. The first detects the feature region of the license plate using the vertical and horizontal edge information of the photographed image. The second detects the position of the license plate by scan data analysis. The third detects the exact license plate by directly searching for numbers and letters.

When the position of the license plate is detected, the recognition algorithm recognizes the characters by template matching of the numbers and letters (Hangul consonants and vowels). The recognized characters (Hangul consonant + number) are classified in detail and re-confirmed, thereby minimizing errors in decoding the characters.

Hereinafter, the smear processing of the high-illuminance image processing unit 60 will be described in more detail.

Smear is inherent to CCD cameras: depending on the positions of the camera and the object (vehicle), strong light from a source at the same position or from a reflector produces a distorted image.

In the proposed method, the smear column is searched in the still image in order to remove the smear. To reduce the amount of calculation, only the lowermost region of the region of interest (ROI) is analyzed, and smear is determined to exist when the distribution of the corresponding component has a certain value or more.

In practice, it is difficult to find the exact location, because the photons arriving from sunlight are distributed irregularly over the object, background, noise, and smear regions, and more than one such region may exist.

Therefore, to analyze the characteristics of the smear, the smear region is assumed to lie in the white and bright column regions produced by the supersaturated object, and the gray characteristics are analyzed and detected. The sum of the gray values and the maximum peak value of the distribution curve of those sums are found along the direction of the distribution, not only in the smear section but also in the other sections. The other areas of the curve show components relatively similar to the background and vehicle components, or are even smoothed.

First, the smear brightness intensity curve model can be expressed as Equation (9) below.

Figure 112015039572028-pat00044

here,

Figure 112015039572028-pat00045
is the size of the original image,
Figure 112015039572028-pat00046
and
Figure 112015039572028-pat00047
denote rows and columns, respectively,
Figure 112015039572028-pat00048
is the sum of the gray values in the
Figure 112015039572028-pat00049
-th column, and
Figure 112015039572028-pat00050
represents the gray value of the pixel at
Figure 112015039572028-pat00051
.

On the other hand, the threshold value
Figure 112015039572028-pat00052
set for the smear region is expressed by Equation (10) below.

Figure 112015039572028-pat00053

here,

Figure 112015039572028-pat00054
is the mean according to the statistical analysis of the whole image,
Figure 112015039572028-pat00055
is the standard deviation, and
Figure 112015039572028-pat00056
is a weight.

Regarding the relationship between Equations (9) and (10), since the smear phenomenon occurs at the highest estimate of the gray-sum curve as described above, the smear region
Figure 112015039572028-pat00057
can be expressed by Equation (11) below.

Figure 112015039572028-pat00058

That is, when the smear brightness intensity is higher than the threshold value, the smear region is determined.
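Equations (10) to (12) can be sketched together as follows; the weight and the 2-pixel margin follow the text's example, while the array contents are invented for illustration:

```python
import numpy as np

def detect_smear_region(img, weight=2.0, margin=2):
    """Threshold the column gray-value sums at mean + weight*std (Eq. 10);
    columns above the threshold are smear columns (Eq. 11); the detected
    range is widened by `margin` pixels on each side (Eq. 12)."""
    col_sum = img.sum(axis=0).astype(float)
    threshold = col_sum.mean() + weight * col_sum.std()
    cols = np.flatnonzero(col_sum > threshold)
    if cols.size == 0:
        return None                         # no smear detected
    start = max(int(cols.min()) - margin, 0)
    end = min(int(cols.max()) + margin, img.shape[1] - 1)
    return start, end

img = np.full((10, 20), 10, dtype=np.uint8)
img[:, 8:10] = 250                          # a bright vertical smear band
region = detect_smear_region(img)
```

Widening the range ensures the subsequent binary pattern map covers the smear's soft boundary, at the cost of restoring a few extra background pixels.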

After the smear position coordinates are found, the region
Figure 112015039572028-pat00059
associated with the start coordinates
Figure 112015039572028-pat00060
and end coordinates
Figure 112015039572028-pat00061
can be obtained.

However, the smear occurrence region can be taken wider than the start and end coordinates thus obtained. For example, as shown in Equation (12) below, the region extending 2 pixels before the start coordinate and 2 pixels after the end coordinate can be determined as the smear occurrence region.

Figure 112015039572028-pat00062

here,

Figure 112015039572028-pat00063
and
Figure 112015039572028-pat00064
denote the resulting boundaries of the smear region.

In addition, smear can be detected and removed by estimating the intensity of the smear in the image in which the smear is generated and the background intensity in the state in which the smear is removed.

A mean filter is applied to the signal strength of each column, and the applied filter is used to reconstruct the smear-region search size so that the gray values of the pixels in the column are sorted.

Figure 112015039572028-pat00065

Equation (13) is an expression for an average filter applied to the signal strength of each column.

here,

Figure 112015039572028-pat00066
is the intermediate position of the image data corresponding to the coordinates within the search size,
Figure 112015039572028-pat00067
is the radius of the selected local area, and
Figure 112015039572028-pat00068
is the sorted vector within the selected search size.

In the case of Equation (13) above, when each pixel column is analyzed, the gray values of the pixels are sorted, including the vehicle, noise, background, and smear signals.
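A plain moving-average stand-in for the per-column filter of Equation (13); the radius and sample values are illustrative assumptions:

```python
def column_mean_filter(col, radius=1):
    """Mean filter over one pixel column, with shrinking windows at the
    edges, used to smooth the per-column signal before the gray values
    are sorted and analyzed."""
    n = len(col)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(col[lo:hi]) / (hi - lo))
    return out
```

Smoothing suppresses isolated noise spikes so that the sorted gray values separate the smear and background components more reliably.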

According to the above method, the intensity of the smear component
Figure 112015039572028-pat00069
and of the background component
Figure 112015039572028-pat00070
can be accurately obtained.

It is possible to estimate the smear intensity, determine the difference from the pure background intensity, and determine the position and area of the smear.

After determining the intensity and area of the smear, the smear is removed from the entire image.

Figure 112015039572028-pat00071

here,

Figure 112015039572028-pat00072
is the image obtained by removing the smear from
Figure 112015039572028-pat00073
; in other words, it satisfies
Figure 112015039572028-pat00074
.

Through such a method, it is possible to detect smear in the image including the smear, to grasp the position of the smear, and to remove the smear from the image.

The extracted smear result and its region (position) are detected, and the undistorted image is recovered from the generated binary pattern map (alpha map). To handle this, inpainting, a technique for recovering lost portions of an image, is applied from among the various restoration methods.

In general, the simplest approach is an interpolation method that uses the surrounding values. It is applicable when the missing area is small, but for a large area the error propagates, so plain interpolation is unsuitable. Therefore, the procedure is performed until all clipped regions on each layer are interpolated. The interpolation priority is calculated for the pixels on the boundary of the area to be interpolated, and texture and structure interpolation are performed in priority order. The priority combines the confidence of the center point of the patch centered at the boundary pixel and a value associated with the structure within the patch.

Figure 112015039572028-pat00075

In Equation (15),

Figure 112015039572028-pat00076
is the priority of the boundary pixel
Figure 112015039572028-pat00077
,
Figure 112015039572028-pat00078
is the confidence value at the pixel in the patch,
Figure 112015039572028-pat00079
represents the area to be interpolated,
Figure 112015039572028-pat00080
means a patch,
Figure 112015039572028-pat00081
is the size of the patch,
Figure 112015039572028-pat00082
is the direction unit vector of the structure in the image,
Figure 112015039572028-pat00083
is the data term,
Figure 112015039572028-pat00084
is the unit vector of the normal to the contour in the image, and
Figure 112015039572028-pat00085
is the normalization constant; the confidence of a pixel is set to 1 for pixels included in the source region S, and to 0 otherwise.

Figure 112015039572028-pat00086
simply assigns a higher priority to pixels whose direction coincides with the direction of the structure in the image, so that the structure is restored more correctly and more accurate interpolation is possible.

When the highest-priority pixel is determined, template matching compares the gray intensity of its patch with patches within a certain range to derive the area with the maximum similarity, and that area is blended with the patch area on the target pixel.

In the present invention, the sum of absolute differences (SAD) in the gray-scale space is used as the template matching method. Finally, in process C, the highest priority among the pixels on the boundary is recalculated, and the process from A to C is repeated until all target pixels are interpolated.
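The SAD template matching named above can be sketched like this; the candidate positions and image contents are invented for the example:

```python
def sad(patch_a, patch_b):
    """Sum of absolute differences between two equal-sized gray patches."""
    return sum(abs(a - b)
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def best_match(image, template, candidates):
    """Candidate top-left position whose window minimizes the SAD cost."""
    h, w = len(template), len(template[0])

    def cost(pos):
        r, c = pos
        window = [row[c:c + w] for row in image[r:r + h]]
        return sad(window, template)

    return min(candidates, key=cost)

image = [[0, 0, 0, 0, 0, 0],
         [0, 0, 9, 9, 0, 0],
         [0, 0, 9, 9, 0, 0],
         [0, 0, 0, 0, 0, 0]]
template = [[9, 9],
            [9, 9]]
match = best_match(image, template, [(0, 0), (1, 2), (2, 0)])
```

The minimum-SAD window is the source patch whose pixels are blended into the target region during restoration.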

FIG. 9 is a flowchart of a method for determining the type of vehicle in the LPR system related to an example of the present invention.

Referring to FIG. 9, the original image photographed by the photographing module 10 is input to the vehicle recognition module 80 (S100), and the area detecting unit 82 detects the vehicle area of the original image (S110).

Subsequently, the vehicle type determination unit 84 determines whether the vehicle photographed in the original image is a light vehicle or a two-wheeled vehicle on the basis of a predetermined determination factor (S120). The determination factor is basically the ratio of the vehicle area to the original image, and may further include the full width and full length of the vehicle.

Subsequently, the vehicle processing unit 90 determines the charge imposed on the vehicle (S130). When the vehicle type determination unit 84 determines that the vehicle is a light vehicle, the vehicle processing unit 90 may charge a fee with the light vehicle reduction applied; when it determines that the vehicle is a two-wheeled vehicle, it can control the breaker not to operate even if the loop coil is triggered.

Hereinafter, results of actual application of the present invention will be described with reference to the drawings. FIGS. 10A to 10C show an example of a result of processing a low-illuminance image according to the present invention, and FIGS. 11A to 11C show an example of a result of a smear reconstruction process performed on a high-illuminance image according to the present invention.

In this case, vehicle images input from a 1.3M camera (PointGray) were used in the LPR system, converted to grayscale, and processed. The experimental environment was Windows 7, a 2.8 GHz CPU, and 4 GB of memory, using the Visual Studio 2010 compiler.

FIGS. 10A to 10C are experimental results comparing the existing HE method with the proposed A_CHE method. FIG. 10A is the actual input image, FIG. 10B is the result image of the conventional HE method, and FIG. 10C is the result image of the proposed A_CHE method.

Referring to FIGS. 10A to 10C, although the HE result improves the actual input image, the processing of the contrast and low-illuminance areas is still distorted. This is because, when the distribution is reconstructed into uniform regions within the still image, both the low-illuminance and high-illuminance regions are processed, and amplified and extended values occur in them. In the A_CHE method, which improves on the conventional HE method, the low-illuminance and high-illuminance regions are reconstructed with a stabilized dynamic range, and the distortion information is remarkably reduced even in the still image. Since the target of the present invention is recognition of the car number in the LPR system, the test case is a low-illuminance environment in which the license plate number is not visible in the input image to the naked eye; in the result of the proposed A_CHE processing, the numbers were clear enough to be distinguished.

11A to 11C are experimental results for analyzing the smear process. 11A is an actual input image, FIG. 11B is a smear detection result, and FIG. 11C is a smear restoring result.

FIGS. 11A to 11C show the distribution of the strong region caused by smear arising in the light-intense region of a light source from the vehicle or its reflector, followed by detection of the smear region. The detected area is set wider than the smear distribution, and the smear area is restored using the inpainting technique. As a result, it was confirmed that the smear was remarkably reduced even though the smear region had been a strong, widely distributed region.

The present invention can also be embodied as computer-readable code on a computer-readable recording medium. A computer-readable recording medium includes all kinds of recording apparatuses in which data readable by a computer system is stored. Examples include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices; it may also be implemented in the form of a carrier wave (for example, transmission via the Internet).

In addition, the computer-readable recording medium may be distributed over network-connected computer systems so that the computer-readable code is stored and executed in a distributed manner. Functional programs, code, and code segments for implementing the present invention can be easily inferred by programmers in the technical field to which the present invention belongs.

In addition, the configurations and methods of the embodiments described above are not limitedly applied to the apparatus and method described herein; rather, all or some of the embodiments may be selectively combined so that various modifications can be made.

Claims (15)

A photographing module for photographing an original image including a license plate of the vehicle;
A vehicle recognition module that receives the original image photographed by the photographing module to determine whether the vehicle photographed on the original image belongs to a vehicle classified into a predetermined category; And
And a number recognition module for receiving the original image photographed by the photographing module to recognize a character of a license plate of the vehicle,
The vehicle classified into the predetermined category is a light vehicle or a two-wheel vehicle,
The number recognition module includes:
A discrimination unit for classifying the original image into one of a low-illuminance image, a high-illuminance image, and an unprocessed image based on a discriminant related to the original image; And
And an image processor for generating a corrected image using the original image,
Wherein the image processing unit comprises:
A low-illuminance image processing unit for generating the corrected image from the original image by using an advanced clipped histogram equalization method when the discrimination unit classifies the original image into the low-illuminance image; And
And a high-illuminance image processing unit for removing smear generated in the original image to generate the corrected image when the discrimination unit classifies the original image into the high-illuminance image,
The number recognition module uses the target image for character recognition of the license plate of the vehicle,
When the discrimination unit classifies the original image into the low-illuminance image or the high-illuminance image, the target image is the corrected image, and when the discrimination unit classifies the original image as the unprocessed image, the target image is the original image,
The advanced clipped histogram equalization method determines an adaptive clipping ratio for the original image, generates a clipped histogram in which the upper region of the histogram of the original image is removed according to the determined adaptive clipping ratio, and reassigns at least a part of the removed region to the clipped histogram to generate the corrected image,
Wherein the adaptive clipping ratio is determined by the following equation:
Equation
Figure 112015073263270-pat00109

In the above equation,
Figure 112015073263270-pat00110
is the adaptive clipping ratio,
Figure 112015073263270-pat00111
is the gray value of the original image, and
Figure 112015073263270-pat00112
.
2. The LPR system according to claim 1,
The vehicle recognition module includes:
An area detecting unit for detecting a vehicle area of the original image, the vehicle area being the area of pixels belonging to the vehicle; And
A vehicle type determination unit that determines whether the vehicle belongs to the light vehicle or the two-wheel vehicle based on a predetermined determination factor,
Wherein the determination factor includes the percentage of the original image occupied by the vehicle area.
3. The LPR system of claim 2,
Wherein the area detecting unit:
Extracts a license plate area of the vehicle and a headlight area of the vehicle from the detected vehicle area,
And measures the full width of the vehicle and the length of the vehicle using the extracted license plate area and headlight area of the vehicle.
4. The LPR system of claim 3,
Wherein the determination factor further includes:
The full width of the vehicle and the length of the vehicle measured by the area detecting unit.
5. The LPR system of claim 4,
Wherein the vehicle type determination unit:
Determines that the vehicle belongs to the two-wheel vehicle if the ratio of the vehicle area occupied in the original image is within a first range,
Determines that the vehicle belongs to the light vehicle if the ratio of the vehicle area occupied in the original image is within a second range,
Wherein the upper limit of the first range is less than the upper limit of the second range and the lower limit of the first range is less than the lower limit of the second range.
6. The LPR system of claim 5,
Wherein the upper limit of the first range is greater than the lower limit of the second range, and
Wherein, when the ratio of the vehicle area falls within the first range and the second range at the same time, the vehicle type determination unit determines whether the vehicle belongs to the light vehicle or the two-wheel vehicle based on the full width of the vehicle and the length of the vehicle.
delete

8. The LPR system according to claim 1,
Wherein the discriminant is the intensity of light of the original image converted into gray scale,
Wherein when the intensity of light of the original image is higher than a predetermined threshold value, the discriminating unit classifies the original image into the unprocessed image.
9. The LPR system of claim 8,
If the intensity of the light of the original image is lower than the threshold value,
Wherein the determination unit classifies the original image into one of the low-illuminance image and the high-illuminance image according to the intensity of light of the original image.
delete

delete

12. The LPR system according to claim 1,
Wherein the high-illuminance image processing unit comprises:
A detector for detecting the position of a first column, among the columns constituting the original image, in which the smear is generated; And
A removal unit for removing the smear from the original image based on the detected position information of the first column.
13. The LPR system of claim 12,
Wherein the detector comprises:
An extraction unit for extracting signal distribution curves for the respective columns constituting the original image using the original image input to the number recognition module; And
And a conversion unit for converting the signal distribution curve into a normal distribution curve,
Wherein the signal distribution curve represents the sum of the gray values of the pixels constituting each column of the original image.
delete

15. The LPR system of claim 12,
Further comprising a reconstruction unit for reconstructing, using a predetermined interpolation method, the original image of the first column from which the smear has been removed.
KR1020150057097A 2015-04-23 2015-04-23 Lpr system for recognition of compact car and two wheel vehicle KR101563543B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150057097A KR101563543B1 (en) 2015-04-23 2015-04-23 Lpr system for recognition of compact car and two wheel vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150057097A KR101563543B1 (en) 2015-04-23 2015-04-23 Lpr system for recognition of compact car and two wheel vehicle

Publications (1)

Publication Number Publication Date
KR101563543B1 true KR101563543B1 (en) 2015-10-29

Family

ID=54430636

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150057097A KR101563543B1 (en) 2015-04-23 2015-04-23 Lpr system for recognition of compact car and two wheel vehicle

Country Status (1)

Country Link
KR (1) KR101563543B1 (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180094812A (en) * 2017-02-16 2018-08-24 (주)지앤티솔루션 Method and Apparatus for Detecting Boarding Number
KR101973933B1 (en) 2017-02-16 2019-09-02 (주)지앤티솔루션 Method and Apparatus for Detecting Boarding Number
KR101965294B1 (en) * 2018-11-19 2019-04-03 아마노코리아 주식회사 Method, apparatus and system for detecting light car

Similar Documents

Publication Publication Date Title
KR101553589B1 (en) Appratus and method for improvement of low level image and restoration of smear based on adaptive probability in license plate recognition system
You et al. Adherent raindrop detection and removal in video
CN110473185B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2017028587A1 (en) Vehicle monitoring method and apparatus, processor, and image acquisition device
US9104914B1 (en) Object detection with false positive filtering
US8472744B2 (en) Device and method for estimating whether an image is blurred
KR101758684B1 (en) Apparatus and method for tracking object
US8068668B2 (en) Device and method for estimating if an image is blurred
CN110544211B (en) Method, system, terminal and storage medium for detecting lens attached object
US10452922B2 (en) IR or thermal image enhancement method based on background information for video analysis
CN109413411B (en) Black screen identification method and device of monitoring line and server
CN110532875B (en) Night mode lens attachment detection system, terminal and storage medium
Jiang et al. Car plate recognition system
CN111027535A (en) License plate recognition method and related equipment
KR20120111153A (en) Pre- processing method and apparatus for license plate recognition
CN112686252A (en) License plate detection method and device
CN111881917A (en) Image preprocessing method and device, computer equipment and readable storage medium
KR101563543B1 (en) Lpr system for recognition of compact car and two wheel vehicle
CN106778765B (en) License plate recognition method and device
CN110363192B (en) Object image identification system and object image identification method
KR102506971B1 (en) Method and system for recognizing license plates of two-wheeled vehicles through deep-learning-based rear shooting
JP7264428B2 (en) Road sign recognition device and its program
KR101696519B1 (en) Number plate of vehicle having boundary code at its edge, and device, system, and method for providing vehicle information using the same
KR100801989B1 (en) Recognition system for registration number plate and pre-processor and method therefor
KR101875786B1 (en) Method for identifying vehicle using back light region in road image

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20180921

Year of fee payment: 4