KR101563543B1 - Lpr system for recognition of compact car and two wheel vehicle - Google Patents
- Publication number
- KR101563543B1 (Application No. KR1020150057097A)
- Authority
- KR
- South Korea
- Prior art keywords
- vehicle
- image
- original image
- area
- smear
- Prior art date
Classifications
- G06K9/3258—
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/015—Detecting movement of traffic to be counted or controlled with provision for distinguishing between two or more types of vehicles, e.g. between motor-cars and cycles
- G06K2209/15—
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Processing (AREA)
Abstract
Description
The present invention relates to an LPR system for recognizing a light vehicle and a motorcycle. More particularly, the present invention relates to an LPR system capable of easily charging a fee by distinguishing a light vehicle from a two-wheeled vehicle according to the ratio of the area occupied by the vehicle in the image.
In spite of the rapid growth of demand for automobiles due to economic growth and income increase, the manpower to manage the road situation and traffic situation is insufficient. Therefore, efforts are being made to overcome the existing poor traffic management system with limited personnel.
As part of this effort, the development of systems for automatic recognition of vehicles (including number recognition) is actively under way. Much research has been done in various fields, such as traffic enforcement, traffic volume investigation, arrest of stolen vehicles, control of access vehicles, and parking facility management, through vehicle recognition or vehicle number (character) recognition.
In particular, the processes for recognizing characters on a license plate differ from those for other characters, and robust processing methods must be considered because distortion occurs due to environmental influences such as camera noise, illumination change, and weather. However, the contents of a vehicle's license plate area are limited due to its inherent characteristics, and its structure is simpler than that of general character (pattern) recognition. For this reason, License Plate Recognition (LPR) has become the most common system for efficiently handling the growing demand for vehicles, the supply of manpower, and the management of parking space resources.
LPR, or number recognition technology, was first developed in the UK in 1976. Over the following decades, as the technology evolved and market demand grew, LPR systems expanded steadily across Southeast Asia and other European countries, and the LPR market is now growing significantly in North America, driven by strong demand for effective crime prevention technologies that has opened the wider market.
Previous vehicle number identification (LPR) or automated LPR (ALPR) systems read vehicle license plates by applying optical character recognition (OCR) to images obtained from cameras. In recent years, such systems have been operated efficiently as parking management systems. Currently, LPR systems solve the problems of manpower supply, labor cost burden, and fee leakage by adjusting parking charges in relation to the parking environment. As demand for LPR systems constantly increases, technological change and development continue.
However, when parking tickets are used, operational problems arise, such as loss or damage of tickets, waste of resources, and inconvenience when users carry no cash or only large bills.
In addition, an unmanned automation system is required in order to solve the problems accompanying the rapid increase of privately owned vehicles due to improved quality of life: shortage of parking space, inefficient operation management, and inconvenience to users.
Generally, in an unmanned automation system (hereinafter referred to as an LPR system), a loop system is mainly used as the means of detecting a vehicle, but a non-buried detector is preferable because burial construction inconveniences nearby citizens and complicates maintenance. To replace the loop, an ultrasonic sensor or a Doppler sensor is used to detect the vehicle or classify its type.
As such, the technology and coverage of these systems are gradually expanding, based on the three basic components of a typical LPR system.
The first is acquiring the image source from a camera or video; the second is extracting the vehicle number from the camera or video input in the core engine of the LPR system; and the third is recognizing and matching the extracted number characters, or integrating with other systems.
The present invention proposes a method of classifying the types of vehicles photographed in the LPR system structure. Conventional LPR systems have focused on technology development related to license plate recognition of a vehicle, but incorporating a vehicle-type classification function can be an important issue in an LPR system because the charge may vary depending on the type of vehicle.
However, the conventional system, which only uses a loop coil to detect the approach of a vehicle and recognizes the license plate characters of the vehicle, is limited in constructing an unmanned system that can systematically charge fees.
Accordingly, it is required to develop an LPR system capable of recognizing various types of vehicles and applying differential charging.
Disclosure of Invention
Technical Problem
The present invention has been made in order to solve the above-mentioned problems, and an object of the present invention is to provide a user with an LPR system capable of easily charging a fee by distinguishing a light vehicle from a two-wheeled vehicle.
In addition, an object of the present invention is to provide a user with an LPR system that accurately identifies light vehicles and two-wheeled vehicles, systematically charges fees according to vehicle type, prevents blocking-bar accidents, processes quickly without high complexity, and reduces cost.
It is another object of the present invention to provide a user with an LPR system capable of greatly improving the number recognition performance by correcting distortion of an image generated in a low illuminance area and a high illuminance area.
It is to be understood that both the foregoing general description and the following detailed description of the present invention are exemplary and explanatory and are not intended to limit the invention to the precise form disclosed.
An LPR system related to an example of the present invention for solving the above-mentioned problems includes a photographing module for photographing an original image including the license plate of a vehicle; a vehicle recognition module that receives the original image photographed by the photographing module and determines whether the vehicle photographed in the original image belongs to a vehicle classified into a predetermined category; and a number recognition module that receives the original image photographed by the photographing module and recognizes the characters of the license plate of the vehicle, wherein the vehicles classified into the predetermined category are the light vehicle and the motorcycle.
The vehicle recognition module may include: an area detection unit that detects the vehicle area, that is, the set of pixels of the original image in which the vehicle appears; and a vehicle type determination unit that determines whether the vehicle belongs to the light vehicle or the motorcycle based on a predetermined determination factor, wherein the determination factor includes the ratio occupied by the vehicle area in the original image.
The area detection unit may be configured to extract the license plate area of the vehicle and the headlight area of the vehicle within the detected vehicle area and, using the extracted license plate area and headlight area, measure the overall width and overall length of the vehicle.
In addition, the determination factor may further include the overall width and overall length of the vehicle measured by the area detection unit.
The vehicle type determination unit may determine that the vehicle belongs to the two-wheeled vehicle when the ratio occupied by the vehicle area in the original image is within a first range, and that the vehicle belongs to the light vehicle when the ratio is within a second range, wherein the upper limit of the first range is smaller than the upper limit of the second range and the lower limit of the first range is smaller than the lower limit of the second range.
When the upper limit of the first range is larger than the lower limit of the second range, so that the ratio occupied by the vehicle area in the original image belongs to the first range and the second range at the same time, the measured overall width and overall length of the vehicle can be used as an additional determination factor.
In addition, the number recognition module may include a discriminating unit for classifying the original image into one of a low-illuminance image, a high-illuminance image, and an unprocessed image based on a discrimination factor related to the original image; and an image processing unit for generating a corrected image from the original image. The image processing unit may include a low-illuminance image processing unit that generates the corrected image from the original image using an improved clipped histogram equalization method when the original image is classified as the low-illuminance image by the discriminating unit, and a high-illuminance image processing unit that removes a smear generated in the original image to generate the corrected image when the original image is classified as the high-illuminance image by the discriminating unit. A target image is used for character recognition of the license plate of the vehicle: when the original image is classified as the low-illuminance image or the high-illuminance image by the discriminating unit, the target image is the corrected image, and when the original image is classified as the unprocessed image, the target image is the original image.
The discrimination factor may be the variation of the intensity of light of the original image converted into gray scale; if the variation of the intensity of light of the original image is higher than a predetermined threshold value, the discriminating unit classifies the original image as the unprocessed image.
When the variation of the intensity of light of the original image is lower than the threshold value, the discriminating unit classifies the original image into one of the low-illuminance image and the high-illuminance image according to the intensity of light of the original image.
Further, the improved clipped histogram equalization method may include: determining an adaptive cut ratio for the original image; generating a cut histogram in which the upper region of the histogram of the original image is removed according to the determined adaptive cut ratio; and clipping at least a part of the removed upper region and reassigning the clipped portion to the cut histogram to generate the corrected image.
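To make the three steps above concrete, the following minimal Python sketch runs them on a gray-scale pixel list. The patent's Equation (1) for the adaptive cut ratio is not reproduced in the source, so `adaptive_cut_ratio` below is a hypothetical stand-in based on the mean gray value; the uniform reassignment of the clipped excess is likewise an illustrative choice.

```python
def adaptive_cut_ratio(pixels):
    """Hypothetical adaptive cut ratio derived from the mean gray value."""
    return sum(pixels) / (len(pixels) * 255.0)

def clipped_histogram_equalization(pixels, levels=256):
    # 1. Build the histogram of the gray-scale image.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # 2. Cut the histogram at a ceiling set by the adaptive ratio.
    ceiling = max(1, int(max(hist) * (1.0 - adaptive_cut_ratio(pixels))))
    excess = sum(max(0, h - ceiling) for h in hist)
    clipped = [min(h, ceiling) for h in hist]
    # 3. Reassign the removed upper region uniformly to the cut histogram.
    clipped = [h + excess // levels for h in clipped]
    # Standard equalization via the cumulative distribution.
    total = sum(clipped)
    cdf, running = [], 0
    for h in clipped:
        running += h
        cdf.append(running)
    lut = [round((c * (levels - 1)) / total) for c in cdf]
    return [lut[p] for p in pixels]

dark = [10, 12, 12, 14, 14, 14, 16, 200]   # mostly dark image
print(clipped_histogram_equalization(dark))
```

Because the ceiling limits how much any single gray level contributes to the cumulative distribution, the contrast stretch is gentler than plain equalization, which is the point of the clipping step.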
Further, the adaptive cut ratio is determined by an equation computing the ratio from the gray values of the original image (the equation is rendered as an image in the source and is not reproduced here).
The high-illuminance image processing unit may include: a detection unit that detects the position of a first column, among the columns constituting the original image, in which the smear has occurred; and a removal unit that removes the smear from the original image based on the detected position information of the first column.
The detection unit may include an extraction unit for extracting a signal distribution curve for each column constituting the original image using the original image input to the number recognition module; And a conversion unit converting the signal distribution curve into a normal distribution curve, wherein the signal distribution curve represents a sum of gray values of a plurality of pixels constituting each column constituting the original image.
The detection unit may generate a binary pattern map by comparing the normal distribution curve with a preset threshold value: in a region where the normal distribution curve is smaller than the threshold value, the binary pattern map has a value of 0, and in a region where it is larger than the threshold value, the binary pattern map has a value of 1. The region having the value of 1 in the binary pattern map corresponds to the first column of the original image.
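A minimal sketch of the column-wise detection described above, assuming a gray-scale image given as a 2-D list. The min-max normalization is an illustrative stand-in for the patent's normal-distribution-curve step, and the threshold value is likewise illustrative.

```python
def detect_smear_columns(image, threshold=0.8):
    """image: 2-D list of gray values (rows x cols). Returns smear column indices."""
    cols = len(image[0])
    # Signal distribution curve: sum of gray values per column.
    curve = [sum(row[c] for row in image) for c in range(cols)]
    # Normalize to [0, 1] (stand-in for the normal-distribution conversion).
    lo, hi = min(curve), max(curve)
    span = (hi - lo) or 1
    norm = [(v - lo) / span for v in curve]
    # Binary pattern map: 1 where the curve exceeds the threshold.
    pattern = [1 if v > threshold else 0 for v in norm]
    return [c for c, bit in enumerate(pattern) if bit == 1]

# A smear shows up as a saturated vertical stripe (column 2 here).
img = [
    [10, 12, 255, 11,  9],
    [11, 10, 255, 12, 10],
    [ 9, 11, 255, 10, 12],
]
print(detect_smear_columns(img))  # → [2]
```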
The image processing apparatus may further include a reconstruction unit that reconstructs the original image of the first column from which the smear is removed by using a predetermined interpolation method.
The present invention can provide a user with an LPR system capable of easily charging a fee by distinguishing a light vehicle from a two-wheeled vehicle according to the ratio of the area occupied by the vehicle in the original image.
In addition, the present invention can provide a user with an LPR system that accurately identifies light vehicles and two-wheeled vehicles, systematically charges fees according to vehicle type, prevents blocking-bar accidents, processes quickly without high complexity, and reduces cost.
In addition, the present invention can provide a user with an LPR system capable of greatly improving the number recognition performance by correcting distortion of an image generated in a low illuminance area and a high illuminance area.
It should be understood, however, that the effects obtainable by the present invention are not limited to the above-mentioned effects, and other effects not mentioned will be clearly understood by those skilled in the art to which the present invention belongs.
BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate preferred embodiments of the invention and, together with the description, serve to provide a further understanding of the technical idea of the invention; the invention should not be construed as limited to them.
1 is an embodiment of a typical LPR system associated with the present invention.
2 shows an example of a block diagram of the LPR system of the present invention.
FIGS. 3A and 3B schematically illustrate histogram truncation of an image associated with the present invention.
4 is a schematic diagram for explaining an improved cut histogram smoothing that can be applied to the present invention.
FIG. 5 shows an example of a signal distribution curve according to each column of an image obtained in the photographing module.
FIG. 6 shows a normal distribution curve obtained by taking the signal distribution curve of FIG. 5 as an input.
7A to 7C show an example of the result of judging the type of vehicle according to the present invention.
8 is a flow chart of an adaptive probability based low illumination image enhancement and smear reconstruction processing method related to an example of the present invention.
9 is a flowchart of a method for determining the type of vehicle in the LPR system related to an example of the present invention.
10A to 10C illustrate an example of a result of processing a low-illuminance image according to the present invention.
11A to 11C show an example of a result obtained by restoring smear generated in a high-illuminance image according to the present invention.
Hereinafter, a preferred embodiment of the present invention will be described with reference to the drawings. It should be noted that the embodiments described below do not unduly limit the contents of the present invention described in the claims, and the entire constitution described in this embodiment is not necessarily essential as a means for solving the present invention.
The general LPR system is used to collect the fee according to the operation fee system with the information of entering / leaving the vehicle, or to collect the information on the entering / leaving status of unspecified vehicles without operating fee system. Furthermore, it is presented as an integrated direction to observe the movement of vehicles in geographically dispersed organizations. In this regard, Figure 1 is one embodiment of a typical LPR system associated with the present invention.
As shown in FIG. 1, the LPR system detects the license plate area of a vehicle captured by a camera, recognizes the license plate characters of the vehicle using number and character detection methods, and transmits the characters to a local PC or server for management and supervision.
The number information of these vehicles is used for collecting fees according to entrance, departure, and fare system, analyzing vehicle traffic, analyzing regional congestion, and analyzing vehicle access by time of day. Through this, smooth operation management and user convenience are maximized.
Hereinafter, an LPR system capable of classifying a vehicle type of a vehicle photographed in an image is proposed.
< LPR System Configuration>
Hereinafter, the configuration of the LPR system according to the present invention will be described in detail with reference to the drawings.
FIG. 2 shows an example of a block diagram of the LPR system of the present invention. Referring to FIG. 2, the LPR system may include a photographing module, a number recognition module, and a vehicle recognition module.
However, the components shown in FIG. 2 are not essential, so an LPR system having more or fewer components may be implemented.
The photographing module photographs an original image including the license plate of a vehicle and supplies the photographed original image to the number recognition module and the vehicle recognition module.
The photographing module may be implemented as a camera employing a CCD type image sensor.
In a CCD type image sensor, a smear phenomenon occurs due to its signal processing method. The smear phenomenon refers to a bright vertical line appearing on the screen when strong reflected light from a light source or an illumination lamp is photographed. It often appears when using a high-speed shutter and when shooting very bright objects such as light sources. The CCD type image sensor has a structure in which only one photosite is present in each cell; when the charge that can be stored in one cell overflows, due to reflection and interference between cells, the smear phenomenon occurs.
The smear phenomenon is easily generated in the buffer area used for storing or transferring charge in the image sensor, depending on exposure to light in high-speed shutter settings. The high-speed shutter of a CCD adjusts exposure by the exposure time of the CCD through the shutter of the camera body and by directly controlling the CCD at a shutter speed higher than the synchronization speed. If the shutter of the camera body is open when acquiring an image using the electronic shutter of the CCD, light continues to be incident on the photodiode and the charge overflows the storage space; when the charge of the CCD, which is composed of a vertical array, is then read out, a smear phenomenon is generated.
The smear phenomenon thus generated can distort the photographed image, cause the system that detects or checks the vehicle to misidentify the vehicle shape, and obstruct recognition of the vehicle number.
Meanwhile, the number recognition module receives the original image photographed by the photographing module and recognizes the characters of the license plate of the vehicle. The number recognition module may include a discriminating unit and an image processing unit.
The discriminating unit classifies the original image into one of a low-illuminance image, a high-illuminance image, and an unprocessed image based on a discrimination factor related to the original image.
Specifically, when the variation of the intensity of light of the original image converted into gray scale is higher than a predetermined threshold value, the discriminating unit classifies the original image as the unprocessed image.
On the other hand, when the variation of the intensity of light is lower than the threshold value, the discriminating unit classifies the original image into either the low-illuminance image or the high-illuminance image according to the intensity of light of the original image.
The low-illuminance image processing unit generates the corrected image from an original image classified as a low-illuminance image, using the improved clipped histogram equalization method.
Specifically, according to the improved clipped histogram equalization method, an adaptive cut ratio for the original image is determined, a cut histogram is generated in which the upper region of the histogram of the original image is removed according to the determined adaptive cut ratio, and at least a part of the removed upper region is clipped and reassigned to the cut histogram.
In this regard, FIGS. 3A and 3B schematically illustrate histogram truncation of an image associated with the present invention. FIG. 3A schematically shows the upper end portion of the histogram of the original image being removed according to the conventional CHE scheme, and FIG. 3B schematically shows the upper region of the histogram of the original image being removed according to the improved clipped histogram equalization (A_CHE).
According to the conventional CHE scheme, as shown in FIG. 3A, the upper region of the histogram is removed according to the fixed cut-off ratio. In the CHE system disclosed in Korean Patent No. 10-0756318 (Patent Document 2), a fixed cutting ratio is used.
However, according to the improved clipped histogram equalization (A_CHE) method of the present invention, as shown in FIG. 3B, the cut ratio is determined adaptively in accordance with the original image. Here, the adaptive cut ratio is determined by Equation (1) as a function of the gray values of the original image (the equation is rendered as an image in the source).
FIG. 4 is a schematic diagram for explaining the improved clipped histogram equalization that can be applied to the present invention. The upper region removed according to the adaptive cut ratio is reassigned to the cut histogram by clipping the low-intensity distribution region and the high-intensity distribution region. As shown in FIG. 4, the cut portion of the upper region includes a cut-off range for the low-intensity distribution region and a cut-off range for the high-intensity distribution region.
Here, the low-intensity distribution region and the high-intensity distribution region can be expressed by Equations (2) and (3), respectively, in terms of a boundary value that is arbitrarily set to distinguish low light intensity from high light intensity and the global gray level of the original image. The clipping range for the low-intensity distribution region is expressed by Equation (4), and the clipping range for the high-intensity distribution region by Equation (5), in terms of the cut histogram, the low-illuminance distribution area in gray scale, the high-intensity distribution area in gray scale, and the sum of the gray scales. (Equations (2) through (5) are rendered as images in the source.)
Referring back to FIG. 2, when a smear is generated in an original image classified as a high-illuminance image, the high-illuminance image processing unit removes the smear to generate the corrected image.
The detection unit may determine whether or not smear is generated in the input original image, and may detect the position of the smear generated column (first column) when it is determined that the smear is generated.
The detecting unit may further include an extracting unit and a converting unit. The extracting unit extracts a signal distribution curve for each column constituting the original image using the original image. The converting unit converts the signal distribution curve generated by the extracting unit into a normal distribution curve.
In this regard, FIG. 5 shows an example of a signal distribution curve according to each column of the image obtained by the photographing module, and FIG. 6 shows a normal distribution curve with the signal distribution curve of FIG. 5 as an input.
As shown in FIG. 5, the extracting unit of the detecting unit may convert the input original image into a signal distribution curve of column-unit signals. The signal distribution curve represents the sum of gray values of the plurality of pixels constituting each column of the original image.
Further, as shown in FIG. 6, the converting unit of the detecting unit can convert the signal distribution curve related to the input original image into a normal distribution curve.
Referring back to FIG. 2, the elimination unit may remove the smear generated in the original image based on the position information of the first column detected by the detection unit.
The restoration unit may restore the original image of the first column from which the smear has been removed by using an interpolation method based on patch priority. Specifically, the restoration unit calculates a priority for each of the plurality of pixels in the patch, determines the highest-priority pixel among the calculated priorities, and performs restoration by comparing the similarity between the highest-priority pixel and pixels that do not belong to the first column.
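The patent's restoration uses patch-priority (exemplar-based) inpainting; as a far simpler stand-in, this sketch fills each smear column by linear interpolation between its nearest clean neighbor columns. It illustrates only the idea of reconstructing the removed column from surrounding data, not the priority scheme itself.

```python
def restore_columns(image, smear_cols):
    """Fill each column in smear_cols from the nearest clean columns."""
    smear = set(smear_cols)
    cols = len(image[0])
    out = [row[:] for row in image]
    for c in smear:
        # Nearest clean columns on the left and right of the smear column.
        left = next((k for k in range(c - 1, -1, -1) if k not in smear), None)
        right = next((k for k in range(c + 1, cols) if k not in smear), None)
        for r, row in enumerate(image):
            if left is not None and right is not None:
                t = (c - left) / (right - left)
                out[r][c] = round(row[left] * (1 - t) + row[right] * t)
            else:  # smear touches the border: copy the one clean side
                out[r][c] = row[left if left is not None else right]
    return out

img = [[10, 255, 20],
       [30, 255, 40]]
print(restore_columns(img, [1]))  # → [[10, 15, 20], [30, 35, 40]]
```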
On the other hand, the number recognition module recognizes the characters of the license plate of the vehicle using a target image.
Here, the target image is either the original image or the corrected image. If the original image is classified as a low-illuminance image or a high-illuminance image by the discriminating unit, the target image is the corrected image; if it is classified as an unprocessed image, the target image is the original image.
The image estimating means is used when a high-resolution image generating method is used as the method of improving focus deterioration.
The image estimating means may generate a super-resolution image by up-scaling a low-resolution deteriorated image according to an up-scale coefficient.
When focus deterioration occurs in the target image due to shaking or error, the image estimating means can predict the focused image from the target image.
In the case of the image with focus deterioration, the edge portion of the subject is blurred, and various algorithms can be used to predict the actual edge information. Such an algorithm is widely known to those skilled in the art, and a detailed description thereof will be omitted.
The image estimating means obtains a super-resolution image (SR) from the low-resolution, focus-deteriorated image using the above algorithm, and estimates the focused image using the obtained super-resolution image (SR).
The image generating means is used when the high-resolution image generating method is used as the method of improving focus deterioration.
When the image estimating means generates a super-resolution image from the input low-resolution image according to the up-scale coefficient, the image generating means removes at least part of the focus deterioration using the super-resolution image and calculates the high-resolution image by interpolation. At this time, the interpolation can preferably improve the focus-deteriorated image using bicubic interpolation.
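As a compact illustration of the up-scaling step, the following sketch up-scales a gray image by an integer factor. Bilinear interpolation is used here as a shorter stand-in for the bicubic interpolation named above; both estimate new pixels from weighted neighbors, bicubic simply using a larger 4x4 neighborhood.

```python
def upscale_bilinear(img, s):
    """Up-scale a 2-D gray image (list of lists) by integer factor s."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * s):
        fy = min(y / s, h - 1)                 # source row coordinate
        y0 = int(fy); y1 = min(y0 + 1, h - 1); dy = fy - y0
        row = []
        for x in range(w * s):
            fx = min(x / s, w - 1)             # source column coordinate
            x0 = int(fx); x1 = min(x0 + 1, w - 1); dx = fx - x0
            top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
            bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
            row.append(round(top * (1 - dy) + bot * dy))
        out.append(row)
    return out

print(upscale_bilinear([[0, 10], [20, 30]], 2))
```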
The relationship between the focus-deteriorated image and the high-resolution image can be expressed by Equation (6), in terms of the generated high-resolution image, the focus-deteriorated low-resolution input, and the super-resolution image interpolated from the low-resolution image according to the up-scale coefficient (the equation is rendered as an image in the source).
The image generating means may generate a focused high-resolution image from the focus-deteriorated target image according to Equation (6); the details of the generation are self-evident to the ordinary technician and are omitted.
The image restoration means is used when the detailing method is used as the method of improving focus deterioration.
The image restoration means receives the high-resolution image generated from the focus-deteriorated image and applies the detailing method to improve its sharpness.
When the image generating means that received the target image generates the high-resolution image, the image restoration means can remove a part of the focus deterioration from the target image by using the generated high-resolution image.
The removal of such focus deterioration can be performed multiple times. That is, the image generating means calculates the high-resolution image using the super-resolution image estimated by the image estimating means, and the image restoration means then improves the sharpness by the detailing method.
The image restored by the image restoration means is input to the image estimating means again, the super-resolution image is estimated again, and the high-resolution image is again calculated by the image generating means. This repeated process can be iterated according to a set parameter value.
In addition, a directionally adaptive guided filter can be used as an example of a detailing method applicable to the image restoration means. The guided filter is a local linear filter and has the property of smoothing while preserving edge components, like a bilateral filter. This prevents the edges of the image from being blurred and maintains the base layer.
Although the image quality is improved by the image generating means, local smoothing and artifact defects may occur in edge regions, that is, near the subject or characteristic information. To further improve these regions and obtain precise results, a clear high-quality image can be obtained by using the directionally adaptive guided filter.
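The guided-filter idea above can be sketched in one dimension as the standard local linear model: within each window, the output is an affine function of the guidance signal. The window radius and regularization term below are illustrative choices, not values from the patent.

```python
def box_mean(x, r):
    """Mean of x over a window of radius r, shrinking at the borders."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def guided_filter_1d(I, p, r=2, eps=1e-3):
    """Filter input p using guidance I; returns the smoothed signal."""
    mean_I = box_mean(I, r)
    mean_p = box_mean(p, r)
    corr_Ip = box_mean([a * b for a, b in zip(I, p)], r)
    corr_II = box_mean([a * a for a in I], r)
    var_I = [c - m * m for c, m in zip(corr_II, mean_I)]
    cov_Ip = [c - mi * mp for c, mi, mp in zip(corr_Ip, mean_I, mean_p)]
    # Local linear coefficients: q = a * I + b within each window.
    a = [cv / (v + eps) for cv, v in zip(cov_Ip, var_I)]
    b = [mp - ai * mi for mp, ai, mi in zip(mean_p, a, mean_I)]
    mean_a = box_mean(a, r)
    mean_b = box_mean(b, r)
    return [ma * i + mb for ma, i, mb in zip(mean_a, I, mean_b)]

step = [0.0] * 5 + [10.0] * 5        # a sharp edge
print(guided_filter_1d(step, step))  # the edge is preserved, flats stay flat
```

Because the coefficient `a` approaches 1 where the guidance variance is large (edges) and 0 where it is small (flat regions), the filter smooths noise while keeping edges, which is the property the text attributes to it.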
As a detailing method that can be applied to the present invention, the guided filter performs the operation of Equation (7), which expresses the output at each pixel position in terms of a filter kernel, an input image, and a linear guidance image. The filter kernel of Equation (7) can in turn be expressed as Equation (8), in terms of a linearly transformed image, a normalization parameter, the mean of the transformed image, and the pixel location at the kernel center. (Equations (7) and (8) are rendered as images in the source.)
Referring again to FIG. 2, the vehicle recognition module receives the original image photographed by the photographing module, determines whether the photographed vehicle belongs to a vehicle classified into a predetermined category, and may include an area detection unit and a vehicle type determination unit.
The area detection unit detects the vehicle area, that is, the set of pixels of the original image in which the vehicle appears.
Further, the area detection unit can extract the license plate area of the vehicle and the headlight area of the vehicle within the detected vehicle area.
That is, the license plate area and its center of gravity can be detected using the histogram projection method, and the headlight area and its center of gravity can be detected using the blob method.
The histogram projection method is implemented by summing, vertically and horizontally, the shading values of pixels sharing the same coordinate component. Since the numbers of the plate are concentrated in the center of the summed data, the plate region and its center of gravity can be detected by projecting the coordinates of that region onto the original image.
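A sketch of the projection idea above, assuming the candidate region has already been binarized to a 2-D 0/1 list; the band-width parameter `k` and the toy image are illustrative.

```python
def projections(binary):
    """Row and column projections of a 0/1 image (foreground counts)."""
    rows = [sum(r) for r in binary]
    cols = [sum(r[c] for r in binary) for c in range(len(binary[0]))]
    return rows, cols

def plate_band(profile, k=2):
    """Start index of the k-wide window with the largest projection sum."""
    return max(range(len(profile) - k + 1), key=lambda i: sum(profile[i:i + k]))

plate = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 1, 1],   # character strokes concentrate here
    [0, 1, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
rows, cols = projections(plate)
print(rows, plate_band(rows))
```

Running the same search over the column projection brackets the plate horizontally, giving the region whose coordinates are projected back onto the original image.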
The blob method applied to the detection of the headlight area and its center of gravity is widely used as an object segmentation method in image processing. In addition to the blob method, differential and regional-extrema methods can additionally be applied to find regions that are brighter or darker than their surroundings. In this way the headlight region and its center of gravity, a characteristic feature of the vehicle in which locally identical components are gathered, can be detected easily.
The vehicle type determination unit determines whether the vehicle belongs to the light vehicle or the motorcycle based on a predetermined determination factor.
It is most accurate to judge the vehicle type based on the size of the vehicle photographed in the original image. Accordingly, the determination factor of the vehicle type determination unit includes the ratio occupied by the vehicle area in the original image.
For example, when the ratio occupied by the vehicle area is within the range of 30 to 50 percent, the vehicle can be set to be recognized as a motorcycle, and when the ratio is within the range of 50 to 80 percent, as a general vehicle.
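Using the percentage ranges quoted above, a minimal classifier might look as follows. The exact boundaries and the fallback label are illustrative assumptions; the real system would fall back to the measured vehicle dimensions when the ratio is inconclusive.

```python
def classify_vehicle(vehicle_pixels, image_pixels):
    """Classify by the ratio the vehicle area occupies in the original image."""
    ratio = 100.0 * vehicle_pixels / image_pixels
    if 30 <= ratio < 50:
        return "two-wheeled vehicle"
    if 50 <= ratio <= 80:
        return "general vehicle"
    return "undetermined"  # fall back to measured width/length

print(classify_vehicle(40_000, 100_000))   # → two-wheeled vehicle
print(classify_vehicle(60_000, 100_000))   # → general vehicle
```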
The determination factor may further include the overall width and overall length of the vehicle measured by the area detection unit. If the vehicle type determination unit determines that the light vehicle and the two-wheeled vehicle cannot be distinguished by the vehicle area ratio alone, the measured overall width and overall length can be used as an additional determination factor.
In this connection, FIGS. 7A to 7C show an example of the result of judging the type of vehicle according to the present invention. FIG. 7A shows detection of the vehicle area A for an ordinary vehicle, FIG. 7B for a light vehicle, and FIG. 7C for a two-wheeled vehicle.
The vehicle type determination unit determines the type of each photographed vehicle from the ratio occupied by the detected vehicle area A in the original image.
<Operation Method of the LPR System>
Hereinafter, an operation method of the LPR system according to the present invention will be described in detail with reference to the drawings.
FIG. 8 is a flowchart of an adaptive-probability-based low-illuminance image enhancement and smear reconstruction processing method related to an example of the present invention.
Referring to FIG. 8, the photographing module photographs the vehicle to obtain an original image.
Next, the original image is input to the number recognition module.
Then, the discrimination unit classifies the original image into one of a low-illuminance image, a high-illuminance image, and an unprocessed image based on a discriminant related to the original image.
Subsequently, the image processing unit improves the image quality of the original image according to the classification (S40).
In step S40, an improved clipped histogram equalization method, one of the histogram equalization methods, is used to improve the image quality of the original image.
In general, histogram equalization improves an image in which the distribution of brightness values is shifted to one side or is not uniform by redistributing the brightness values uniformly. The ultimate goal of histogram equalization is to create a histogram with a uniform distribution. However, because the brightness values can change significantly depending on the input image and unwanted noise can be amplified, the method used here increases the contrast while maintaining the average brightness value.
Because histogram processing is a simple way to correct degraded image quality, various methods exist. Typical examples are Bi-Histogram Equalization, Recursive Mean-Separate Histogram Equalization, and Clipped Histogram Equalization.
Among them, the Clipped Histogram Equalization (CHE) method is the most effective: it maintains the amount of information in the image and causes no image distortion. The method limits the maximum value of the histogram by setting an arbitrary maximum and cutting off the upper portion of the histogram exceeding it, then redistributing the removed amount over the entire range. The threshold should be set to have the minimum range after the histogram conversion, and a dynamic threshold value that follows changes in image features can be set by assigning an initial threshold according to the image. Because the upper part of the histogram is reassigned over the whole range, the method is robust against noise, but in general images its contrast improvement is inefficient compared with other methods.
Therefore, in the present invention, instead of redistributing the cut-off upper part of the histogram over the entire range, the histogram is divided into several sections and the biased distribution is evenly redistributed to the neighboring sections of each histogram section in proportion to distance. We propose this improved form of CHE, the A_CHE method, as a way to improve the image contrast.
As a result, the low-illuminance area is improved from the dynamic range, and furthermore, the high-illuminance area can be processed with the improved image having a strong form.
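A baseline clipped-histogram-equalization step can be sketched as below. The uniform reassignment of the excess shown here is the plain CHE behaviour; the section-wise, distance-weighted redistribution of the proposed A_CHE is not fully specified in this text, and the 3 % clip ratio is an arbitrary illustration:

```python
import numpy as np

def clipped_histogram_equalization(img, clip_ratio=0.03):
    """Plain CHE sketch: clip the histogram at a ceiling, redistribute the
    excess uniformly, then equalize with the resulting CDF.

    clip_ratio is an illustrative assumption; A_CHE would instead spread
    the excess over neighbouring histogram sections by distance ratio.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    ceiling = clip_ratio * img.size
    excess = np.maximum(hist - ceiling, 0.0).sum()
    hist = np.minimum(hist, ceiling) + excess / 256.0  # uniform reassignment
    cdf = hist.cumsum() / hist.sum()
    lut = np.round(255.0 * cdf).astype(np.uint8)       # gray-level mapping
    return lut[img]
```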
Meanwhile, in step S42, the smear is detected and removed from the original image using image processing.
After receiving the original image from the photographing module, the extraction unit of the detection unit extracts a signal distribution curve for each of the columns constituting the original image.
Also, as shown in FIG. 6, the conversion unit of the detection unit can convert the signal distribution curve of the input original image into a normal distribution curve. That is, when a smear is generated at a specific location by sunlight or by passive light reflected from the vehicle, it can generally be expressed as a normal distribution.
After the original image is expressed as a normal distribution, the presence or absence of a smear is determined. Due to its characteristics, a smear is generated along the columns of the image, occurring in column regions that have white, bright shapes.
Thus, the gray values are summed along the distribution direction for the smear and the other sections of the signal distribution curve, and the maximum estimate of these column sums is found. When a portion exists in the normal distribution curve whose frequency is significantly higher than that of the other portions, it can be determined that a smear has been generated in the original image.
After the presence of the smear in the original image is determined, the position of the smear is determined: the portion of the signal distribution curve whose frequency is remarkably higher than that of the other portions is judged to be the region in which the smear occurs.
After it is determined that a smear region exists and its position is determined, the smear is removed and a binary pattern map (alpha map) for restoration is generated.
The smear intensity and the exact background intensity are estimated to remove the smear. A binary pattern map is generated by applying an average filter to each column of the original image.
In this case, a column of the binary pattern map has a value of 1 when its signal intensity on the normal distribution is larger than a predetermined threshold value, and a value of 0 when it is smaller.
After the binary pattern map is generated, the smear position is rearranged using the binary pattern map. When each pixel column is analyzed, it consists of vehicle, noise, background, and smear components. The intensity of the smear signal is estimated by reconstructing the smear-region search size in order to align the gray values of the pixels in the column using the applied filter, and the accurate position is determined and rearranged.
After rearranging the smear position, the smear is removed. Using the determined area and intensity of the smear, smear can be removed from the entire image.
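The column-sum detection and binary pattern map described above can be sketched as follows; the weight of 2.0 on the standard deviation is an illustrative choice, not a value given in the text:

```python
import numpy as np

def smear_binary_map(img, weight=2.0):
    """Flag columns whose gray-value sum exceeds mean + weight * std.

    Returns a per-column binary pattern map: 1 = smear column, 0 = otherwise.
    The weight is an assumed tuning parameter.
    """
    col_sum = img.sum(axis=0).astype(float)            # one sum per column
    thresh = col_sum.mean() + weight * col_sum.std()   # adaptive threshold
    return (col_sum > thresh).astype(np.uint8)
```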
After removing the smear, the original image is restored. There are various methods of restoring the original image; in the present invention, inpainting is applied. In particular, although the image could be restored using an interpolation method, interpolation is not suitable for larger regions, so the image is restored using patches of a certain size taken from the periphery.
Then, the image reconstruction unit generates a reconstructed image by applying a focus-deterioration removal method, which combines a high-resolution image generation method and a deblurring method, to the target image (S50).
Specifically, in step S50, up-scaling is performed on the target image with focus deterioration according to an up-scale coefficient to generate a super-resolution image, and a high-resolution image is calculated by applying bicubic interpolation to the generated super-resolution image. The process may be repeated according to the values of the predetermined coefficients, preferably until the focus deterioration is no longer improved.
After the above process is repeatedly performed, a high-resolution image in which part of the focus deterioration has been removed can be obtained. A deblurring method can then be applied to restore the high-resolution image thus produced to a clear image with good visual quality.
However, the step S50 is not an essential step, and it is possible to proceed to the step S60 while omitting the step S50.
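A toy version of the S50 loop is sketched below. Nearest-neighbour upsampling and a 5-point unsharp mask stand in for the bicubic interpolation and deblurring of the text, which are not specified in detail here:

```python
import numpy as np

def upscale_and_sharpen(img, factor=2, iterations=1):
    """Upscale, then sharpen to counter focus deterioration (toy version).

    Nearest-neighbour upsampling and a 5-point unsharp mask are
    simplifications standing in for bicubic interpolation / deblurring.
    """
    out = img.astype(float)
    for _ in range(iterations):
        # up-scaling according to the up-scale coefficient
        out = np.repeat(np.repeat(out, factor, axis=0), factor, axis=1)
        # crude interior smoothing with a 5-point average
        blur = out.copy()
        blur[1:-1, 1:-1] = (out[:-2, 1:-1] + out[2:, 1:-1] +
                            out[1:-1, :-2] + out[1:-1, 2:] +
                            out[1:-1, 1:-1]) / 5.0
        # unsharp masking: add back the high-frequency residual
        out = np.clip(out + (out - blur), 0.0, 255.0)
    return out
```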
Then, the number recognition module performs character recognition of the license plate of the vehicle using the target image (S60).
For character recognition, one of three license plate position detection methods can be executed first. The first detects the feature region of the license plate using vertical and horizontal edge information from the photographed image. The second detects the position of the license plate by scan-data analysis. The third detects the exact license plate by directly searching for numbers and letters.
When the position of the license plate is detected, the recognition algorithm recognizes the characters by template matching, classifying the numbers and letters (Hangul consonants and vowels) in detail and re-confirming the recognized characters, thereby minimizing errors in decoding the characters.
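The first detection method can be sketched as below. That plate rows are rich in vertical edges is the standard rationale; the band width of two rows on each side of the peak is an arbitrary illustration:

```python
import numpy as np

def plate_band_by_vertical_edges(img):
    """Return a candidate band of rows likely to contain the plate.

    License plates are rich in vertical edges, so the horizontal gradient
    magnitude is summed per row and the strongest row is taken; the +/- 2
    row margin is an assumed band width.
    """
    grad = np.abs(np.diff(img.astype(float), axis=1))  # vertical-edge strength
    row_energy = grad.sum(axis=1)
    r = int(np.argmax(row_energy))
    return max(0, r - 2), r + 2
```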
Hereinafter, the smear processing of the high-illuminance image processing unit will be described in more detail.
A smear arises from the characteristics of the CCD camera: depending on the positions of the camera and the object (vehicle), strong light from a source at the same position, or light reflected by the vehicle, produces a distorted image.
In the proposed method, the smear columns are searched in the still image in order to remove the smear. To reduce the amount of calculation, only the lowermost region of the region of interest (ROI) is searched, and a smear is determined to exist when the distribution of the corresponding component has a certain value or more.
In practice, it is difficult to find the exact location, because the distributions of photons reaching the object from sunlight occur irregularly across the object, background, noise, and smear regions, and more than one such region may exist.
Therefore, to analyze the characteristics of the smear, the smear region is assumed to lie in the white, bright columns transmitted through the supersaturated object, and the gray characteristics are analyzed and detected. The sum of the gray values and the maximum peak of the distribution curve of these sums are found along the distribution direction, not only in the smear but also in the other sections. The other areas of the curve show components that are relatively similar to, or even smoother than, the background and vehicle components.
First, the smear brightness intensity curve model can be expressed as Equation (9) below:

S(j) = Σ_{i=1..M} f(i, j)

Here, M × N is the size of the original image, and i and j denote rows and columns, respectively. S(j) is the sum of the gray values in the j-th column, and f(i, j) represents the gray value of the pixel at position (i, j).

On the other hand, the threshold value T set for the smear region is expressed by Equation (10) below:

T = μ + ω·σ

Here, μ is the mean according to the statistical analysis of the whole image, σ is the standard deviation, and ω is a weight. Regarding the correlation between Equations (9) and (10), as described above, the smear phenomenon occurs at the highest estimate among the gray-sum curves, which can be expressed by Equation (11) below:

S(j) > T

That is, when the smear brightness intensity is higher than the threshold value, the column is determined to belong to the smear region.
After finding the smear position coordinates, the associated start coordinate j_s and end coordinate j_e can be obtained. However, it is possible to treat a region wider than the obtained start and end coordinates as the smear occurrence region. For example, as shown in Equation (12) below, the region from 2 pixels before the start coordinate to 2 pixels after the end coordinate can be determined as the smear occurrence area:

j'_s = j_s - 2,  j'_e = j_e + 2

Here, j'_s and j'_e denote the resulting smear zone. In addition, the smear can be detected and removed by estimating the intensity of the smear in the image in which the smear is generated and the background intensity in the state in which the smear is removed.
An average filter is applied to the signal strength of each column, and the applied filter is used to reconstruct the smear-region search size so as to align the gray values of the pixels in each column.
Equation (13) is an expression for the filter (an order-statistic, median-type filter) applied to the signal strength of each column:

m(x) = v_(r+1),  where v = sort( s(x-r), …, s(x+r) )

Here, m(x) is the intermediate position of the image data corresponding to the coordinates in the search size, r is the radius of the selected location area, and v is the ordered vector over the selected search size. According to the above method, the intensity of the smear component and the intensity of the background component can be accurately obtained. It is thus possible to estimate the smear intensity, determine its difference from the pure background intensity, and determine the position and area of the smear.
After determining the intensity and area of the smear, the smear is removed from the entire image, as in Equation (14) below:

f'(i, j) = f(i, j) - (I_smear - I_background),  for j within the smear zone

Here, f'(i, j) denotes the image from which the smear has been removed; in other words, the estimated smear intensity above the background is subtracted within the smear columns. Through such a method, it is possible to detect the smear in an image containing it, grasp its position, and remove it from the image.
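A simplified version of this estimation-and-subtraction step might look as follows; using the per-column median as the smear intensity and the global median as the background is an assumption for illustration:

```python
import numpy as np

def remove_smear(img, cols):
    """Subtract the estimated smear intensity from the flagged columns.

    Assumptions for illustration: the per-column median estimates the
    smear intensity, the global median estimates the pure background.
    """
    out = img.astype(float)
    background = float(np.median(out))          # pure-background estimate
    for j in cols:
        smear_level = float(np.median(out[:, j]))  # per-column smear intensity
        out[:, j] -= (smear_level - background)
    return np.clip(out, 0.0, 255.0)
```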
The extracted smear result and region (position) are detected, and a non-distorted image is recovered using the generated binary pattern map (alpha map). To handle this, inpainting, a technique for recovering lost portions of an image, is applied among the various restoration methods.
In general, the simplest approach is an interpolation method that uses the surrounding values. This is applicable when the missing area is not large, but it is not suitable for larger areas because the error propagates. Therefore, interpolation is performed until all clipped regions on each layer are interpolated. An interpolation priority is calculated for the pixels on the boundary of the area to be interpolated, and texture and structure interpolation are performed in priority order. The priority combines the confidence of the center point of the patch centered at the boundary pixel and a value associated with the structure within the patch.
In Equation (15):

P(p) = C(p) · D(p),  C(p) = ( Σ_{q ∈ Ψ_p ∩ S} C(q) ) / |Ψ_p|,  D(p) = |∇I⊥_p · n_p| / α

Here, P(p) is the priority of the boundary pixel p, C(p) is the confidence value at the pixel, and D(p) is the term associated with the structure in the patch; Ω represents the area to be interpolated, Ψ_p the patch, |Ψ_p| the size of the patch, ∇I⊥_p the direction unit vector of the structure in the image, n_p the unit vector of the normal to the contour in the image, and α a normalization constant. The reliability of a pixel is set to 1 for pixels included in the source region S and to 0 otherwise. The meaning of these values is simply that a higher priority is given to pixels lying in the direction coinciding with the direction of the structure in the image, so that the structure is restored more correctly and more accurate interpolation is possible.
When the pixel with the highest priority is determined, template matching is performed on it, comparing gray intensities with patches within a certain range to derive the area with the maximum similarity, which is then blended with the patch area on the target pixel.
In the present invention, the sum of absolute differences (SAD) in gray-scale space is used as the template matching method. Finally, in process C, the highest priority among the pixels on the boundary is recalculated, and processes A to C are repeated until all the target pixels are interpolated.
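An exhaustive SAD search of the kind referred to can be sketched as below; in inpainting, the image argument would be restricted to the source region, a detail omitted here for brevity:

```python
import numpy as np

def sad_match(image, patch):
    """Slide the patch over the image and return the top-left corner
    (row, col) with the minimum sum of absolute differences, plus the score."""
    H, W = image.shape
    h, w = patch.shape
    best, best_pos = float("inf"), (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            sad = np.abs(image[y:y+h, x:x+w].astype(float) - patch).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos, best
```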
FIG. 9 is a flowchart of a method of determining the type of vehicle in the LPR system related to an example of the present invention.
Referring to FIG. 9, the original image photographed by the photographing module is input to the vehicle recognition module.
Subsequently, the area detecting unit detects the vehicle area, that is, the pixels occupied by the vehicle in the original image.
Subsequently, the vehicle type determination unit determines whether the vehicle belongs to the light vehicle or the two-wheeled vehicle based on the predetermined determination factor, which includes the ratio of the vehicle area occupied in the original image.
Hereinafter, results of actual application of the present invention will be described with reference to the drawings. FIGS. 10A to 10C show an example of a result of processing a low-illuminance image according to the present invention, and FIGS. 11A to 11C show an example of a result of a smear reconstruction process performed on a high-illuminance image according to the present invention.
In this case, the vehicle image input from the 1.3M camera (PointGray) was used in the LPR system and converted to grayscale and processed. Experimental environment was implemented in Windows 7, CPU 2.8GHz, and 4G memory using Visual Studio 2010 compiler.
FIGS. 10A to 10C are experimental results for a comparative analysis between the existing HE method and the proposed A_CHE method. FIG. 10A is the actual input image, FIG. 10B is the result image of the conventional HE method, and FIG. 10C is the result image of the proposed A_CHE method.
Referring to FIGS. 10A to 10C, although the HE result image is improved over the actual input image, the contrast and low-illuminance areas are still distorted. This is because, when the distribution is reconstructed into uniform regions within the still image, both low-illuminance and high-illuminance regions are processed, and amplified and extended values occur in particular in those regions. In the A_CHE method, which improves on the conventional HE method, the image is reconstructed within a stabilized dynamic range in the low- and high-illuminance regions, and the distortion information is remarkably reduced even in the still image. Because the object of the present invention is the recognition of the car number in the LPR system, the test target was a low-illuminance environment in which the license plate number in the input image is not visible to the naked eye. In the result of the proposed A_CHE processing, the numbers were nevertheless clear enough to be distinguished.
FIGS. 11A to 11C are experimental results for analyzing the smear process. FIG. 11A is the actual input image, FIG. 11B is the smear detection result, and FIG. 11C is the smear restoration result.
FIGS. 11A to 11C show the distribution of the strong region caused by the occurrence of a smear in the region of strong light from the vehicle's light source or reflector, after which the smear region is detected. The detected area is set wider than the smear distribution, and the smear area is restored using the inpainting technique. As a result, it was confirmed that the smear was remarkably reduced even though the smear region was a strong, widely distributed region.
The present invention can also be embodied as computer-readable codes on a computer-readable recording medium. A computer-readable recording medium includes all kinds of recording apparatuses in which data that can be read by a computer system is stored. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like, and may be implemented in the form of a carrier wave (for example, transmission via the Internet) .
In addition, the computer-readable recording medium may be distributed over network-connected computer systems so that computer readable codes can be stored and executed in a distributed manner. In addition, functional programs, codes, and code segments for implementing the present invention can be easily inferred by programmers of the technical field to which the present invention belongs.
In addition, the configurations and methods of the embodiments described above are not limitedly applied to the apparatus and method as described; rather, all or some of the embodiments may be selectively combined so that various modifications can be made.
Claims (15)
A vehicle recognition module that receives the original image photographed by the photographing module to determine whether the vehicle photographed on the original image belongs to a vehicle classified into a predetermined category; And
And a number recognition module for receiving the original image photographed by the photographing module to recognize a character of a license plate of the vehicle,
The vehicle classified into the predetermined category is a light vehicle and a two-wheeled vehicle,
The number recognition module includes:
A discrimination unit for classifying the original image into one of a low-illuminance image, a high-illuminance image, and an unprocessed image based on a discriminant related to the original image; And
And an image processor for generating a corrected image using the original image,
Wherein the image processing unit comprises:
A low-illuminance image processing unit for generating the corrected image from the original image by using an advanced clipped histogram equalization method when the discrimination unit classifies the original image into the low-illuminance image; And
And a high-illuminance image processing unit for removing a smear generated in the original image to generate the corrected image when the discrimination unit classifies the original image into the high-illuminance image,
The number recognition module uses the target image for character recognition of the license plate of the vehicle,
When the discrimination unit classifies the original image into the low-illuminance image or the high-illuminance image, the target image is the corrected image, and when the discrimination unit classifies the original image as the unprocessed image, the target image is the original image,
The improved clipped histogram equalization scheme determines an adaptive truncation ratio for the original image, generates a truncated histogram in which the upper region of the histogram of the original image is removed according to the determined adaptive truncation ratio, and reassigns at least a part of the cut region to the truncated histogram to generate the corrected image,
Wherein the adaptive cutoff ratio is determined by the following equation:
Equation
In the above equation, the two variables denote the adaptive cut rate and a gray value of the original image, respectively.
The vehicle recognition module includes:
An area detecting unit detecting a vehicle area of the original image, the area being a pixel of the vehicle; And
Further comprising: a vehicle type determination unit that determines whether the vehicle belongs to the light vehicle or the two-wheeled vehicle based on a predetermined determination factor,
Wherein the determination factor includes a percentage of the vehicle area occupied by the original image.
Wherein the area detecting unit comprises:
Extracting a license plate area of the vehicle and a headlight area of the vehicle in the detected vehicle area,
And measures the full width of the vehicle and the length of the vehicle using the extracted license plate area of the vehicle and the headlight area of the vehicle.
The determination factor may include,
The full length of the vehicle and the full width of the vehicle measured by the area detecting unit.
The vehicle type determination unit:
If the ratio of the vehicle area occupied by the original image is within the first range, it is determined that the vehicle belongs to the two-wheeled vehicle,
When the ratio of the vehicle area occupied by the original image is within the second range, it is determined that the vehicle belongs to the light vehicle,
Wherein the upper limit of the first range is less than the upper limit of the second range and the lower limit of the first range is less than the lower limit of the second range.
Wherein the upper limit of the first range is greater than the upper limit of the second range,
Wherein, when the ratio of the vehicle area falls within the first range and the second range at the same time, the vehicle type determining unit determines whether the vehicle belongs to the light vehicle or the two-wheeled vehicle based on the full width of the vehicle and the full length of the vehicle.
The discrimination factor is an intensity of light of the original image converted into a gray scale,
Wherein when the intensity of light of the original image is higher than a predetermined threshold value, the discriminating unit classifies the original image into the unprocessed image.
If the intensity of the light of the original image is lower than the threshold value,
Wherein the determination unit classifies the original image into one of the low-illuminance image and the high-illuminance image according to the intensity of light of the original image.
Wherein the high-illuminance image processing unit comprises:
A detector for detecting a position of a first column in which the smear is generated among columns constituting the original image; And
And a removal unit for removing the smear from the original image based on the detected position information of the first column.
Wherein the detection unit comprises:
An extraction unit for extracting signal distribution curves for the respective columns constituting the original image using the original image input to the number recognition module; And
And a conversion unit for converting the signal distribution curve into a normal distribution curve,
Wherein the signal distribution curve represents a sum of gray values of a plurality of pixels constituting each column constituting the original image.
And a reconstruction unit for reconstructing the original image of the first column from which the smear is removed by using a predetermined interpolation method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150057097A KR101563543B1 (en) | 2015-04-23 | 2015-04-23 | Lpr system for recognition of compact car and two wheel vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150057097A KR101563543B1 (en) | 2015-04-23 | 2015-04-23 | Lpr system for recognition of compact car and two wheel vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101563543B1 true KR101563543B1 (en) | 2015-10-29 |
Family
ID=54430636
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150057097A KR101563543B1 (en) | 2015-04-23 | 2015-04-23 | Lpr system for recognition of compact car and two wheel vehicle |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101563543B1 (en) |
-
2015
- 2015-04-23 KR KR1020150057097A patent/KR101563543B1/en active IP Right Grant
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180094812A (en) * | 2017-02-16 | 2018-08-24 | (주)지앤티솔루션 | Method and Apparatus for Detecting Boarding Number |
KR101973933B1 (en) | 2017-02-16 | 2019-09-02 | (주)지앤티솔루션 | Method and Apparatus for Detecting Boarding Number |
KR101965294B1 (en) * | 2018-11-19 | 2019-04-03 | 아마노코리아 주식회사 | Method, apparatus and system for detecting light car |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101553589B1 (en) | Appratus and method for improvement of low level image and restoration of smear based on adaptive probability in license plate recognition system | |
You et al. | Adherent raindrop detection and removal in video | |
CN110473185B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
WO2017028587A1 (en) | Vehicle monitoring method and apparatus, processor, and image acquisition device | |
US9104914B1 (en) | Object detection with false positive filtering | |
US8472744B2 (en) | Device and method for estimating whether an image is blurred | |
KR101758684B1 (en) | Apparatus and method for tracking object | |
US8068668B2 (en) | Device and method for estimating if an image is blurred | |
CN110544211B (en) | Method, system, terminal and storage medium for detecting lens attached object | |
US10452922B2 (en) | IR or thermal image enhancement method based on background information for video analysis | |
CN109413411B (en) | Black screen identification method and device of monitoring line and server | |
CN110532875B (en) | Night mode lens attachment detection system, terminal and storage medium | |
Jiang et al. | Car plate recognition system | |
CN111027535A (en) | License plate recognition method and related equipment | |
KR20120111153A (en) | Pre- processing method and apparatus for license plate recognition | |
CN112686252A (en) | License plate detection method and device | |
CN111881917A (en) | Image preprocessing method and device, computer equipment and readable storage medium | |
KR101563543B1 (en) | Lpr system for recognition of compact car and two wheel vehicle | |
CN106778765B (en) | License plate recognition method and device | |
CN110363192B (en) | Object image identification system and object image identification method | |
KR102506971B1 (en) | Method and system for recognizing license plates of two-wheeled vehicles through deep-learning-based rear shooting | |
JP7264428B2 (en) | Road sign recognition device and its program | |
KR101696519B1 (en) | Number plate of vehicle having boundary code at its edge, and device, system, and method for providing vehicle information using the same | |
KR100801989B1 (en) | Recognition system for registration number plate and pre-processor and method therefor | |
KR101875786B1 (en) | Method for identifying vehicle using back light region in road image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant | ||
FPAY | Annual fee payment |
Payment date: 20180921 Year of fee payment: 4 |