KR101665961B1 - Apparatus for providing image of vehicle and method thereof - Google Patents

Apparatus for providing image of vehicle and method thereof

Info

Publication number
KR101665961B1
Authority
KR
South Korea
Prior art keywords
image
vehicle
feature information
lane
lanes
Prior art date
Application number
KR1020160043420A
Other languages
Korean (ko)
Inventor
안순현
Original Assignee
렉스젠(주)
안순현
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 렉스젠(주), 안순현 filed Critical 렉스젠(주)
Priority to KR1020160043420A priority Critical patent/KR101665961B1/en
Application granted granted Critical
Publication of KR101665961B1 publication Critical patent/KR101665961B1/en

Links

Images

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/017 - Detecting movement of traffic to be counted or controlled identifying vehicles
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • H04N5/2257

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a vehicle image providing apparatus and a method thereof. According to the present invention, there is provided an apparatus for providing an image of a vehicle photographed at a predetermined point on a road, comprising: a front image acquisition unit for acquiring a front image obtained by photographing the front of a vehicle running on a predetermined lane and extracting lane information of the vehicle from the front image; a rear image acquisition unit for acquiring a rear image of the vehicle photographed so as to include a plurality of lanes; a controller for arranging the front image to match the lane of the rear image using the lane information of the vehicle; and a screen output unit for outputting the arranged front image of the vehicle in correspondence with the lane of the rear image.
According to the present invention, the forward and backward images are matched on the basis of the feature information of each image and efficiently provided to the manager, and a vehicle whose number is not recognized because of a deliberately covered or damaged license plate can be effectively searched for or estimated.

Description

APPARATUS FOR PROVIDING IMAGE OF VEHICLE AND METHOD THEREOF

The present invention relates to a vehicle image providing apparatus and method thereof, and more particularly, to a vehicle image providing apparatus and method for easily identifying a vehicle traveling on a plurality of lanes.

As the number of illegal activities involving vehicles increases, systems for controlling such activities or monitoring the areas where they occur are being installed in various places around where people live.

A representative system is a vehicle security system. Generally, a vehicle security system includes a detection module, and a vehicle passing through a set point is photographed by a photographing device to detect / recognize the vehicle number.

Such a vehicle security system is normally installed per lane, detects and recognizes information of moving vehicles, and traces the movement path of an illegal vehicle (a crime vehicle, a delinquent vehicle, etc.) in conjunction with a relevant database (DB).

However, in recent years, the number of vehicles whose numbers cannot be recognized has increased significantly through acts such as altering or covering the license plate. In particular, even though the number recognition rate has been raised to about 99%, the number of unrecognized vehicles is steadily increasing because of deliberately covered or damaged license plates.

As a result, in the case of a vehicle security system installed in a city in the metropolitan area, about 600,000 vehicles are detected per day, but the vehicle number is recognized for only 95-97% of them. In other words, the numbers of nearly 20,000 vehicles per day go unrecognized.

Particularly, in the existing vehicle security system, if the vehicle number is not recognized after the license plate area is detected, the vehicle is simply classified as unrecognized; since no information is provided for estimating the unrecognized vehicle, nothing further can be done about it. Even when related information is displayed, only the vehicle image, the recognition information, and an alarm for the unrecognized vehicle are provided, and no information that would help identify the unrecognized vehicle is given.

Therefore, there is a great need for a new device capable of efficiently detecting vehicles whose license plates have been deliberately covered or damaged, and of outputting information about them.

The background technology of the present invention is disclosed in Korean Patent Registration No. 10-0968433 (published in July 2010).

The present invention is to provide a vehicle image providing apparatus and method for easily identifying a vehicle traveling on a plurality of lanes.

The present invention provides an apparatus for providing an image of a vehicle photographed at a predetermined point on a road, the apparatus comprising: a forward image acquisition unit that acquires a forward image of the front of a vehicle traveling on a predetermined lane and extracts lane information of the vehicle from the forward image; a rear image collection unit that acquires a rear image of the vehicle photographed so as to include a plurality of lanes; a controller that arranges the forward image to match the lane of the rear image using the lane information of the vehicle; and a screen output unit that outputs the arranged forward image of the vehicle in correspondence with the lane of the rear image.

The screen output unit may include a first setting area for outputting the rear image and at least one second setting area for arranging and outputting the arranged forward images at positions corresponding to the lanes included in the first setting area.

The vehicle image providing apparatus may further include an image divider that divides the forward image by a lane to generate a divided image when a plurality of lanes are included in the forward image.

Also, the forward image may be an image of a vehicle whose vehicle number attached to the front is unrecognized.

The vehicle image providing apparatus may further include a feature detecting unit that detects first feature information and second feature information of the vehicle by image-processing the forward image and the backward image, respectively, and an estimating unit that compares the first feature information and the second feature information to estimate a vehicle corresponding to the forward image among at least one vehicle photographed in the backward image.

The screen output unit may further include a third setting area that is arranged in parallel with the second setting area and outputs a rear image of the estimated vehicle corresponding to the second setting area.

The vehicle image providing apparatus may further include a determination unit that compares the first feature information and the second feature information and determines whether or not the estimated vehicle changed lanes while passing the predetermined point; when it is determined that the estimated vehicle changed lanes, the screen output unit may output the forward image of the vehicle on the second setting area, among the one or more second setting areas, whose order matches the changed lane.

The first feature information and the second feature information may include at least one of the size, color, and shape of the license plate, the vehicle number, the type of the vehicle, the color of the vehicle, and the lane of the vehicle.

The vehicle image providing apparatus may further include a storage unit storing the feature information detected from the forward image, and a search unit that searches the storage unit based on at least one of predetermined information input from an administrator and the feature information detected from the rear image; if the feature information of the vehicle is not found, the search unit may classify the vehicle as a forward-image-unrecognized vehicle.

According to another aspect of the present invention, there is provided a method of providing an image of a vehicle photographed at a predetermined point on a road, the method comprising: acquiring a forward image of the front of a vehicle running on a predetermined lane and a rear image of the vehicle photographed so as to include a plurality of lanes; extracting lane information of the vehicle from the forward image; arranging the forward image to be matched with the lane of the rear image using the lane information of the vehicle; and outputting the arranged forward image of the vehicle in correspondence with the lane of the rear image.

In the outputting step, a first setting area for outputting the rear image and a second setting area for arranging and outputting the arranged forward images at positions corresponding to the lanes included in the first setting area may be output.

The vehicle image providing method may further include generating a divided image by dividing the forward image for each lane when the forward image includes a plurality of lanes.

Also, the forward image may be an image of a vehicle whose vehicle number attached to the front is unrecognized.

The vehicle image providing method may further include detecting first feature information and second feature information of the vehicle by image-processing the forward image and the backward image, respectively, and comparing the first feature information and the second feature information to estimate a vehicle corresponding to the forward image among the at least one vehicle photographed in the backward image.

The outputting step may further output a third setting area arranged in parallel with the second setting area and outputting the rear image of the estimated vehicle in correspondence with the second setting area.

The vehicle image providing method may further include comparing the first feature information and the second feature information to determine whether or not the estimated vehicle changed lanes while passing the predetermined point; the outputting step may output the forward image of the vehicle on the second setting area, among the one or more second setting areas, whose order matches the changed lane when it is determined that the estimated vehicle changed lanes.

The first feature information and the second feature information may include at least one of the size, color, and shape of the license plate, the vehicle number, the type of the vehicle, the color of the vehicle, and the lane of the vehicle.

Also, the vehicle image providing method may further include searching a DB storing the vehicle numbers detected in the forward images for the vehicle number detected in the rear image, and, if the number is not found, classifying the information including the rear image and the vehicle number as a number-unrecognized vehicle.

According to the vehicle image providing apparatus and method of the present invention, the forward and backward images are matched on the basis of the feature information of each image and efficiently provided to the manager, and a vehicle whose number cannot be recognized because of a deliberately covered or damaged license plate can be effectively searched for or estimated.

FIG. 1 is a view for explaining a system to which a vehicle image providing apparatus according to an embodiment of the present invention is applied.
FIG. 2 is a detailed view of the vehicle image providing apparatus of FIG. 1.
FIG. 3 is a diagram for explaining a template provided by the screen output unit of FIG. 2.
FIG. 4 is a diagram illustrating a vehicle image providing method of the vehicle image providing apparatus according to the first embodiment of the present invention.
FIGS. 5A and 5B are views for explaining forward and backward images received by the vehicle image providing apparatus.
FIG. 6 is a view illustrating a vehicle image providing method of a vehicle image providing apparatus according to a second embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention.

FIG. 1 is a view for explaining a system to which a vehicle image providing apparatus according to an embodiment of the present invention is applied.

Referring to FIG. 1, a vehicle image providing system according to an embodiment of the present invention includes a photographing apparatus 100 and a vehicle image providing apparatus 200. The vehicle image providing system photographs the front and/or rear of a vehicle traveling on a plurality of lanes, matches the forward and backward images by lane, and provides them to the manager. Here, the plurality of lanes includes lanes that are not separated from one another by a physical divider.

The photographing apparatus 100 includes a front photographing apparatus 110 and a rear photographing apparatus 120 which are installed at the same point on the road so as to photograph vehicles traveling on a plurality of lanes; it photographs the front and rear images of the vehicle and transmits them to the vehicle image providing apparatus 200.

The front photographing apparatus 110 is installed so as to face the front of the vehicle, photographs the front of the vehicle using predetermined detecting means, and provides the photographed front image to the vehicle image providing apparatus 200. Here, the detecting means includes various means for detecting the vehicle, such as loop, sound wave, laser, and video detectors.

The forward image may be an image photographed so as to include a plurality of lanes, or an image photographed for each lane. When a plurality of front photographing apparatuses 110 are installed in correspondence with a plurality of lanes, the forward image may be an image photographed separately for each lane; when a single front photographing apparatus 110 is installed to cover a plurality of lanes, the forward image may be an image photographed so as to include the plurality of lanes.

The rear photographing apparatus 120 is installed so as to face the rear of the vehicle, photographs the rear of the vehicle using predetermined detecting means, and provides the photographed rear image to the vehicle image providing apparatus 200. At this time, the rear photographing apparatus 120 can photograph the rear of the vehicle after a set time from the photographing time of the forward image. According to the embodiment of the present invention, it is preferable that there is one rear photographing apparatus 120 and that the rear image is photographed so as to include a plurality of lanes.

The vehicle image providing apparatus 200 receives the forward image and the backward image of the vehicle from the photographing apparatus 100, and outputs the forward image of the vehicle mapped to the corresponding lane of the backward image using the lane information of the vehicle extracted from the received forward image.

Here, when the forward image is an image including a plurality of lanes, the vehicle image providing apparatus 200 generates a plurality of divided images by dividing the forward image by lane, and then outputs each divided image in correspondence with the corresponding lane of the rear image.

In addition, the vehicle image providing apparatus 200 detects the feature information from each of the forward image and the backward image, and estimates the vehicle corresponding to the forward image among the vehicles photographed in the backward image based on the detected feature information. Then, the estimated backward image is provided to the manager through the template.

Here, the feature information includes any feature of the vehicle that can be extracted from the image, such as the size, color, and shape of the license plate, the vehicle number, the vehicle type, the vehicle color, the vehicle lane, and the photographing time.

In the embodiment of the present invention, each of the front photographing apparatus 110 and the rear photographing apparatus 120 of the photographing apparatus 100 is described as being directly connected to a communication network to transmit the photographed images to the vehicle image providing apparatus 200. However, a field device (not shown) may also be provided between the photographing apparatus 100 and the vehicle image providing apparatus 200.

Here, the field device (not shown) receives the front and rear images photographed by the front photographing apparatus 110 and the rear photographing apparatus 120, and transmits the received front and rear images through the communication network to the vehicle image providing apparatus 200.

As described above, the vehicle image providing apparatus 200 can receive the forward and rear images of the vehicle directly from the photographing apparatus 100, but it may also receive them from a separate field device (not shown) that collects the images of the photographing apparatus 100.

Of course, when the field device (not shown) includes a part of the functions of the vehicle image providing apparatus 200, it may divide the image received from the photographing apparatus 100 by lane and transmit the divided images and the rear images to the vehicle image providing apparatus 200. In addition, it may analyze the divided images and the rear images and provide the resulting feature information to the vehicle image providing apparatus 200 as well.

The vehicle image providing apparatus according to the embodiment of the present invention will now be described in detail with reference to FIG. 2.

FIG. 2 is a detailed view of a vehicle image providing apparatus according to an embodiment of the present invention.

Referring to FIG. 2, the vehicle image providing apparatus 200 according to the embodiment of the present invention includes a front image collecting unit 210, a rear image collecting unit 220, a control unit 230, a screen output unit 240, an image dividing unit 250, a feature detecting unit 260, an estimating unit 270, a determining unit 280, and a searching unit 290.

First, the forward image collecting unit 210 acquires a forward image photographed forward of the vehicle traveling on a predetermined lane, and extracts lane information of the vehicle from the forward image.

When the front photographing apparatus 110 is installed for each lane, the front image collecting unit 210 individually acquires the lane-by-lane forward images. Each front photographing apparatus 110 is set to photograph a pre-assigned lane and provides the assigned lane information when providing the forward image. Therefore, in this case, the forward image collecting unit 210 can extract the lane information upon receiving the forward image.

When the front photographing apparatus 110 is installed to photograph a plurality of lanes, the front image collecting unit 210 acquires a photographed forward image including a plurality of lanes. In this case, the forward image collection unit 210 may detect at least one lane in the forward image and extract the lane information of the vehicle based on the detected lane.

By analyzing the lanes in the image, the number of lanes can be determined; since the lanes appear in reverse order in the forward image, the lane information of the vehicle can be obtained by reversing that order. For example, the vehicle image providing apparatus 200 can confirm the lane information of the vehicle by taking the rightmost lane in the forward image as the first lane and numbering the lanes sequentially toward the left.

The backward image collecting unit 220 acquires a backward image of the corresponding traveling vehicle. The backward image is an image of the rear of the vehicle photographed after a set time from the photographing time of the forward image, and is captured so as to include a plurality of lanes.

Here, the plural lanes appearing in the rear image are displayed in their actual order. Therefore, in the case of the backward image, the lane information of the vehicle can be confirmed by taking the leftmost lane in the image as the first lane and numbering the lanes sequentially toward the right.
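
As a minimal illustration of these two numbering conventions (the function names and the assumption that positions are counted from the left edge of each image are ours, not the patent's), the mapping could be sketched as follows:

```python
def lane_number_from_forward_image(position_from_left: int, total_lanes: int) -> int:
    """In the forward image the lanes appear mirrored, so the rightmost
    position in the image corresponds to lane 1 and numbering increases
    toward the left."""
    return total_lanes - position_from_left

def lane_number_from_rear_image(position_from_left: int) -> int:
    """In the rear image the lanes appear in their actual order, so the
    leftmost position in the image corresponds to lane 1."""
    return position_from_left + 1

# Example: on a 3-lane road, the leftmost vehicle in the forward image is
# actually driving in lane 3, while in the rear image that position is lane 1.
assert lane_number_from_forward_image(0, 3) == 3
assert lane_number_from_rear_image(0) == 1
```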

The control unit 230 arranges the forward image so as to match the lane of the rear image, using the lane information of the vehicle extracted by the forward image collecting unit 210.

Here, in the case where the front photographing apparatuses 110 are installed for each lane and the front images are obtained for a plurality of lanes, the respective front images may be simply arranged in the order of the lanes. In the case where the forward image is an image including a plurality of lanes, the divided images generated by the lanes may be arranged in order of the lanes.

When a plurality of lanes are included in the forward image, the image dividing unit 250 generates a plurality of divided images by dividing the forward image by lane, and transmits the divided images to the control unit 230 together with the lane information. The control unit 230 then arranges the plurality of divided images in lane order.
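
A minimal sketch of such per-lane division, assuming the lane boundaries have already been detected as x-coordinates in the image (the boundary list and the simple column slicing are illustrative assumptions; the patent does not disclose a specific lane-detection algorithm):

```python
import numpy as np

def divide_by_lane(forward_image: np.ndarray, lane_boundaries: list[int]) -> list[np.ndarray]:
    """Split a forward image (H x W x 3 array) into one sub-image per lane.

    lane_boundaries holds the x-coordinates of the detected lane lines,
    e.g. [0, 420, 840, 1260] for a three-lane image 1260 pixels wide.
    The crops are returned left-to-right; they can then be re-ordered into
    actual lane order using the numbering sketch above.
    """
    crops = []
    for left, right in zip(lane_boundaries[:-1], lane_boundaries[1:]):
        crops.append(forward_image[:, left:right].copy())
    return crops
```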

The screen output unit 240 outputs the arranged forward image of each vehicle in correspondence with the corresponding lane of the rear image. The screen output unit 240 matches the forward images (or divided images) received from the control unit 230 to the lane order shown in the rear image, and outputs them on a template.

As described above, the screen output unit 240 includes a template for outputting predetermined information so that the administrator can check the image information quickly and easily. The template includes a GUI (Graphic User Interface) that displays predetermined images, feature information, and the like, and receives predetermined information from the administrator.

FIG. 3 is a diagram for explaining a template provided by the screen output unit of FIG. 2.

Referring to FIG. 3, a template 300 of the vehicle image providing apparatus according to the present invention includes a first setting area 310 for outputting the rear image, one or more second setting areas 320 for outputting the forward images or divided images, and one or more third setting areas (not shown) for outputting the estimated rear images.

First, the first setting area 310 is an area for outputting the rear image photographed so as to include a plurality of lanes. FIG. 3 illustrates a road having three lanes in total, in which the rear of a vehicle traveling in the third lane has been photographed.

In the first setting area 310 shown in FIG. 3, the rear image is divided into three regions, one per lane, and output in this divided form. Of course, the form shown in FIG. 3 still corresponds to a rear image including a plurality of lanes.

The second setting areas 320 and the third setting areas (not shown) substantially correspond in number to the lanes of the rear image. In the case of FIG. 3, which illustrates a three-lane road, the second setting areas 320 and the third setting areas (not shown) are arranged side by side at positions corresponding to the respective lanes of the rear image shown in the first setting area 310.

First, the second setting areas 320 output the arranged forward images for each lane. Each second setting area 320 outputs the forward image (or divided image) of the corresponding lane at the position matching that lane in the first setting area 310. In FIG. 3, the forward images corresponding to the first, second, and third lanes are output to the left second setting area 321, the center second setting area 322, and the right second setting area 323, respectively.
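
A rough sketch of how the arranged forward images could be mapped onto such per-lane screen areas; the area identifiers and the dictionary-based layout are illustrative assumptions rather than the patent's actual GUI implementation:

```python
# Hypothetical identifiers for the second setting areas of FIG. 3.
SECOND_SETTING_AREAS = {1: "area_321_left", 2: "area_322_center", 3: "area_323_right"}

def arrange_forward_images(forward_images_by_lane):
    """Place each lane's forward image (keyed by actual lane number) into the
    screen slot that sits under the matching lane of the rear image."""
    layout = {}
    for lane, image in forward_images_by_lane.items():
        slot = SECOND_SETTING_AREAS.get(lane)
        if slot is not None:
            layout[slot] = image  # lanes with no captured vehicle simply stay empty
    return layout
```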

FIG. 3 shows an example in which the forward image of the vehicle traveling in the third lane of the rear image output in the first setting area 310 is output to the right second setting area 323 corresponding to the same lane, the vehicle having passed this point in the third lane without changing lanes.

As shown in FIG. 3, a rear image photographed including a plurality of lanes can be output as it is in the first setting area 310 without additional image processing or in a form that is separated (broken) by a lane. This rear image requires no separate image arrangement process.

However, the forward images (or divided images) obtained for each lane are arranged to correspond to the lane order of the rear image and then output on the second setting areas 320 in that order. In the case of FIG. 3, the three forward images arranged in lane order are output to the corresponding second setting areas 321, 322, and 323, respectively.

As a result, the forward image of the vehicle traveling in the first lane at the time the forward image was taken is output to the left second setting area 321 corresponding to the first lane, the forward image of the vehicle traveling in the second lane is output to the center second setting area 322, and the forward image of the vehicle traveling in the third lane is output to the right second setting area 323.

Of course, among the forward images (or divided images) collected for each lane at the same photographing time, some may contain a captured vehicle and some may not. Therefore, there may be second setting areas 321, 322, and 323 in which no vehicle appears.

FIG. 3 shows such a case: only a vehicle traveling in the third lane exists at the time the forward image was taken, so it is output to the right second setting area 323. However, because the vehicle's left wheels partly overlap the second lane while it travels in the third lane, a part of the left side of the vehicle appears, cut off, in the center second setting area 322.

The third setting areas (not shown) may be arranged in parallel with the second setting areas 320, in a portion between the upper part of the first setting area 310 and the lower part of the second setting areas 320. Each third setting area (not shown) outputs the rear image of the vehicle estimated by the estimating unit 270 in correspondence with the matching second setting area 320.

For example, in FIG. 3 the right second setting area 323 outputs the forward image of the vehicle traveling in the third lane. In this case, the estimating unit 270 compares the feature information (vehicle type, color, etc.) of this vehicle with that of the vehicles in the rear image. As a result of the estimation, the vehicle in the third lane of the rear image becomes the estimated vehicle, and its image is output to the third setting area (not shown) on the right, corresponding to the third lane.

A third setting area (not shown) may output the image of the estimated vehicle separated from the rest of the rear image. For example, the rear image in the first setting area may be divided by lane, and only the image of the lane portion where the estimated vehicle is located may be output.

Each third setting area (not shown) may output a plurality of estimated images for a given lane-by-lane forward image, because the rear image may contain several vehicles whose feature information is similar to that of the vehicle in the forward image.

The feature detecting unit 260 processes the forward and backward images to detect the first feature information and the second feature information of the vehicle, respectively. Here, the first feature information and the second feature information include at least one of the size, color, and shape of the license plate, the vehicle number, the type of the vehicle, the color of the vehicle, and the lane of the vehicle.

The feature detecting unit 260 includes an image processing algorithm for detecting feature information of the vehicle such as the vehicle type, the vehicle color, and the vehicle number, and detects the feature information of the vehicle through this image processing.
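
A very small sketch of what such feature extraction could look like; only a crude dominant-colour estimate is shown, and the plate-reading and vehicle-type steps are left as stubs because the patent does not disclose the specific algorithms:

```python
import cv2
import numpy as np

def detect_feature_info(vehicle_image: np.ndarray) -> dict:
    """Extract a minimal feature record from a cropped vehicle image.

    Only the dominant-colour descriptor is implemented; the plate number
    would come from an ANPR/OCR engine and the vehicle type from a
    classifier, neither of which is specified by the patent.
    """
    hsv = cv2.cvtColor(vehicle_image, cv2.COLOR_BGR2HSV)
    dominant_hue = float(np.median(hsv[:, :, 0]))  # rough colour descriptor
    return {
        "color_hue": dominant_hue,
        "plate_number": None,   # stub: fill from a plate-recognition engine
        "vehicle_type": None,   # stub: fill from a vehicle-type classifier
    }
```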

In addition, the feature detecting unit 260 analyzes the optical flow of the received images of the vehicle to obtain a reference time for estimating the backward image, based on the estimated vehicle speed, the detection point in the forward image, and the distance traveled by the vehicle, and provides it to the estimating unit 270.
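
The reference time itself reduces to simple kinematics; a sketch, assuming the speed estimate comes from the optical-flow analysis and the spacing between the two camera views is known (both parameter names are assumptions):

```python
def rear_capture_delay(speed_mps: float, camera_spacing_m: float) -> float:
    """Estimated time, in seconds, between the forward shot and the moment
    the same vehicle should appear in front of the rear camera."""
    if speed_mps <= 0:
        raise ValueError("speed must be positive")
    return camera_spacing_m / speed_mps

# e.g. a vehicle doing about 60 km/h (~16.7 m/s) over a 25 m spacing
# should show up in the rear image roughly 1.5 s later.
```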

Using the feature information detected by the feature detecting unit 260, the estimating unit 270 estimates the vehicle corresponding to the forward image within the backward image and provides the backward image of the estimated vehicle to the screen output unit 240.

At this time, the estimating unit 270 compares the first feature information and the second feature information, estimates a vehicle corresponding to the forward image among the at least one vehicle photographed in the backward image, and outputs the feature information of each image through the screen output unit 240.

In addition, the estimating unit 270 may provide, through the screen output unit 240 and in chronological order, the backward images photographed within a predetermined time around the time when the forward image was photographed. At this time, the estimating unit 270 may provide a backward image corresponding to the forward image using the delay time calculated by the feature detecting unit 260.
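
One possible way to combine the time window and the feature comparison when picking candidate vehicles from the rear image; the scoring scheme, field names, and exact-match comparison are illustrative simplifications, not the patent's method:

```python
def estimate_matching_vehicles(front_features: dict, rear_candidates: list[dict],
                               front_time: float, expected_delay: float,
                               time_tolerance: float = 2.0) -> list[dict]:
    """Return rear-image vehicles that plausibly match the forward image.

    Candidates are first restricted to a window around the expected capture
    time, then ranked by how many feature fields agree. Exact equality is a
    simplification; a real system would use similarity measures.
    """
    expected_time = front_time + expected_delay
    scored = []
    for cand in rear_candidates:
        if abs(cand["capture_time"] - expected_time) > time_tolerance:
            continue
        score = sum(
            1
            for key in ("plate_number", "vehicle_type", "color_hue")
            if front_features.get(key) is not None
            and front_features.get(key) == cand.get(key)
        )
        scored.append((score, cand))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [cand for score, cand in scored if score > 0]
```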

Meanwhile, in the embodiment of the present invention, the forward image may refer to the forward image of a vehicle whose front license plate number is not recognized. According to the embodiment of the present invention, in the case of a vehicle whose number is not recognized in the forward image, at least one vehicle having features that are the same as or similar to those of the unrecognized vehicle is estimated among the vehicles in the rearward image corresponding to the forward image, and the rear image of the estimated vehicle is provided in correspondence with the forward image.

Therefore, if the backward image of a vehicle whose number could not be recognized in the forward image because of a deliberately covered or damaged license plate is estimated and provided, the vehicle number can be confirmed through the rear license plate of the vehicle, which facilitates tracking or interception of the vehicle.

In addition, when the estimation target vehicle (the unrecognized vehicle) changes lanes while passing through the predetermined point (the installation point of the front/rear photographing apparatus), the driving lane of the vehicle appears differently in the forward image and the rearward image.

In this case, since the vehicle is photographed in the already-changed lane in the rear image, if the forward image is arranged and output using only the lane information, the output position of the forward image does not match the lane position of the corresponding vehicle in the rear image.

In this case, the embodiment of the present invention checks whether the vehicle has changed lanes by comparing the first feature information and the second feature information, and if a lane change is confirmed, the forward image can be arranged and output in accordance with the changed lane.

Next, the determination unit 280 compares the first feature information and the second feature information, and determines whether or not the estimated vehicle changed lanes while passing the predetermined point.

At this time, if it is determined that the estimated vehicle has changed lanes, the screen output unit 240 outputs the forward image of the vehicle on the second setting area corresponding to the changed lane among the second setting areas 321, 322, and 323.

That is, even if the vehicle was traveling in the third lane at the time the actual forward image was taken, if it has moved to the second lane by the time the rear image is taken, the forward image of the vehicle is displayed not on the right second setting area 323 but on the center second setting area 322, corresponding to the second lane.

Of course, in this case the estimating unit 270 may estimate, through comparison of the feature information, that the vehicle in the second lane of the backward image is the vehicle corresponding to the forward image. The image of this vehicle, estimated in the second lane of the rear image, is then output on the second of the three third setting areas (not shown). With this configuration, the embodiment of the present invention can provide a robust estimation result even when the vehicle changes lanes.
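
The resulting output rule can be summarised in a small helper: when the rear-image lane of the estimated vehicle differs from the forward-image lane, the rear lane decides the display slot. The function itself is an illustrative sketch, not the patent's implementation:

```python
def output_slot_for_forward_image(front_lane, estimated_rear_lane=None):
    """Decide which second setting area should show the forward image.

    If the estimation/determination step found the vehicle in a different
    lane of the rear image, the rear lane wins; otherwise the lane from the
    forward image is used as-is.
    """
    if estimated_rear_lane is not None and estimated_rear_lane != front_lane:
        return estimated_rear_lane  # lane change detected while passing the point
    return front_lane
```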

The search unit 290 searches a DB storing the vehicle numbers detected in the forward images for the vehicle number detected in the backward image. When the vehicle number detected in the rearward image is not found in the DB, the search unit 290 classifies the information including the rearward image and its vehicle number as a number-unrecognized vehicle (a vehicle to be tracked).

That is, the vehicle number can be recognized from the rear plate in the rear image and from the front plate in the front image. If the rear plate is recognized but the front plate is not, it is likely that the front plate has been artificially covered or illegally damaged. Therefore, in this case, the vehicle image providing apparatus 200 according to the embodiment of the present invention can index and manage the information including the rear image of the vehicle and the corresponding vehicle number.
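
A minimal sketch of this lookup, assuming an SQLite table of plates read from forward images; the table and column names are hypothetical:

```python
import sqlite3

def classify_rear_detection(conn: sqlite3.Connection, rear_plate: str, rear_image_id: str) -> str:
    """Look up a plate read from the rear image in the table of plates read
    from forward images; if it is absent, flag the record as a vehicle to be
    tracked (the front plate was likely covered or damaged).
    """
    row = conn.execute(
        "SELECT 1 FROM front_plates WHERE plate = ? LIMIT 1", (rear_plate,)
    ).fetchone()
    if row is None:
        conn.execute(
            "INSERT INTO tracked_vehicles (plate, rear_image_id) VALUES (?, ?)",
            (rear_plate, rear_image_id),
        )
        conn.commit()
        return "number-unrecognized (tracked)"
    return "matched"
```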

The search unit 290 also searches for information (an image, feature information, etc.) corresponding to given feature information on the basis of input information entered by the manager or the feature information of the rear image received from the feature detecting unit 260. Here, the input information includes search conditions (search period, illegal-vehicle search, etc.), search location, and the like.

The screen output unit 240 provides a search area 330. The search area 330 passes the search conditions input by the administrator to the search unit 290 and outputs the search results received from the search unit 290.

For example, in order to detect a vehicle with a damaged license plate, the search unit 290 searches the storage unit (not shown) for a forward image using at least one of the input information from the manager and the feature information of the rear image received from the feature detecting unit 260. When no forward image is found, the search unit 290 notifies the control unit (not shown) that no forward image corresponding to the feature information of the rear image was found.

Accordingly, when the vehicle number is not recognized in the forward image because the license plate is covered or damaged, the vehicle image providing apparatus 200 can estimate the vehicle using the feature information of the rear image.

Although not shown in the drawings, the vehicle image providing apparatus 200 may further include a storage unit and a control unit. The storage unit (not shown) stores the forward image, the backward image, and the feature information of each image, and stores the information of the backward image estimated to match the forward image. The control unit (not shown) controls the data flow between the units and manages the information stored in the storage unit (not shown). In addition, the control unit (not shown) may provide a predetermined event (pop-up, sound, message, etc.) to the administrator when it receives an indexed backward image from the search unit 290.

Meanwhile, in the embodiment of the present invention, the vehicle image providing apparatus 200 estimates and provides a backward image based on a forward image, but may also provide a forward image based on a backward image.

The vehicle image providing apparatus according to the present invention thus advantageously provides the forward image and the backward image of the vehicle in a matched form using the template, and can effectively detect an illegal vehicle by using the feature information detected in the rear image.

Referring to FIGS. 4 to 5B, a vehicle image providing method of a vehicle image providing apparatus according to an embodiment of the present invention will be described.

FIG. 4 is a view illustrating a vehicle image providing method of the vehicle image providing apparatus according to the first embodiment of the present invention. FIGS. 5A and 5B are views for explaining the forward and backward images received by the vehicle image providing apparatus.

Referring to FIG. 4, the forward image collecting unit 210 and the backward image collecting unit 220 of the vehicle image providing apparatus 200 according to the embodiment of the present invention collect a forward image and a backward image from the photographing apparatus 100 (S410).

Then, the forward image collection unit 210 extracts the lane information from the received forward image (S420).

If the forward images of the respective lanes are individually collected through a plurality of front photographing apparatuses 110, the forward image collecting unit 210 uses the lane information that each front photographing apparatus 110 provides together with its forward image.

When a forward image including a plurality of lanes is collected through the front photographing apparatus 110, the forward image collecting unit 210 processes the forward image, recognizes the lanes, and extracts the lane information of the vehicle based on the recognized lanes.

Next, the control unit 230 arranges the forward images in order of the lanes using the lane information of the vehicle (S430).

In the case of a forward image including a plurality of lanes, the image dividing unit 250 generates a plurality of divided images by dividing the forward image for each lane and transmits the generated divided images to the control unit 230, which arranges them in lane order.

FIGS. 5A and 5B show a front image and a rear image, each photographed so as to include a plurality of lanes. Referring to FIG. 5A, the image dividing unit 250 can detect the lanes in the forward image and divide it starting from the rightmost lane. That is, the image dividing unit 250 can set virtual lane ranges using the lane markings in the forward image and number the lanes sequentially from the rightmost lane toward the left, thereby dividing the image by the lane of each vehicle.

On the other hand, although the lane division is performed on the forward image in the embodiment of the present invention, the rear image may also be divided by lane. In this case, as shown in FIG. 5B, the leftmost lane becomes the first lane in the rear image, opposite to the lane numbering of the front image, so the lanes of the vehicles are numbered one by one toward the right.

Then, the screen output unit 240 outputs the rear image together with the arranged forward images (or divided images, if the image was divided), arranging the forward images on the template in correspondence with the lanes of the rear image (S440). That is, since the backward image corresponds to the forward image, the forward image (or the divided image) is arranged to correspond to the lane of the backward image and output through the screen output unit 240.

Steps S410 to S440 described above output a forward image at the position corresponding to its lane in the rear image after dividing the forward image by lane. In this case, since the forward and backward images are matched and provided based only on the lane information, no process of analyzing the characteristics of the vehicles in the images is required. Nevertheless, by effectively presenting the front and rear images of the same lane together, the ability to discriminate vehicles can be greatly increased.

In addition, when a vehicle whose vehicle number is not recognized as a result of image processing of a forward image is found, it is necessary to estimate and provide a rear image of the vehicle corresponding to the vehicle in the rear image captured at a predetermined time difference. That is, the vehicle image providing apparatus 200 according to the embodiment of the present invention can recognize the number through a rear plate displayed on the rear image in the case of a vehicle whose vehicle number is not recognized in front.

Hereinafter, an additional process of estimating a corresponding vehicle among the vehicles in the backward image using the feature information of the vehicle between the forward image and the backward image will be described with reference to FIG.

FIG. 6 is a view illustrating a vehicle image providing method of a vehicle image providing apparatus according to a second embodiment of the present invention.

Referring to FIG. 6, steps S610 to S640 correspond to steps S410 to S440 described above with reference to FIG. 4, so a duplicate description is omitted.

After step S640, the feature detecting unit 260 analyzes the forward and backward images using the stored image processing algorithm to detect the feature information, and provides the detected feature information to the estimating unit 270 (S650).

Then, the estimating unit 270 compares the feature information between the two images and estimates the vehicle included in the forward image among the vehicles included in the rear image (S660).

The estimating unit 270 compares at least one of the vehicle color, the vehicle number, and the vehicle type in the forward image with the characteristics of the respective vehicles photographed in the rear image, and estimates and provides the corresponding vehicle.

For example, in the case of a vehicle whose front license plate is only partially covered, some of the characters of the vehicle number may still be recognized. In that case the estimating unit 270 may estimate the corresponding vehicle using only the partially recognized vehicle number among the feature information. Further, in the case of a vehicle whose license plate is entirely covered, a vehicle in the rear image for which at least one item of information such as vehicle type and color matches can be provided as the estimated vehicle. Of course, if there are several such vehicles, they can all be provided as candidates for the estimated vehicle.
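
One simple way to express such partial-number matching, assuming unreadable characters are marked with a wildcard; the wildcard convention is an assumption, not part of the patent:

```python
def partial_plate_match(partial_front: str, full_rear: str) -> bool:
    """Check whether the characters still readable on a partially covered
    front plate are consistent with a plate read from the rear.

    '?' marks an unreadable character in the front reading; every other
    position must agree character by character.
    """
    if len(partial_front) != len(full_rear):
        return False
    return all(f == "?" or f == r for f, r in zip(partial_front, full_rear))

# e.g. a front reading of "12?4567" is consistent with a rear reading of "1234567"
assert partial_plate_match("12?4567", "1234567")
assert not partial_plate_match("12?4567", "1294568")
```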

Thereafter, the screen output unit 240 further outputs the estimated rearward image of the vehicle on the template corresponding to the forward image (S670).

For example, when estimating the backward image using the vehicle color or the vehicle type among the feature information, the estimating unit 270 may provide, as the estimated vehicle, a vehicle whose color or model matches among the vehicles in the backward images captured within a set time from the time the forward image was captured.

When estimating the backward image using the lane among the feature information, the estimating unit 270 provides the backward images of the vehicles photographed in the corresponding lane within the set time from the time the forward image was photographed. That is, the estimating unit 270 treats as estimated images the rear images of all vehicles photographed in that lane within the set time after the vehicle was captured in the forward image, and outputs them on the template.

Of course, the estimator 270 may estimate the backward image more accurately using not only the difference of the feature information but also other information, and provide the backward image.

As described above, the estimating unit 270 according to the present invention may set the feature information used for the backward image estimation differently according to the administrator's settings, the installation environment (number of lanes, road conditions, etc.), and so on.

According to the vehicle image providing apparatus and method of the present invention as described above, the forward and backward images are matched on the basis of the feature information of each image and efficiently provided to the manager, and a vehicle whose number cannot be recognized because of a deliberately covered or damaged license plate can be effectively searched for or estimated.

While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. Accordingly, the true scope of the present invention should be determined by the technical idea of the appended claims.

100: photographing apparatus 110: front photographing apparatus
120: rear photographing apparatus 200: vehicle image providing apparatus
210: forward image collection unit 220: rear image collection unit
230: Control section 240: Screen output section
250: image divider 260: feature detector
270: Estimation unit 280: Judgment unit
290: search unit

Claims (18)

An apparatus for providing an image of a vehicle photographed at a predetermined point on a road,
A first image collecting unit for acquiring a first image of the vehicle photographed so as to include a plurality of lanes and extracting lane information from the first image;
An image divider dividing the first image into the plurality of lanes;
A second image collecting unit for acquiring a second image of the vehicle photographed including a plurality of lanes,
A controller for arranging the divided first images to be matched with the lanes of the second image using the lane information,
And a screen output unit outputting the arranged first images corresponding to the lanes of the second image, respectively,
Wherein each of the first image and the second image is an image taken of the front or the rear of the vehicle, and the two images are taken in different directions.
The apparatus according to claim 1,
Wherein the screen output unit comprises:
A first setting area for outputting the second image,
And a plurality of second setting areas for respectively arranging and outputting the arranged first images at positions corresponding to lanes included in the first setting area.
delete
delete
The apparatus of claim 2,
A feature detector for detecting the first feature information and the second feature information of the vehicle by image processing the first image and the second image,
And an estimating unit that compares the first feature information and the second feature information to estimate a vehicle corresponding to the first image among at least one vehicle photographed in the second image.
The apparatus of claim 5,
Wherein the screen output unit comprises:
And a third setting area arranged in parallel with the second setting area and outputting the second image of the estimated vehicle in correspondence with the second setting area.
The apparatus of claim 5,
Further comprising a determination unit that compares the first feature information and the second feature information and determines whether or not the estimated vehicle is changed based on passage of the predetermined point,
Wherein the screen output unit comprises:
And outputs the first image of the vehicle on a second setting area corresponding to the changed lane when it is determined that the estimated vehicle has changed lanes.
The apparatus of claim 5,
Wherein the first feature information and the second feature information include at least one of a vehicle number, a type of the vehicle, a color of the vehicle, and a lane of the vehicle.
The apparatus according to claim 1, further comprising:
A storage unit for storing feature information detected from the first image, and
A search unit for searching the storage unit for a first image corresponding to the feature information detected from the second image and classifying the first image as an unrecognized-vehicle image if it is not found.
A method of providing an image of a vehicle photographed at a predetermined point on a road,
Obtaining a first image for the vehicle photographed including a plurality of lanes and a second image for the vehicle photographed including a plurality of lanes,
Extracting lane information from the first image and dividing the first image by the plurality of lanes;
Arranging the divided first images to be matched with the lanes of the second image using the lane information, and
And outputting the arranged first images corresponding to the lanes of the second image, respectively,
Wherein each of the first image and the second image is an image taken of the front or the rear of the vehicle, and the two images are taken in different directions.
The method of claim 10,
Wherein the outputting step comprises:
A first setting area for outputting the second image,
And outputting a plurality of second setting areas for respectively arranging and outputting the arranged first images at positions corresponding to lanes included in the first setting area.
delete
delete
The method of claim 11,
Detecting the first feature information and the second feature information of the vehicle by image processing the first image and the second image, respectively, and
And comparing the first feature information and the second feature information to estimate a vehicle corresponding to the first image among at least one vehicle photographed in the second image.
The method of claim 14,
Wherein the outputting step comprises:
And a third setting area arranged in parallel with the second setting area and outputting a second image of the estimated vehicle in correspondence with the second setting area.
The method of claim 14,
Further comprising the step of comparing the first feature information and the second feature information and determining whether to change the estimated vehicle based on passage of the predetermined point,
Wherein the outputting step comprises:
And outputs the first image of the vehicle on a second setting area corresponding to the changed lane when it is determined that the estimated vehicle has changed lanes.
The method of claim 14,
Wherein the first feature information and the second feature information include at least one of a vehicle number, a type of the vehicle, a color of the vehicle, and a lane of the vehicle.
The method of claim 10,
Storing feature information detected from the first image, and
Further comprising the step of searching the stored feature information for a first image corresponding to the feature information detected from the second image and classifying the first image as an unrecognized-vehicle image if it is not found.
KR1020160043420A 2016-04-08 2016-04-08 Apparatus for providing image of vehicle and method thereof KR101665961B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160043420A KR101665961B1 (en) 2016-04-08 2016-04-08 Apparatus for providing image of vehicle and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160043420A KR101665961B1 (en) 2016-04-08 2016-04-08 Apparatus for providing image of vehicle and method thereof

Publications (1)

Publication Number Publication Date
KR101665961B1 true KR101665961B1 (en) 2016-10-14

Family

ID=57157187

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160043420A KR101665961B1 (en) 2016-04-08 2016-04-08 Apparatus for providing image of vehicle and method thereof

Country Status (1)

Country Link
KR (1) KR101665961B1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004056497A (en) * 2002-07-19 2004-02-19 Sumitomo Electric Ind Ltd Image processing apparatus and method therefor, and vehicle supervision system
JP2013196365A (en) * 2012-03-19 2013-09-30 Fujitsu Ltd Extraction device, extraction program and extraction method
JP2014130435A (en) * 2012-12-28 2014-07-10 Fujitsu Ltd Information processing device and method
KR20160000990A (en) * 2014-06-25 2016-01-06 서울여자대학교 산학협력단 Vehicle photographing apparatus based multilane and control method thereof

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106504554A (en) * 2016-09-30 2017-03-15 乐视控股(北京)有限公司 The method and device of identification traffic light status information
KR101870229B1 (en) * 2018-02-12 2018-06-22 주식회사 사라다 System and method for determinig lane road position of vehicle
KR20210064492A (en) * 2019-11-25 2021-06-03 주식회사 딥비전 License Plate Recognition Method and Apparatus for roads
KR102306789B1 (en) * 2019-11-25 2021-10-01 주식회사 딥비전 License Plate Recognition Method and Apparatus for roads
KR20210151415A (en) * 2020-06-05 2021-12-14 한국건설기술연구원 Road accident detection system and method using a lamppost-type structure
KR102435281B1 (en) * 2020-06-05 2022-08-23 한국건설기술연구원 Road accident detection system and method using a lamppost-type structure
KR102429312B1 (en) * 2021-08-09 2022-08-04 주식회사 영국전자 Enforcement device and method thereof
KR102510765B1 (en) * 2022-01-04 2023-03-17 주식회사 아이에스앤로드테크 System and method for controlling two-wheeled vehicles.
KR102558936B1 (en) * 2022-12-27 2023-07-25 주식회사 아이에스앤로드테크 System and method for controlling two-wheeled vehicles.

Similar Documents

Publication Publication Date Title
KR101665961B1 (en) Apparatus for providing image of vehicle and method thereof
CN109784162B (en) Pedestrian behavior recognition and trajectory tracking method
KR101647370B1 (en) road traffic information management system for g using camera and radar
KR101758576B1 (en) Method and apparatus for detecting object with radar and camera
KR101971878B1 (en) Video surveillance system and method using deep-learning based car number recognition technology in multi-lane environment
KR101935399B1 (en) Wide Area Multi-Object Monitoring System Based on Deep Neural Network Algorithm
CN105702048B (en) Highway front truck illegal road occupation identifying system based on automobile data recorder and method
KR102031503B1 (en) Method and system for detecting multi-object
KR101326943B1 (en) Overtaking vehicle warning system and overtaking vehicle warning method
JP6794243B2 (en) Object detector
KR101742490B1 (en) System for inspecting vehicle in violation by intervention and the method thereof
KR102491091B1 (en) Method for producing collection video clip and, integrated unmanned traffic control system for two/four wheeled vehicle therewith
KR102001002B1 (en) Method and system for recognzing license plate based on deep learning
US11025865B1 (en) Contextual visual dataspaces
KR20170124299A (en) A method and apparatus of assisting parking by creating virtual parking lines
KR102282800B1 (en) Method for trackig multi target employing ridar and camera
KR102197449B1 (en) Enforcement system for enforcement a certain section in the section enforcement point
KR102332517B1 (en) Image surveilance control apparatus
KR101542564B1 (en) system for managing traffic based on zone classified architecture
KR102018123B1 (en) Vehicle sensing apparatus using laser scanning
KR20120067890A (en) Apparatus for video analysis and method thereof
KR101210615B1 (en) Regulation system of u-turn violation vehicle
KR101719799B1 (en) CCTV monitoring system
KR20150018990A (en) Apparatus and method for guiding caution information of driving
KR102234768B1 (en) Multifunctional vehicle detecting system and thereof method

Legal Events

Date Code Title Description
E90F Notification of reason for final refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant