WO2017116134A1 - Radar and image-fusion vehicle enforcement system - Google Patents

Radar and image-fusion vehicle enforcement system

Info

Publication number
WO2017116134A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image
speed
radar
cropping
Prior art date
Application number
PCT/KR2016/015387
Other languages
French (fr)
Korean (ko)
Inventor
최광호
이상만
심광호
류승기
조영태
Original Assignee
건아정보기술 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020160173720A external-priority patent/KR101925293B1/en
Application filed by 건아정보기술 주식회사
Publication of WO2017116134A1 publication Critical patent/WO2017116134A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02 Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/50 Systems of measurement based on relative movement of target
    • G01S13/58 Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/052 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene

Definitions

  • The present invention relates to a radar and image-fusion vehicle enforcement system.
  • More specifically, it relates to a radar and image-fusion vehicle enforcement system that uses a radar together with images captured by a single camera for vehicles on a multi-lane road.
  • In current domestic ITS systems, vehicle information and speed are measured by embedding a loop detector in the road, by using a laser sensor, or by acquiring information from camera images alone.
  • A loop detector, however, requires cutting into the road surface for installation, and the construction work can obstruct traffic flow.
  • Laser detectors are strongly affected by weather (snow, rain, fog, dust, etc.) and have a narrow detection width.
  • Image detectors are likewise heavily affected by weather, and their detection rate drops sharply at night.
  • Radar detectors, by contrast, are the least affected by weather among these detectors and, being non-contact, require no road destruction.
  • A radar detector outputs the speed, distance, and angle of a vehicle, from which the vehicle's exact position and information can be extracted.
  • However, since radar relies on radio waves, errors can occur depending on the radio environment, so methods that use radar and images simultaneously have been proposed.
  • The present invention provides a system that can accurately calculate the speed of vehicles on a multi-lane road using a radar and images from a single camera.
  • The present invention also provides a system that reduces the data processing load when calculating the speed of vehicles on a multi-lane road using a radar and a single camera.
  • An embodiment provides a radar and image-fusion vehicle enforcement system comprising: radar transmission and reception means for emitting a radiation wave toward a multi-lane road and receiving the waves reflected from vehicles traveling on the road; image capturing means, triggered by the presence of a target vehicle sensed as speeding by the radar transmission and reception means, for photographing a full-frame image including the multi-lane road and the target vehicle; and a control unit including a radar speed calculator that analyzes the transmitted and received radar signals to generate driving information for the target vehicle, an image processing unit that analyzes the driving information to locate the target vehicle in the full-frame image of the multi-lane road, crops the area in which the vehicle is located and stores it as a first cropping image, then recognizes the license plate of the target vehicle in the first cropping image and crops and stores the plate area as a second cropping image, and a number recognition unit that recognizes the number on the vehicle license plate in the second cropping image.
  • The control unit may further include an image speed calculator that receives images captured at a predetermined time interval from the image processing unit and calculates the vehicle speed.
  • The image speed calculator converts the first coordinate value occupied by the target vehicle in the first-cropping-image coordinate system into a second coordinate value in the full-frame-image coordinate system, and calculates the vehicle speed from the distance the second coordinate value moves over time.
  • With the traveling speed of the target vehicle measured by the radar transmission and reception means defined as a first vehicle speed, and the number of pixels the license plate moves between images captured at the predetermined time interval (using the plate position determined by the image processing unit) counted and converted into a second vehicle speed, the system may include an enforcement determination unit that judges the vehicle subject to enforcement when the first vehicle speed, the second vehicle speed, and the second vehicle speed after a predetermined time has elapsed all exceed the threshold speed.
  • The control unit defines the first vehicle speed as the average speed when the deviation between the first and second vehicle speeds is below a first threshold, defines the average value of the first and second vehicle speeds as the average speed when the deviation falls within the range of a second threshold, and discards the speed of the target vehicle when the deviation exceeds the range of the second threshold.
  • The image capturing means may vary the time difference between shots according to the first vehicle speed when photographing the target vehicle.
  • The embodiment emits radar waves to measure the speed of every vehicle traveling in the detection zone and photographs a specific vehicle among the multi-lane traffic with a single camera, so the number of cameras required can be reduced.
  • Because only the cropped recognition area of the captured image is read, the data processing load is reduced, which improves computation speed and reduces the amount of computation.
  • The embodiment can also calculate the speed from the recognition-area image alone, compensating for the horizontal and vertical errors that can arise from a coordinate's position in the frame and for errors related to the characteristics of the license plate, so that the speed computed from the image is accurate and its reliability improved.
  • When the speed detected by the radar and the speed obtained from image analysis do not match and their difference is not within the error range, the case is treated as an error and the speed obtained from image analysis is transmitted to the control center, thereby eliminating speed errors.
  • FIG. 1 is an overall configuration diagram of a multi-lane vehicle speed measurement system according to an embodiment of the present invention.
  • FIG. 2 is a detailed block diagram of the multi-lane vehicle speed measurement system of FIG. 1.
  • FIG. 3 is a block diagram illustrating an embodiment of the controller of FIG. 2.
  • FIG. 4 is a flowchart illustrating the speed calculation method of the controller.
  • FIG. 5 illustrates the acquisition of the recognition region by the image speed calculator.
  • FIG. 6 is a flowchart illustrating the correction of the recognition region of FIG. 4.
  • FIG. 7 illustrates obtaining the coordinates of FIG. 6.
  • FIG. 8 illustrates obtaining the inverse vehicle reference coordinates of FIG. 6.
  • FIG. 9 is a detailed flowchart illustrating the license plate correction of FIG. 6.
  • FIG. 10 illustrates the operation of extracting the license plate features of FIG. 9.
  • FIG. 11 illustrates the vehicle model analysis of FIG. 9.
  • FIG. 12 illustrates the license plate height measurement of FIG. 9.
  • Terms such as first, second, A, and B may be used to describe various components, but the components should not be limited by these terms, which serve only to distinguish one component from another.
  • For example, without departing from the scope of the present invention, a first component may be termed a second component, and similarly a second component may be termed a first component.
  • FIG. 1 is an overall configuration diagram of a multi-lane vehicle speed measurement system 10 according to an embodiment of the present invention, FIG. 2 is a detailed block diagram of the system 10, and FIG. 3 is a block diagram illustrating an embodiment of the controller 100 of FIG. 2.
  • The multi-lane vehicle speed measurement system 10 includes a radar 300 that emits radiation waves onto a multi-lane road and receives the waves reflected from vehicles 20 traveling on the road, a single camera 200 that captures images of a vehicle 20 sensed by the radar 300, and a controller 100 that calculates a first vehicle speed from the radar 300 and a second vehicle speed from the camera 200, corrects and compares them, and computes the final speed of the vehicle 20.
  • The radar 300 emits the radiation wave at short intervals; when a speeding vehicle 20 is recognized as a target from the radar returns, the camera 200 is triggered and takes two shots centered on the lane in which the target vehicle 20 is located among the multiple lanes.
  • The controller 100 includes a radar speed calculator 110 that receives and reads the waves reflected back to the radar 300 to calculate the first vehicle speed of the vehicle 20, an image speed calculator 120 that receives images from the camera 200 and processes and corrects them to calculate the second vehicle speed of the vehicle 20, and a final speed calculator 130 that receives the first and second vehicle speeds and compares them to obtain the final speed.
  • In detail, the radar speed calculator 110 obtains the emission time of the radiation wave and the reception time of the reflected wave from the radar 300, calculates the distance between the radar 300 and the vehicle 20, extracts speed information from the distance information, and defines it as the first vehicle speed; when the first vehicle speed exceeds the threshold speed (the enforcement speed), the vehicle is recognized as a speeding vehicle 20.
  • Once the speeding vehicle 20 is defined as the target vehicle 20, the camera 200 is triggered and captures a full-frame image including the target vehicle 20 twice.
  • The controller 100 may further include an image processing unit (not shown) that analyzes the driving information of the target vehicle 20 to locate it in the full-frame image of the multi-lane road, crops the area in which the vehicle is located and stores it as a first cropping image, recognizes the license plate of the target vehicle 20 in the first cropping image, and crops and stores the plate area as a second cropping image.
  • The image processing unit crops the area in which the target vehicle 20 is located from the two images, obtaining two first cropping images as regions of interest (ROI).
  • The image processing unit recognizes the vehicle license plate in the two first cropping images and determines the number and position of the plate.
  • The plate position may be defined by a reference point within the license plate of the target vehicle 20; for example, the reference point may be the center of the plate or, for a rectangular plate, one of its four corners.
  • The controller 100 may further include an enforcement determination unit (not shown): with the traveling speed of the target vehicle measured by the radar transmission and reception means defined as the first vehicle speed, and the number of pixels the license plate moves between images captured at predetermined time intervals (using the plate position determined by the image processing unit) counted and converted into a second vehicle speed, the vehicle is judged subject to enforcement when the first vehicle speed, the second vehicle speed, and the second vehicle speed after a predetermined time all exceed the threshold speed.
  • The controller may further include a number recognition unit (not shown) that recognizes the number on the vehicle license plate in the second cropping image.
  • The image speed calculator 120 converts the first coordinate value occupied by the target vehicle 20 in the first-cropping-image coordinate system into a second coordinate value in the full-frame-image coordinate system, and calculates the vehicle speed from the distance the second coordinate value moves over time.
  • That is, the image speed calculator 120 may include an image corrector 125 that corrects the coordinates according to the position of the recognition area within the full image and further corrects them according to the features of the license plate serving as the reference point.
  • The image speed calculator 120 may calculate the second vehicle speed from the corrected second coordinate values provided by the image corrector 125.
  • The second vehicle speed may be calculated as the distance between the two corrected second coordinate values divided by the time difference between the two images.
  • The final speed calculator 130 compares the first vehicle speed with the second vehicle speed, determines whether the two speeds match and whether the first vehicle speed from the radar 300 and the second vehicle speed from the camera 200 fall within the error range, and calculates the final speed.
  • The controller 100 includes a communication unit 150 for transmitting the calculated final speed and vehicle information to the control center 400 in real time, and a memory 140 that stores the captured images and the vehicle number, average speed, date, and time data.
  • The control center 400 includes a server that receives the vehicle information and final speed information on the speeding vehicle 20 from the multi-lane vehicle speed detection system 10 and displays and stores the vehicle number, average speed, date, and time data in a defined area of the vehicle 20 image screen.
  • FIG. 4 is a flowchart illustrating the speed calculation method of the controller 100, and FIG. 5 illustrates the acquisition of the recognition region by the image speed calculator 120.
  • First, the radar 300 periodically receives the returned carrier wave and computes the first vehicle speed (s110). When the first vehicle speed of a vehicle 20 calculated by the radar speed calculator 110 is determined to exceed the threshold speed (the enforcement speed) (s120), that vehicle 20 is defined as the target vehicle 20.
  • The threshold speed may be the enforcement speed of the multi-lane road concerned.
  • When the target vehicle 20 is defined, the lane in which it is located is recognized (s130), and the single camera 200 covering the multi-lane road is triggered to photograph the scene twice with a time difference so that the target vehicle 20 is included (s140).
  • For example, the two images may be captured with a time difference of 80 msec, as shown in FIGS. 5A and 5B.
  • The time difference may vary with the first vehicle speed: 160 msec at 60 km/h or less, 120 msec at 60-80 km/h, 80 msec at 80-100 km/h, and 40 msec at 100 km/h or more.
  • The image processing unit obtains the vehicle number information of the target vehicle 20 from the two images and crops the area in which the target vehicle 20 is located as the recognition area (s150).
  • The area defining the position of the target vehicle 20 is obtained from the recognition area of the radar 300 distance information indicated by the red line, and the recognition area containing it is cut out and defined as the signal processing target.
  • The size of the recognition area may be adjusted according to the size of the area defining the target vehicle 20's position and the size of the vehicle 20 in the image.
  • The image corrector 125 of the image speed calculator 120 performs vertical and horizontal correction according to the position of the recognition area within the full image, performs correction with respect to the license plate, and generates corrected coordinates for the reference point (s160).
  • The vertical and horizontal corrections compensate for the perspective error introduced when three-dimensional space is projected onto a two-dimensional image: a single pixel represents a different real-world distance depending on whether it lies in the upper or lower part of the image, and likewise on its left-right position.
  • The image speed calculator 120 calculates the second vehicle speed of the target vehicle 20 from the image based on the corrected coordinate values (s170).
  • The final speed calculator 130 then compares the first vehicle speed with the second vehicle speed to compute the final speed (s180).
  • The final speed calculator 130 calculates the speed deviation to define the final speed.
  • The final speed calculator defines the first vehicle speed as the speed of the target vehicle when the deviation between the first and second vehicle speeds is below a first threshold, defines the average value of the first and second vehicle speeds as the speed of the target vehicle when the deviation lies between the first and second thresholds, and discards the speed of the target vehicle when the deviation exceeds the second threshold.
  • For example, when the deviation is below 3%, the final speed of the target vehicle is the first vehicle speed; when the deviation is 3% to 7%, the final speed is the average of the first and second vehicle speeds; and when the deviation exceeds 7%, the final speed of the target vehicle is treated as an invalid value that cannot be calculated.
  • By measuring the speed of vehicles 20 across multiple lanes with one radar 300 and one camera 200, the number of cameras 200 can be reduced, and by cropping the recognition area to be processed from the captured image, the data throughput is reduced and processing is faster.
  • The multi-lane vehicle speed measurement system 10 of the present invention can also compute a more reliable speed by correcting the errors that may arise when processing the data of the cropped recognition area.
  • FIG. 6 is a flowchart illustrating the correction of the recognition region of FIG. 4, FIG. 7 illustrates obtaining the coordinates of FIG. 6, and FIG. 8 illustrates obtaining the inverse vehicle reference coordinates of FIG. 6.
  • The image corrector 125 obtains the coordinate values of the region corresponding to the first cropping region in the full-frame-image coordinate system (s161).
  • These coordinate values of the first cropping region in the full-frame-image coordinate system are the basic data for correcting the distance error that depends on where the first cropping region is located within the full-frame image.
  • Next, the vehicle reference coordinate value of the target vehicle 20 is obtained in the first-cropping-region coordinate system (s163).
  • The vehicle reference coordinate value may be a coordinate value of the license plate and may be one point or several points on the plate; for example, as shown in FIG. 7, the coordinates of the upper-left and lower-right corners may be taken as the vehicle reference coordinates.
  • The vehicle reference coordinate value obtained in the first cropping region is then converted into a coordinate value in the full-frame-image coordinate system (s167).
  • For example, the y-axis coordinate difference between the lower-right corners in the two first cropping images taken with the time difference is 74 pixels, while the corresponding difference after conversion to full-frame coordinates is 124 pixels.
  • Because data processing is performed only on the coordinates of the recognition area, the computation speed increases while the accuracy of the data is preserved.
  • The vertical and horizontal corrections weight the coordinate values according to the region of the full-frame image in which they fall, correcting the moving distance of the real three-dimensional vehicle 20 as derived from the two-dimensional image.
  • Although every pixel in the image nominally represents the same distance, the real-world distance a pixel covers differs depending on whether it lies toward the top or bottom of the 2D image, and likewise toward the left or right.
  • The moving distance of the vehicle 20 can therefore be calculated from the corrected full-frame coordinate values by multiplying the reference value by a weight determined by each pixel's position.
  • After the vertical and horizontal corrections of the full-frame coordinate values, the image corrector 125 may further perform license plate correction (s169).
  • FIG. 9 is a detailed flowchart illustrating the license plate correction of FIG. 6, FIG. 10 illustrates the operation of extracting the license plate features of FIG. 9, FIG. 11 illustrates the vehicle model analysis of FIG. 9, and FIG. 12 illustrates the license plate height measurement of FIG. 9.
  • The image corrector 125 extracts the features of the license plate of the target vehicle 20 from the two captured images (s200).
  • The license plate features may include the size, color, and aspect ratio of the plate.
  • Vehicle model analysis of the target vehicle 20 is then performed from the two captured images (s210).
  • The vehicle model analysis classifies the vehicle 20 as large, medium, or small according to its size and then performs detailed classification.
  • The vehicle 20 may be classified as a passenger car, bus, truck, or other type; speed correction may be skipped for buses and performed only for the other vehicle types.
  • The license plate height d may be defined as the shortest distance from the bottom of the vehicle 20, that is, from the bottom of its tires, to a reference point on the plate, for example the lower-right corner of the plate; as shown in FIGS. 11A to 11D, the height differs by vehicle type, and as shown in FIGS. 12A and 12B, it can differ even within the same vehicle type.
  • Headlight position analysis of the vehicle 20 is then performed (s230).
  • The headlight analysis can estimate the width of the vehicle 20 from the spacing between the headlights to identify large vehicles 20, and it analyzes the position of the license plate reference point relative to the headlights.
  • License plate coordinate correction is then performed based on the previously obtained plate features, vehicle type, plate height, and headlight analysis (s240).
  • For example, the apparent size of the license plate may grow by 5 to 15% as the vehicle 20 approaches; the pixel coordinates can accordingly be corrected by weighting with the enlargement ratio, the coordinate being that of the reference point, for example a corner or the center of the plate.
  • The height d of the plate above the ground varies with the vehicle model and the placement of the plate, as shown in FIGS. 11 and 12, producing the vertical error described above.
  • The coordinates of the reference point can be corrected for this error by applying a weight according to the vehicle model and plate height.
  • The weight may be applied only when the plate height d is greater than or equal to a threshold.
  • The speed of the vehicle 20 can then be calculated from the corrected reference-point coordinates after the license plate correction, yielding data with improved reliability; a minimal illustrative sketch of this correction follows below.
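As a concrete illustration of the license plate correction described above (s200-s240), the following is a minimal sketch in Python. The weight models, the 0.6 m threshold, and the per-metre coefficient are assumptions for illustration; the patent states only that the plate appears 5 to 15% larger as the vehicle approaches and that the height weight applies above a threshold.

    # Minimal sketch of the license plate correction; values are assumed.
    def enlargement_weight(plate_width_shot1, plate_width_shot2):
        """Weight from the apparent growth of the plate between the two
        shots (e.g. a ratio of 1.05 to 1.15 for an approaching vehicle)."""
        return plate_width_shot1 / plate_width_shot2

    def height_weight(plate_height_m, threshold_m=0.6, per_metre=0.08):
        """Vertical-error weight by plate height d above the ground,
        applied only when d reaches the threshold (assumed values)."""
        if plate_height_m < threshold_m:
            return 1.0
        return 1.0 + per_metre * (plate_height_m - threshold_m)

    def corrected_reference_point(point, width1, width2, plate_height_m,
                                  skip_correction=False):
        """Corrected plate reference point; skip_correction mirrors the
        rule that some vehicle types (e.g. buses) bypass the correction."""
        if skip_correction:
            return point
        w = enlargement_weight(width1, width2) * height_weight(plate_height_m)
        return (point[0] * w, point[1] * w)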

Abstract

In an embodiment, a multilane vehicle speed measurement system is provided, comprising: a radar that emits radio waves at a multilane road and receives the waves reflected from vehicles traveling on the road; a camera that is triggered by the presence of a speeding vehicle detected by the radar and captures images of that target vehicle, with a time difference, centered on the lane in which the target vehicle is located; and a control unit that calculates a first vehicle speed from the radar information on the target vehicle, calculates a second vehicle speed from the camera information, and compares the two to compute the target vehicle's average speed. Because a single camera photographs a specific vehicle among the vehicles on the multiple lanes, the number of cameras required is reduced. In addition, when the speed detected by the radar and the speed obtained from image analysis do not match and their difference is not within the error range, the case is treated as an error and the speed obtained from image analysis is transmitted to a control center, thereby eliminating speed errors.

Description

Radar and Image-Fusion Vehicle Enforcement System
The present invention relates to a radar and image-fusion vehicle enforcement system.
More specifically, it relates to a radar and image-fusion vehicle enforcement system that uses a radar together with images captured by a single camera for vehicles on a multi-lane road.
Because traffic accidents caused by speeding can escalate into accidents fatal to drivers, a variety of speed detection devices using advanced equipment, including radar, have been studied to prevent speeding or to detect speeding vehicles. Such devices are installed throughout road networks, and because they can detect speeding vehicles unattended, their field of use continues to expand.
Currently, domestic ITS systems measure vehicle information and speed by embedding a loop detector in the road, by using a laser sensor, or by acquiring information from camera images alone.
A loop detector, however, requires cutting into the road surface for installation, and the construction work can obstruct traffic flow.
In addition, it is difficult to obtain proper data for vehicles that change lanes.
Laser detectors are strongly affected by weather (snow, rain, fog, dust, etc.) and have a narrow detection width.
Image detectors are likewise heavily affected by weather, and their detection rate drops sharply at night.
Radar detectors, by contrast, are the least affected by weather among these detectors and, being non-contact, require no road destruction.
Because radar covers a wide area, it can also detect lane-changing vehicles, which is a weakness of loop and laser detectors.
A radar detector outputs the speed, distance, and angle of a vehicle, from which the vehicle's exact position and information can be extracted.
However, since radar relies on radio waves, errors can occur depending on the radio environment, so methods that use radar and images simultaneously have been proposed.
On a multi-lane road, however, such methods require a dedicated camera for each lane, and the images from all of the cameras must be read to capture the speed of a specific vehicle detected by the radar, making excessive use of memory and processing unavoidable.
In addition, an error can arise between the position in the image and the actual position of the vehicle, which must be corrected.
The present invention provides a system that can accurately calculate the speed of vehicles on a multi-lane road using a radar and images from a single camera.
The present invention also provides a system that reduces the data processing load when calculating the speed of vehicles on a multi-lane road using a radar and a single camera.
An embodiment provides a radar and image-fusion vehicle enforcement system comprising: radar transmission and reception means for emitting a radiation wave toward a multi-lane road and receiving the waves reflected from vehicles traveling on the road; image capturing means, triggered by the presence of a target vehicle sensed as speeding by the radar transmission and reception means, for photographing a full-frame image including the multi-lane road and the target vehicle; and a control unit including a radar speed calculator that analyzes the transmitted and received radar signals to generate driving information for the target vehicle, an image processing unit that analyzes the driving information to locate the target vehicle in the full-frame image of the multi-lane road, crops the area in which the vehicle is located and stores it as a first cropping image, then recognizes the license plate of the target vehicle in the first cropping image and crops and stores the plate area as a second cropping image, and a number recognition unit that recognizes the number on the vehicle license plate in the second cropping image.
The system may further include an image speed calculator that receives images captured at a predetermined time interval from the image processing unit and calculates the vehicle speed.
The image speed calculator converts the first coordinate value occupied by the target vehicle in the first-cropping-image coordinate system into a second coordinate value in the full-frame-image coordinate system, and calculates the vehicle speed from the distance the second coordinate value moves over time.
With the traveling speed of the target vehicle measured by the radar transmission and reception means defined as a first vehicle speed, and the number of pixels the license plate moves between images captured at the predetermined time interval (using the plate position determined by the image processing unit) counted and converted into a second vehicle speed, the system may include an enforcement determination unit that judges the vehicle subject to enforcement when the first vehicle speed, the second vehicle speed, and the second vehicle speed after a predetermined time has elapsed all exceed the threshold speed.
The control unit defines the first vehicle speed as the average speed when the deviation between the first and second vehicle speeds is below a first threshold, defines the average value of the first and second vehicle speeds as the average speed when the deviation falls within the range of a second threshold, and discards the speed of the target vehicle when the deviation exceeds the range of the second threshold.
The image capturing means may vary the time difference between shots according to the first vehicle speed when photographing the target vehicle.
The embodiment emits radar waves to measure the speed of every vehicle traveling in the detection zone and photographs a specific vehicle among the multi-lane traffic with a single camera, so the number of cameras required can be reduced. In addition, because only the cropped recognition area of the captured image is read, the data processing load is reduced, which improves computation speed and reduces the amount of computation.
The embodiment can also calculate the speed from the recognition-area image alone, compensating for the horizontal and vertical errors that can arise from a coordinate's position in the frame and for errors related to the characteristics of the license plate, so that the speed computed from the image is accurate and its reliability improved.
In addition, when the speed detected by the radar and the speed obtained from image analysis do not match and their difference is not within the error range, the case is treated as an error and the speed obtained from image analysis is transmitted to the control center, thereby eliminating speed errors.
FIG. 1 is an overall configuration diagram of a multi-lane vehicle speed measurement system according to an embodiment of the present invention.
FIG. 2 is a detailed block diagram of the multi-lane vehicle speed measurement system of FIG. 1.
FIG. 3 is a block diagram illustrating an embodiment of the controller of FIG. 2.
FIG. 4 is a flowchart illustrating the speed calculation method of the controller.
FIG. 5 illustrates the acquisition of the recognition region by the image speed calculator.
FIG. 6 is a flowchart illustrating the correction of the recognition region of FIG. 4.
FIG. 7 illustrates obtaining the coordinates of FIG. 6.
FIG. 8 illustrates obtaining the inverse vehicle reference coordinates of FIG. 6.
FIG. 9 is a detailed flowchart illustrating the license plate correction of FIG. 6.
FIG. 10 illustrates the operation of extracting the license plate features of FIG. 9.
FIG. 11 illustrates the vehicle model analysis of FIG. 9.
FIG. 12 illustrates the license plate height measurement of FIG. 9.
As the invention allows for various changes and numerous embodiments, particular embodiments are illustrated in the drawings and described in detail in the written description. However, this is not intended to limit the present invention to the specific embodiments; it should be understood to include all modifications, equivalents, and substitutes falling within the spirit and scope of the present invention. Like reference numerals are used for like elements throughout the drawings.
Terms such as first, second, A, and B may be used to describe various components, but the components should not be limited by these terms, which serve only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be termed a second component, and similarly a second component may be termed a first component. The term "and/or" includes any combination of, or any one of, a plurality of the associated listed items.
When a component is referred to as being "connected" or "coupled" to another component, it may be directly connected or coupled to the other component, or intervening components may be present. In contrast, when a component is referred to as being "directly connected" or "directly coupled" to another component, there are no intervening components.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this application, terms such as "comprise" or "have" are intended to indicate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and do not preclude the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms defined in commonly used dictionaries should be interpreted as having meanings consistent with their meanings in the context of the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in this application.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is an overall configuration diagram of a multi-lane vehicle speed measurement system 10 according to an embodiment of the present invention, FIG. 2 is a detailed block diagram of the system 10 of FIG. 1, and FIG. 3 is a block diagram illustrating an embodiment of the controller 100 of FIG. 2.
Referring to FIG. 1, the multi-lane vehicle speed measurement system 10 according to an embodiment of the present invention includes a radar 300 that emits radiation waves onto a multi-lane road and receives the waves reflected from vehicles 20 traveling on the road, a single camera 200 that captures images of a vehicle 20 sensed by the radar 300, and a controller 100 that calculates a first vehicle speed from the radar 300 and a second vehicle speed from the camera 200, corrects and compares them, and computes the final speed of the vehicle 20.
The radar 300 emits the radiation wave at short intervals; when a speeding vehicle 20 is recognized as a target from the radar returns, the camera 200 is triggered and takes two shots centered on the lane in which the target vehicle 20 is located among the multiple lanes.
The controller 100 includes a radar speed calculator 110 that receives and reads the waves reflected back to the radar 300 to calculate the first vehicle speed of the vehicle 20, an image speed calculator 120 that receives images from the camera 200 and processes and corrects them to calculate the second vehicle speed of the vehicle 20, and a final speed calculator 130 that receives the first and second vehicle speeds and compares them to obtain the final speed.
In detail, the radar speed calculator 110 obtains the emission time of the radiation wave and the reception time of the reflected wave from the radar 300, calculates the distance between the radar 300 and the vehicle 20, extracts speed information from the distance information, and defines it as the first vehicle speed; when the first vehicle speed exceeds the threshold speed (the enforcement speed), the vehicle is recognized as a speeding vehicle 20. Once the speeding vehicle 20 is defined as the target vehicle 20, the camera 200 is triggered and captures a full-frame image including the target vehicle 20 twice.
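A minimal sketch of deriving the first vehicle speed from the emission and reception timing might look as follows. Using two successive range measurements to obtain speed is an assumption made for illustration; a real radar would typically also rely on Doppler processing.

    # Minimal sketch of the radar-side speed and trigger logic (s110-s120).
    C = 299_792_458.0  # speed of light in m/s

    def range_from_echo(t_tx_s, t_rx_s):
        """Distance to the vehicle from the round-trip time of one pulse."""
        return C * (t_rx_s - t_tx_s) / 2.0

    def first_vehicle_speed_kmh(range1_m, range2_m, pulse_interval_s):
        """Radial speed magnitude from two successive range measurements."""
        return abs(range2_m - range1_m) / pulse_interval_s * 3.6

    def is_target(speed_kmh, enforcement_speed_kmh):
        """The camera is triggered only when the first vehicle speed
        exceeds the threshold (enforcement) speed."""
        return speed_kmh > enforcement_speed_kmh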
In addition, the controller 100 may further include an image processing unit (not shown) that analyzes the driving information of the target vehicle 20 to locate it in the full-frame image of the multi-lane road, crops the area in which the vehicle is located and stores it as a first cropping image, recognizes the license plate of the target vehicle 20 in the first cropping image, and crops and stores the plate area as a second cropping image.
The image processing unit crops the area in which the target vehicle 20 is located from the two images, obtaining two first cropping images as regions of interest (ROI).
The image processing unit recognizes the vehicle license plate in the two first cropping images and determines the number and position of the plate. The plate position may be defined by a reference point within the license plate of the target vehicle 20; for example, the reference point may be the center of the plate or, for a rectangular plate, one of its four corners.
The controller 100 may further include an enforcement determination unit (not shown): with the traveling speed of the target vehicle measured by the radar defined as the first vehicle speed, and the number of pixels the license plate moves between images captured at predetermined time intervals (using the plate position determined by the image processing unit) counted and converted into a second vehicle speed, the vehicle is judged subject to enforcement when the first vehicle speed, the second vehicle speed, and the second vehicle speed after a predetermined time all exceed the threshold speed.
The controller may further include a number recognition unit (not shown) that recognizes the number on the vehicle license plate in the second cropping image.
The image speed calculator 120 converts the first coordinate value occupied by the target vehicle 20 in the first-cropping-image coordinate system into a second coordinate value in the full-frame-image coordinate system, and calculates the vehicle speed from the distance the second coordinate value moves over time.
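A minimal sketch of this coordinate conversion and the resulting speed computation follows. The crop origin, the metres-per-pixel scale, and the helper names are assumptions for illustration.

    # Minimal sketch of the crop-to-full-frame conversion and image speed.
    import math

    def crop_to_full_frame(point, crop_origin):
        """Map a point from the first-cropping-image coordinate system
        (first coordinate value) into the full-frame-image coordinate
        system (second coordinate value)."""
        return (point[0] + crop_origin[0], point[1] + crop_origin[1])

    def image_speed_kmh(p1, origin1, p2, origin2, dt_s, metres_per_pixel):
        """Second vehicle speed from the displacement of the converted
        coordinates between two shots taken dt_s seconds apart."""
        f1 = crop_to_full_frame(p1, origin1)
        f2 = crop_to_full_frame(p2, origin2)
        pixels = math.dist(f1, f2)
        return pixels * metres_per_pixel / dt_s * 3.6

    # With the 124-pixel full-frame displacement cited earlier in this
    # document, an 80 msec interval, and an assumed scale of 0.02 m/pixel:
    # 124 * 0.02 / 0.08 * 3.6 = 111.6 km/h.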
That is, the image speed calculator 120 may include an image corrector 125 that corrects the coordinates according to the position of the recognition area within the full image and further corrects them according to the features of the license plate serving as the reference point.
The image speed calculator 120 may calculate the second vehicle speed from the corrected second coordinate values provided by the image corrector 125.
The second vehicle speed may be calculated as the distance between the two corrected second coordinate values divided by the time difference between the two images.
The final speed calculator 130 compares the first vehicle speed with the second vehicle speed, determines whether the two speeds match and whether the first vehicle speed from the radar 300 and the second vehicle speed from the camera 200 fall within the error range, and calculates the final speed.
The controller 100 includes a communication unit 150 for transmitting the calculated final speed and vehicle information to a control center 400 in real time, and a memory 140 that stores the captured images and the vehicle number, average speed, date, and time data.
The control center 400 includes a server that receives the vehicle information and final speed information on the speeding vehicle 20 from the multi-lane vehicle speed detection system 10 and displays and stores the vehicle number, average speed, date, and time data in a defined area of the vehicle 20 image screen.
Hereinafter, the operation of the multi-lane vehicle 20 speed measurement system 10 according to the present invention will be described in detail.
FIG. 4 is a flowchart illustrating the speed calculation method of the controller 100, and FIG. 5 illustrates the acquisition of the recognition region by the image speed calculator 120.
First, the radar 300 periodically receives the returned carrier wave and computes the first vehicle speed (s110). When the first vehicle speed of a vehicle 20 calculated by the radar speed calculator 110 is determined to exceed the threshold speed (the enforcement speed) (s120), that vehicle 20 is defined as the target vehicle 20. The threshold speed may be the enforcement speed of the multi-lane road concerned.
When the target vehicle 20 is defined, the lane in which it is located is recognized (s130), and the single camera 200 covering the multi-lane road is triggered to photograph the scene twice with a time difference so that the target vehicle 20 is included (s140).
For example, the two images may be captured with a time difference of 80 msec, as shown in FIGS. 5A and 5B.
The time difference may vary with the first vehicle speed: 160 msec at 60 km/h or less, 120 msec at 60-80 km/h, 80 msec at 80-100 km/h, and 40 msec at 100 km/h or more, as sketched below.
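The speed-dependent capture interval can be expressed as a simple lookup over the values above; assigning the exact boundary speeds to the slower band is an assumption where the text is ambiguous.

    # Minimal sketch of the speed-dependent inter-shot interval.
    def capture_interval_msec(first_speed_kmh):
        """Inter-shot time difference chosen from the radar (first) speed."""
        if first_speed_kmh <= 60:
            return 160
        elif first_speed_kmh <= 80:
            return 120
        elif first_speed_kmh <= 100:
            return 80
        else:
            return 40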
The image processor obtains the license plate number of the target vehicle 20 from the two images, and crops the region in which the target vehicle 20 appears as the recognition region (s150).
That is, as shown in FIGS. 5a and 5b, the region defining the position of the target vehicle 20 is obtained from the recognition region of the radar 300 distance information, indicated by the red line, and the recognition region containing it is cut out and defined as the signal-processing target.
The size of the recognition region may be adjusted according to the size of the region defining the position of the target vehicle 20 and the size of the vehicle 20 in the image.
In the two images captured with a time gap, as in FIGS. 5a and 5b, the position of the recognition region can be seen to shift with the speed of the vehicle 20.
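A minimal sketch of the cropping step (s150) follows, assuming the radar track has already been projected into the image as a bounding box; the fixed margin ratio is an assumption, since the patent scales the region with the vehicle's apparent size.

```python
import numpy as np

def crop_recognition_region(full_frame: np.ndarray,
                            box: tuple[int, int, int, int],
                            margin: float = 0.2):
    """Cut the recognition region out of the full frame.

    `box` is (x, y, w, h), the target-vehicle region obtained from the
    radar distance information. The crop is grown by `margin` so the
    whole vehicle stays inside it in both frames. Returns the crop and
    its top-left offset in full-frame coordinates, which the later
    coordinate conversion (s167) needs.
    """
    x, y, w, h = box
    mx, my = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - mx), max(0, y - my)
    x1 = min(full_frame.shape[1], x + w + mx)
    y1 = min(full_frame.shape[0], y + h + my)
    return full_frame[y0:y1, x0:x1], (x0, y0)
```

Because only this sub-array is passed to the plate reader and the speed computation, the per-frame processing cost scales with the crop size rather than the full sensor resolution.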
The image correction unit 125 of the image speed calculator 120 performs vertical and horizontal correction according to the position of the recognition region within the full image, performs a correction related to the license plate, and generates corrected coordinates for the reference point (s160).
The vertical and horizontal corrections compensate for the error caused by the difference in perspective when three-dimensional space is converted into a two-dimensional captured image: the distance represented by a single pixel differs depending on whether it lies in the upper or lower part of the image, and a corresponding difference also arises between the left and right sides of the image.
Accordingly, by compensating for this error through vertical and horizontal correction, accurate coordinates can be obtained, free of perspective error, even when data are read only from the recognition region rather than from the entire image.
This correction is described in detail below.
The image speed calculator 120 computes the second vehicle speed of the target vehicle 20 from the images, based on the corrected coordinate values (s170).
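Given the corrected travel distance between the two frames and the trigger gap, the second vehicle speed reduces to distance over time; this one-liner is a sketch, with the unit conversion made explicit.

```python
def second_vehicle_speed_kmh(travel_distance_m: float, gap_msec: int) -> float:
    """s170: image-based speed from the corrected travel distance
    between the two frames and the trigger gap between them."""
    return (travel_distance_m / (gap_msec / 1000.0)) * 3.6  # (m/s) * 3.6 = km/h
```

For example, a corrected displacement of 2.2 m across an 80 msec gap yields 99 km/h.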
Next, the final speed calculator 130 compares the first vehicle speed with the second vehicle speed to compute the final speed (s180).
The final speed calculator 130 computes the speed deviation to determine the final speed.
The speed deviation is given by the following equation.
[Equation 1]
Speed deviation = (first vehicle speed / second vehicle speed) × 100
If the deviation between the first vehicle speed and the second vehicle speed is less than a first threshold, the final speed calculator defines the first vehicle speed as the speed of the target vehicle; if the deviation falls within the range between the first threshold and a second threshold, it defines the average of the first vehicle speed and the second vehicle speed as the speed of the target vehicle; and if the deviation exceeds the second threshold, it may discard the speed of the target vehicle.
For example, with a first threshold of 3% and a second threshold of 7%, if the deviation between the first and second vehicle speeds is less than 3%, the final speed of the target vehicle is the first vehicle speed; if the deviation is between 3% and 7%, the final speed of the target vehicle is the average of the first and second vehicle speeds; and if the deviation exceeds 7%, the final speed of the target vehicle is judged to be a spurious value that cannot be computed and is treated as an error.
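A hedged sketch of this fusion rule: Equation 1 yields a ratio in percent, and the thresholds are read here as the distance of that ratio from 100% — an interpretation, since the text does not spell out how the percentage deviation is measured.

```python
def fuse_speeds(radar_kmh: float, image_kmh: float,
                t1: float = 3.0, t2: float = 7.0):
    """s180: fuse the radar (first) and image (second) speeds.

    Returns the final speed, or None when the two measurements
    disagree too strongly and the sample is discarded as an error.
    """
    ratio = (radar_kmh / image_kmh) * 100.0   # Equation 1
    deviation = abs(ratio - 100.0)            # interpreted deviation
    if deviation < t1:
        return radar_kmh                      # close agreement: trust the radar
    if deviation <= t2:
        return (radar_kmh + image_kmh) / 2.0  # moderate gap: average the two
    return None                               # spurious: discard the sample
```

With radar_kmh=104 and image_kmh=100 the ratio is 104%, the deviation 4%, and the fused result is the 102 km/h average.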
Finally, once the final speed of the target vehicle 20 is obtained, the information on the target vehicle 20, that is, the vehicle number, time, and location, together with the final speed, is transmitted to the control center 400 in real time, and the detection operation ends.
In this way, speeding detection for vehicles 20 across multiple lanes is performed with a single radar 300 and a single camera 200, so the number of cameras 200 can be reduced, and cropping the recognition region to be processed from the captured image reduces the data throughput and improves the processing speed.
Meanwhile, the multi-lane vehicle speed measurement system 10 of the present invention corrects the errors that can arise when processing data from the cropped recognition region, so a more reliable speed can be computed.
Hereinafter, the coordinate correction for the recognition region of FIG. 4 is described in detail with reference to FIGS. 6 to 8.
FIG. 6 is a flowchart of the correction of the recognition region of FIG. 4, FIG. 7 illustrates obtaining the coordinates of FIG. 6, and FIG. 8 illustrates obtaining the inverse vehicle reference coordinates of FIG. 6.
Referring to FIG. 6, with a coordinate system defined for the full-frame image and another for the first cropping region, the image correction unit 125 obtains the coordinate values of the region that the first cropping region occupies in the full-frame image coordinate system (s161).
The coordinate values of the first cropping region in the full-frame image coordinate system are the base data for correcting the distance error that depends on where the first cropping region lies within the full-frame image.
Next, the vehicle reference coordinate values of the target vehicle 20 are obtained in the first cropping region coordinate system (s163).
The vehicle reference coordinate values may be coordinates of the license plate, and may be a single point or several points on the plate. For example, as shown in FIG. 7, the coordinates of the upper-left and lower-right corners may be taken as the vehicle reference coordinates.
Next, when the first cropping images are transformed into the full-frame image, the positions of the upper corners of the two first cropping images (t1) and (t2), captured with a time gap, shift as shown in FIG. 8 (s165).
Next, the vehicle reference coordinate values in the first cropping region are converted into, and obtained as, coordinate values in the full-frame image coordinate system (s167).
As a result, as shown in FIGS. 7 and 8, while the difference in the y-axis coordinate of the lower-right corner between the two first cropping images captured with a time gap is 74 pixels, the difference in the y-axis coordinate of the lower-right corner in the full-frame image coordinate system is 124 pixels.
That is, by reflecting the coordinates of the recognition region, data processing is performed only on the recognition region, raising the computation speed while still securing the accuracy of the data.
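The conversion of s167 amounts to adding the crop's top-left offset in the full frame to each reference point; the sketch below uses illustrative numbers chosen to reproduce the 74-pixel versus 124-pixel example above.

```python
def crop_to_fullframe(pt_crop: tuple[float, float],
                      crop_offset: tuple[int, int]) -> tuple[float, float]:
    """s167: convert a vehicle reference point (e.g. a plate corner)
    from first-cropping-image coordinates to full-frame coordinates
    by adding the crop's top-left offset in the full frame."""
    return (pt_crop[0] + crop_offset[0], pt_crop[1] + crop_offset[1])

# The crop itself follows the vehicle between the two frames, so a
# displacement measured inside the crops understates the true motion.
p_t1 = crop_to_fullframe((210.0, 88.0), (400, 300))    # frame at t1
p_t2 = crop_to_fullframe((214.0, 162.0), (402, 350))   # frame at t2
dy_in_crop = 162.0 - 88.0         # 74 px in crop coordinates
dy_fullframe = p_t2[1] - p_t1[1]  # 124 px in full-frame coordinates
```

The speed must therefore be computed from the full-frame displacement, not from the raw in-crop displacement.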
Next, vertical correction and horizontal correction are applied in the full-frame image coordinate system (s168).
The vertical and horizontal corrections are made by weighting the coordinate values according to the region of the full-frame image in which they lie, thereby correcting the travel distance of the actual three-dimensional vehicle 20 as derived from the two-dimensional image.
That is, they compensate for the error that arises from the difference in perspective when three-dimensional space is converted into a two-dimensional captured image: every pixel in the image nominally carries the same distance value, yet the actual distance a pixel represents differs depending on whether it lies toward the top or the bottom of the two-dimensional image, and likewise depending on whether it lies toward the left or the right.
Accordingly, a weight is assigned according to the position of each pixel, and the travel distance of the vehicle 20 can be computed from the full-frame coordinate values corrected by multiplying in the corresponding reference value.
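A minimal sketch of such position-dependent weighting, assuming a linear metres-per-pixel model calibrated at the top and bottom image rows; the patent states only that a positional weight is multiplied in, so the model and constants here are illustrative (a deployed system would more likely calibrate a full homography).

```python
def metres_per_pixel(y_px: float, frame_h: int,
                     scale_top: float = 0.25,
                     scale_bottom: float = 0.05) -> float:
    """Road distance spanned by one pixel at image row y_px.

    Rows near the top of the frame image road farther from the
    camera, so one pixel there spans more road. Linear interpolation
    between two calibrated scales is an assumption.
    """
    t = y_px / frame_h                     # 0.0 at top, 1.0 at bottom
    return scale_top + t * (scale_bottom - scale_top)

def travel_distance_m(y1_px: float, y2_px: float, frame_h: int) -> float:
    """Weighted travel distance between two corrected full-frame rows,
    evaluated at the midpoint of the motion."""
    scale = metres_per_pixel((y1_px + y2_px) / 2.0, frame_h)
    return abs(y2_px - y1_px) * scale
```

The result feeds directly into the distance-over-time speed computation of s170.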
Here, after performing the vertical and horizontal correction on the full-frame image coordinate values, the image correction unit 125 may additionally perform license plate correction (s169).
FIG. 9 is a detailed flowchart of the license plate correction of FIG. 6, FIG. 10 illustrates the extraction of license plate features in FIG. 9, FIG. 11 illustrates the vehicle type analysis of FIG. 9, and FIG. 12 illustrates the license plate height measurement of FIG. 9.
Referring to FIG. 9, when the license plate correction starts, the image correction unit 125 extracts the features of the license plate of the target vehicle 20 from the two captured images (s200).
The license plate features may include the size, color, and aspect ratio of the plate.
Next, vehicle type analysis of the target vehicle 20 is performed from the two captured images (s210). The vehicle type analysis classifies the vehicle 20 as large, medium, or small according to its size, and then performs a detailed type classification.
That is, the vehicle 20 is classified as a passenger car, a bus, a truck, or another vehicle type; for a bus no speed correction is performed, and the speed correction may be performed only for the other vehicle types.
Next, the license plate height d is measured (s220).
The license plate height d may be defined as the shortest distance from the bottom of the vehicle 20, that is, from the bottom of its tires, to a reference point on the license plate, for example the lower-right corner of the plate. It may differ between vehicle types, as in FIGS. 11a to 11d, and may differ even within the same vehicle type depending on where the plate is mounted on the vehicle 20, as in FIGS. 12a and 12b.
When this analysis of the license plate position is complete, headlight position analysis of the vehicle 20 is performed (s230).
The headlight analysis may estimate the width of the vehicle 20 from the spacing between the headlights to identify large vehicles 20, and may analyze the position of the license plate reference point relative to the headlights of the vehicle 20.
Next, license plate coordinate correction is performed based on the license plate features, vehicle type, license plate height, and headlight analysis obtained above (s240).
That is, comparing license plate a with license plate b in FIG. 10, the license plate may appear enlarged by 5 to 15% as the vehicle 20 approaches. The pixel coordinate correction can therefore be weighted by the enlargement ratio, where the coordinate may be a reference point, for example a corner or the center of the plate.
In addition, as shown in FIGS. 11 and 12, the height d of the license plate above the ground may vary with the vehicle type and the plate placement, giving rise to the vertical error described above.
Therefore, the coordinates of a reference point that compensates for this error can be generated by weighting according to the vehicle type and the plate height d. The weight may be applied only when the plate height d is at or above a threshold.
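Pulling the s200 through s240 steps together, the sketch below corrects the y coordinate of the plate reference point; the bus pass-through and the height threshold follow the text, but the specific gain model, threshold value, and pixel scale are illustrative assumptions.

```python
def corrected_plate_reference_y(y_px: float,
                                enlargement: float,
                                plate_height_m: float,
                                vehicle_class: str,
                                d_threshold_m: float = 0.8,
                                px_per_m_vertical: float = 40.0) -> float:
    """s240: correct the plate reference point's y coordinate.

    - Buses are passed through uncorrected, per the vehicle type step.
    - `enlargement` is the plate's apparent growth between the two
      frames (5-15% as the vehicle approaches); dividing by it removes
      the approach-induced scale change from the measured coordinate.
    - When the plate sits high above the road (d at or above the
      threshold), an offset proportional to d is subtracted so the
      reference point tracks the road position under the vehicle.
    """
    if vehicle_class == "bus":
        return y_px                          # no correction for buses
    y_corr = y_px / (1.0 + enlargement)      # enlargement-ratio weight
    if plate_height_m >= d_threshold_m:
        y_corr -= plate_height_m * px_per_m_vertical  # height-d weight
    return y_corr
```

The corrected reference coordinates then replace the raw plate coordinates in the travel-distance computation.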
By computing the speed of the vehicle 20 from the coordinates of the reference point corrected through this license plate correction, data with improved reliability can be obtained.
The preferred embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but they are merely illustrative.
Those skilled in the art will appreciate that various modifications and changes can be made to the present invention without departing from the spirit and scope of the invention as set forth in the following claims.

Claims (8)

  1. A radar and image-fusion vehicle enforcement system comprising:
    radar transmitting/receiving means for emitting a radiated wave onto a multi-lane road and receiving reflected waves from vehicles traveling on the multi-lane road;
    image capturing means, triggered by the presence of a target vehicle detected as speeding by the radar transmitting/receiving means, for capturing a full-frame image including the multi-lane road and the target vehicle; and
    a control unit including: a radar speed calculator that analyzes the transmitted and received radar signals to generate travel information for the target vehicle; an image processor that analyzes the travel information of the target vehicle to locate the target vehicle in the full-frame image of the multi-lane road, crops the region of the full-frame image of the multi-lane road in which the vehicle is located and stores it as a first cropping image, recognizes the license plate of the target vehicle from the first cropping image, and crops the region in which the plate is located and stores it as a second cropping image; and a number recognition unit that recognizes the number on the vehicle license plate in the second cropping image.
  2. The system of claim 1, further comprising an image speed calculator that receives the images captured at a predetermined time interval from the image processor and calculates the speed of the vehicle.
  3. The system of claim 2, wherein the image speed calculator converts first coordinate values occupied by the target vehicle in the first cropping image coordinate system into second coordinate values occupied in the full-frame image coordinate system, and calculates the speed of the vehicle by computing the travel distance of the second coordinate values over time.
  4. The system of claim 5, wherein the image capturing means captures images of the target vehicle with a time gap that varies according to the first vehicle speed.
  5. A radar and image-fusion vehicle enforcement method comprising:
    emitting a radiated wave onto a multi-lane road and receiving reflected waves from vehicles traveling on the multi-lane road;
    capturing, triggered by the presence of a target vehicle detected as speeding from the received reflected waves, a full-frame image including the multi-lane road and the target vehicle;
    analyzing the transmitted and received radar signals to generate travel information for the target vehicle;
    analyzing the travel information of the target vehicle to locate the target vehicle in the full-frame image of the multi-lane road, cropping the region of the full-frame image of the multi-lane road in which the vehicle is located and storing it as a first cropping image, recognizing the license plate of the target vehicle from the first cropping image, and cropping the region in which the plate is located and storing it as a second cropping image; and
    recognizing the number on the vehicle license plate in the second cropping image.
  6. The method of claim 5, further comprising receiving the images captured at a predetermined time interval and calculating the speed of the vehicle.
  7. The method of claim 6, wherein calculating the speed comprises converting first coordinate values occupied by the target vehicle in the first cropping image coordinate system into second coordinate values occupied in the full-frame image coordinate system, and calculating the speed of the vehicle by computing the travel distance of the second coordinate values over time.
  8. The method of claim 7, wherein capturing the image of the vehicle comprises capturing images of the target vehicle with a time gap that varies according to the first vehicle speed.
PCT/KR2016/015387 2015-12-30 2016-12-28 Radar and image-fusion vehicle enforcement system WO2017116134A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20150190070 2015-12-30
KR10-2015-0190070 2015-12-30
KR1020160173720A KR101925293B1 (en) 2015-12-30 2016-12-19 The vehicle detecting system by converging radar and image
KR10-2016-0173720 2016-12-19

Publications (1)

Publication Number Publication Date
WO2017116134A1 true WO2017116134A1 (en) 2017-07-06

Family

ID=59225279

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/015387 WO2017116134A1 (en) 2015-12-30 2016-12-28 Radar and image-fusion vehicle enforcement system

Country Status (1)

Country Link
WO (1) WO2017116134A1 (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040252193A1 (en) * 2003-06-12 2004-12-16 Higgins Bruce E. Automated traffic violation monitoring and reporting system with combined video and still-image data
KR20100003381A (en) * 2008-07-01 2010-01-11 서울시립대학교 산학협력단 The collector for traffic information
KR101288264B1 (en) * 2012-01-20 2013-07-26 이구형 Multi lane velocity detection system and method thereof
KR101291301B1 (en) * 2013-02-28 2013-07-30 심광호 Vehicle speed measurement system using image and radar
KR20140126188A (en) * 2013-04-22 2014-10-30 오성레이저테크 (주) Over speed enforcement apparatus using wide beam laser

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109444916A (en) * 2018-10-17 2019-03-08 上海蔚来汽车有限公司 The unmanned travelable area determining device of one kind and method
CN109444916B (en) * 2018-10-17 2023-07-04 上海蔚来汽车有限公司 Unmanned driving drivable area determining device and method
CN109886308B (en) * 2019-01-25 2023-06-23 中国汽车技术研究中心有限公司 Target level-based dual-sensor data fusion method and device
CN109886308A (en) * 2019-01-25 2019-06-14 中国汽车技术研究中心有限公司 One kind being based on the other dual sensor data fusion method of target level and device
CN110444026A (en) * 2019-08-06 2019-11-12 北京万集科技股份有限公司 The triggering grasp shoot method and system of vehicle
CN111177297A (en) * 2019-12-31 2020-05-19 信阳师范学院 Dynamic target speed calculation optimization method based on video and GIS
CN111177297B (en) * 2019-12-31 2022-09-02 信阳师范学院 Dynamic target speed calculation optimization method based on video and GIS
CN111634290A (en) * 2020-05-22 2020-09-08 华域汽车系统股份有限公司 Advanced driving assistance forward fusion system and method
CN111634290B (en) * 2020-05-22 2023-08-11 华域汽车系统股份有限公司 Advanced driving assistance forward fusion system and method
CN112419712A (en) * 2020-11-04 2021-02-26 同盾控股有限公司 Road section vehicle speed detection method and system
CN113658427A (en) * 2021-08-06 2021-11-16 深圳英飞拓智能技术有限公司 Road condition monitoring method, system and equipment based on vision and radar
CN114220285A (en) * 2021-12-14 2022-03-22 中国电信股份有限公司 Positioning and warning method and device for overspeed vehicle, electronic equipment and readable medium
CN115331457A (en) * 2022-05-17 2022-11-11 重庆交通大学 Vehicle speed management method and system
CN115331457B (en) * 2022-05-17 2024-03-29 重庆交通大学 Vehicle speed management method and system

Similar Documents

Publication Publication Date Title
WO2017116134A1 (en) Radar and image-fusion vehicle enforcement system
KR101925293B1 (en) The vehicle detecting system by converging radar and image
KR102267335B1 (en) Method for detecting a speed employing difference of distance between an object and a monitoring camera
US9886649B2 (en) Object detection device and vehicle using same
US10015394B2 (en) Camera-based speed estimation and system calibration therefor
KR101898051B1 (en) Multilane vehicle speed detecting system
KR101999993B1 (en) Automatic traffic enforcement system using radar and camera
US10699567B2 (en) Method of controlling a traffic surveillance system
KR101446546B1 (en) Display system of vehicle information based on the position
US11367349B2 (en) Method of detecting speed using difference of distance between object and monitoring camera
US10163341B2 (en) Double stereoscopic sensor
EP3432265A1 (en) Image processing device, apparatus control system, image pickup device, image processing method, and program
KR20160100788A (en) Apparatus and method for measuring speed of moving object
WO2020235734A1 (en) Method for estimating distance to and location of autonomous vehicle by using mono camera
WO2020101071A1 (en) Traffic monitoring system using lidar capable of providing notification of road obstacle and tracking vehicle
WO2022114455A1 (en) Device for correcting position signal of autonomous vehicle by using road surface image information
KR102484688B1 (en) Section control method and section cotrol system using camera and radar
WO2013022153A1 (en) Apparatus and method for detecting lane
CN117197779A (en) Track traffic foreign matter detection method, device and system based on binocular vision
JPH07244717A (en) Travel environment recognition device for vehicle
WO2020130209A1 (en) Method and apparatus for measuring vehicle speed by using image processing
KR102418344B1 (en) Traffic information analysis apparatus and method
CN112406700B (en) Blind area early warning system based on upper and lower binocular vision analysis range finding
KR102385907B1 (en) Method And Apparatus for Autonomous Vehicle Navigation System
KR102062579B1 (en) Vehicle license-plate recognition system that recognition of Vehicle license-plate damaged by shadow and light reflection through the correction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16882086; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16882086; Country of ref document: EP; Kind code of ref document: A1)