WO2015025555A1 - Traffic volume measurement device and traffic volume measurement method - Google Patents

Traffic volume measurement device and traffic volume measurement method Download PDF

Info

Publication number
WO2015025555A1
WO2015025555A1 (PCT/JP2014/059862 · JP2014059862W)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
captured image
traffic volume
traffic
unit
Prior art date
Application number
PCT/JP2014/059862
Other languages
French (fr)
Japanese (ja)
Inventor
孝光 渡辺
渡辺 孝弘
Original Assignee
沖電気工業株式会社
Priority date
Filing date
Publication date
Application filed by 沖電気工業株式会社
Publication of WO2015025555A1

Classifications

    • G08G 1/04: Traffic control systems for road vehicles; detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G06T 7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G08G 1/052: Traffic control systems for road vehicles; detecting movement of traffic to be counted or controlled, with provision for determining speed or overspeed
    • G06T 2207/30236: Indexing scheme for image analysis or image enhancement; subject of image: traffic on road, railway or crossing
    • G06T 2207/30242: Indexing scheme for image analysis or image enhancement; subject of image: counting objects in image
    • G06V 20/54: Scenes; scene-specific elements; surveillance or monitoring of activities, e.g. of traffic, e.g. cars on the road, trains or boats

Definitions

  • the present invention relates to a traffic volume measuring device and a traffic volume measuring method.
  • when the vehicle silhouette area cannot be stably extracted from the captured image by the background subtraction method, it may be difficult to accurately track the vehicle silhouette area. In such a situation, it is difficult to accurately count the number of vehicle silhouette areas that have passed through the imaging range, and it may therefore be difficult to accurately measure the traffic volume.
  • for example, when the camera is installed outdoors, the vehicle silhouette region may not be stably extracted due to changes in weather or sunlight.
  • when the vehicle on the far side is concealed by the vehicle on the near side, the silhouette areas of the near-side and far-side vehicles may overlap, so the vehicle silhouette region may not be extracted stably.
  • the present invention has been made in view of the above problems, and its object is to provide a technique that improves the accuracy of traffic volume measurement when traffic volume is measured on the basis of a captured image.
  • according to an aspect of the present invention, there is provided a traffic volume measuring device comprising: an information acquisition unit that acquires a captured image of a road plane and its imaging time; a position detection unit that detects a predetermined detection position based on a vehicle region extracted from the captured image; and a measurement unit that measures the traffic volume based on the number of voting peaks obtained for combinations of the detection position and the imaging time.
  • the measurement unit may perform a Hough transform on the combinations of the detection position and the imaging time and measure the number of voting peak points in the Hough space as the traffic volume.
  • the position detection unit may detect an edge feature from the captured image and extract the vehicle region based on the edge feature.
  • the measuring unit may specify the vehicle speed based on the position of the voting peak point.
  • the position detection unit may detect, as the detection position, coordinates on a vehicle travel axis in a real space set in accordance with an actual size.
  • the position detection unit may detect a predetermined vertical plane orthogonal to the vehicle travel axis in real space based on the vehicle region, and may detect the intersection coordinates of the vehicle travel axis and the vertical plane as the detection position.
  • the position detection unit may detect a vehicle front surface or a vehicle back surface as the predetermined vertical plane.
  • the position detection unit may detect the vehicle front surface as the vertical plane when the vehicle traveling direction in real space is from the back toward the front, and may detect the vehicle rear surface as the vertical plane when the traveling direction is from the front toward the back.
  • according to another aspect, there is provided a traffic volume measuring method including: acquiring a captured image of a road plane and its imaging time; detecting a predetermined detection position based on a vehicle region extracted from the captured image; and measuring the traffic volume based on the voting peak points obtained for combinations of the detection position and the imaging time.
  • a plurality of constituent elements having substantially the same functional configuration may be distinguished by appending different letters or numbers to the same reference numeral.
  • when there is no particular need to distinguish such constituent elements from one another, only the common reference numeral is given.
  • FIG. 1 is a diagram for explaining an outline of an embodiment of the present invention.
  • as shown in FIG. 1, a traffic volume measuring device 10 incorporating an imaging unit and a road plane exist in real space, and the traffic volume measuring device 10 is installed with its imaging direction facing the road plane.
  • the captured image Img' captured by the traffic measuring device 10 shows the boundary lines of the lanes on the road; as shown in FIG. 1, the center of the lens of the traffic measuring device 10 is set as the origin O.
  • FIG. 1 shows an example in which the imaging unit is incorporated in the traffic measurement device 10, but the imaging unit may instead be installed outside the traffic measurement device 10.
  • in that case, the traffic measurement device 10 may acquire the captured image Img', for example, by receiving it from the imaging unit or by reading it from a recording medium.
  • techniques have been proposed for measuring traffic volume from a captured image Img' obtained by imaging a road plane with an imaging unit.
  • in such traffic measurement technology, the traffic volume is generally measured by extracting vehicle silhouette areas from the captured image Img' based on the background difference method and counting the number of vehicle silhouette areas that pass through the imaging range while tracking them.
  • however, when the imaging unit is installed outdoors, the vehicle silhouette region may not be stably extracted due to changes in weather or sunlight.
  • in addition, when the vehicle on the far side is concealed by the vehicle on the near side, the silhouette areas of the near-side and far-side vehicles may overlap, so the vehicle silhouette region may not be extracted stably.
  • FIG. 2 is a diagram illustrating a functional configuration example of the traffic volume measuring device 10 according to the embodiment of the present invention.
  • the traffic volume measuring device 10 according to the embodiment of the present invention includes a control unit 110, an imaging unit 170, a storage unit 180, and an output unit 190.
  • the control unit 110 has a function of controlling the entire operation of the traffic measuring device 10.
  • the imaging unit 170 has a function of acquiring a captured image by imaging a real space, and is configured by, for example, a monocular camera.
  • the storage unit 180 can store a program and data for operating the control unit 110. In addition, the storage unit 180 can temporarily store various data necessary in the course of the operation of the control unit 110.
  • the output unit 190 has a function of performing output in accordance with control by the control unit 110.
  • the type of the output unit 190 is not particularly limited: it may be a measurement result recording device, a device that transmits measurement results to another device via a communication line, a display device, or an audio output device.
  • the imaging unit 170, the storage unit 180, and the output unit 190 are inside the traffic measurement device 10 in this example, but all or some of them may be provided outside the traffic measurement device 10.
  • the control unit 110 includes an information acquisition unit 111, a setting unit 112, an output control unit 113, a position detection unit 114, and a measurement unit 115. Details of these functional units included in the control unit 110 will be described later.
  • calibration can be performed by the traffic measuring device 10 according to the embodiment of the present invention; more specifically, a process of calculating the road plane equation (hereinafter also called the "road plane formula") and a process of calculating the traveling direction of the vehicle may be performed as the calibration.
  • calibration that can be performed by the setting unit 112 will be described with reference to FIGS. 3 and 4.
  • FIG. 3 is a diagram showing parameters used by the setting unit 112.
  • the setting unit 112 first calculates, as a parameter, the size pix_dot of the captured image Img' per unit pixel of the image sensor, based on the size of the image sensor constituting the imaging unit 170 and the size of the captured image Img' provided to the control unit 110.
  • the captured image Img' is generated based on the captured image Img formed on the imaging surface of the image sensor, which is separated from the origin O by the focal length; the captured image Img' provided to the control unit 110 can be acquired by the information acquisition unit 111 and used by the setting unit 112.
  • here, the case where the image sensor is a CCD (Charge Coupled Device) is described as an example, but the CCD is only one example; the image sensor may instead be a CMOS (Complementary Metal Oxide Semiconductor) sensor or the like.
  • the setting unit 112 can calculate pix_dot by the following (Equation 1). Since the CCD size is generally expressed as the length of the CCD diagonal, pix_dot is calculated by dividing the CCD size by the square root of the sum of the squares of the width and height of the captured image Img', as shown in (Equation 1).
  • calculating the parameter pix_dot in this way is only one example; pix_dot may be calculated by other methods, and, for example, the vertical or horizontal length of the CCD may be used instead of its diagonal.
  • the CCD size is easily obtained from, for example, the imaging unit 170, and the size of the captured image Img' is obtained from, for example, the storage unit 180. Based on these sizes, the control unit 110 can grasp the correspondence between the three-dimensional real-space coordinates of the captured image Img formed on the CCD imaging surface and the two-dimensional coordinates of the captured image Img' provided to the control unit 110, and can therefore determine, from the two-dimensional coordinates of Img', the three-dimensional real-space coordinates of Img on the imaging surface.
  • Calibration can be performed using the parameters calculated in this way.
  • the calibration performed using the parameters by the setting unit 112 will be described with reference to FIG.
  • FIG. 4 is a diagram for explaining the function of the setting unit 112.
  • an xyz coordinate system (real space) referenced to the origin O is assumed; in this xyz coordinate system, the road plane equation is R1x + R2y + R3z + R4 = 0.
  • a traveling direction vector v, which indicates the traveling direction of the vehicle, is defined as (vx, vy, vz).
  • a point (the focal point) separated from the origin O by the focal length f is set on the y axis, and the plane passing through this focal point and perpendicular to the y axis is defined as the imaging surface.
  • the captured image Img is formed on this imaging surface, but the setting of the coordinate axes is not limited to this example.
  • of the two straight lines shown in the captured image Img, the first straight line passes through the two points T1 (x1, y1, z1) and T4 (x4, y4, z4), and the second straight line passes through T2 (x2, y2, z2) and T3 (x3, y3, z3).
  • the coordinates of the intersections of the road plane with the straight lines connecting the origin O to each of T1, T2, T3, and T4 are expressed as t1·T1, t2·T2, t3·T3, and t4·T4.
  • the setting unit 112 can perform calibration based on the following (Precondition 1).
  • the setting unit 112 can derive the relational expressions shown in (Equation 2) and (Equation 3) based on the various data acquired as described above and (Condition 1).
  • the setting unit 112 can derive the relational expression shown in (Equation 5) based on the various data acquired as described above and (Condition 3).
  • the setting unit 112 can derive the relational expressions shown in (Equation 6) and (Equation 7) based on the various data acquired as described above and (Condition 4).
  • K1 is a value indicating how many times the distance from the origin O to Q1 (xr1, yr1, zr1) on the road plane is the distance from the origin O to Q1' (xs1, f, zs1) on the captured image Img.
  • similarly, K2 is a value indicating how many times the distance from the origin O to Q2 (xr2, yr2, zr2) on the road plane is the distance from the origin O to Q2' (xs2, f, zs2) on the captured image Img; therefore, the relational expression shown in (Equation 8) can be derived.
  • the setting unit 112 can calculate the measured value Q_dis' of the distance between the two points Q1 and Q2 on the road plane from the relational expression in (Equation 8), using (Equation 9).
  • the setting unit 112 can calculate the values of R1, R2, R3, and R4 that minimize the difference between the measured value Q_dis' and the known distance Q_dis, based on (Equation 1) to (Equation 9).
  • the setting unit 112 can also calculate the road plane equation by other methods; for example, if the distance between the two parallel straight lines on the road plane is known, the road plane equation can be calculated without using (Condition 2) by using that distance.
  • the setting unit 112 can also calculate the traveling direction vector v (vx, vy, vz); more specifically, it can calculate v by calculating the direction of at least one of the two parallel straight lines on the road plane. For example, it may calculate the difference between the coordinates t2·T2 and t3·T3, or the difference between t1·T1 and t4·T4, as the traveling direction vector v.
  • the setting unit 112 can perform calibration by the method described above.
  • the setting unit 112 may set a vehicle travel axis A that is parallel to the traveling direction vector v (vx, vy, vz). If the xyz coordinates are set according to the actual size of the real space as described above, the vehicle travel axis A can also be set according to the actual size in the real space.
  • the setting unit 112 may set a measurement range E1; in that case, the traffic volume may be measured based on the vehicle regions detected within the measurement range E1.
  • the output control unit 113 may cause the output unit 190 to output various information set by the setting unit 112.
  • the captured image Img is provided from the imaging unit 170 to the control unit 110. Furthermore, the imaging unit 170 has a timekeeping function. When the imaging time of the captured image Img is detected by the imaging unit 170, the detected imaging time is provided to the control unit 110. The information acquisition unit 111 acquires the captured image Img and the imaging time provided from the imaging unit 170 in this way.
  • the timekeeping function may be included in the control unit 110, or the time obtained by the control unit 110 may be used.
  • the position detection unit 114 detects a predetermined detection position based on the vehicle area extracted from the captured image Img.
  • the predetermined detection position may be a coordinate on the vehicle travel axis A. That is, the position detection unit 114 may detect coordinates on the vehicle travel axis A as detection positions.
  • the detection by the position detection unit 114 will be described in more detail with reference to FIG.
  • FIG. 5 is a diagram for explaining an example of functions of the position detection unit 114. Note that the example illustrated in FIG. 5 is merely an example of detection by the position detection unit 114, and thus the detection by the position detection unit 114 is not limited to the example illustrated in FIG.
  • the vehicle V is traveling on the road plane.
  • the position detection unit 114 extracts a vehicle region from the captured image Img.
  • the vehicle area may be extracted in any way.
  • for example, the vehicle region may be a region specified from a silhouette extracted by taking the difference between the captured images Img of the frames before and after the frame in which the vehicle V appears, or from a silhouette extracted by taking the difference between a background image and the captured image Img.
  • if the position detection unit 114 further detects edge features based on the silhouette extracted by the above processing and extracts the vehicle region based on those edge features, vehicle region extraction that is more robust against changes in the imaging environment can be performed.
  • specifically, the position detection unit 114 may detect edge features based on the silhouette extracted by the above processing and extract a collection of detected edge features as the vehicle region. However, if only edge detection is performed, the area in which one vehicle V appears may be split into a plurality of edge regions and extracted as such. Therefore, the position detection unit 114 may combine edge features that are at a distance less than a threshold into one edge region: it performs a labeling process on each edge feature and, if labeled edge features are within the threshold distance of each other, merges them into one edge region (a minimal sketch of this merging step follows this item).
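As a concrete illustration of this silhouette, edge-detection, and labeling step, the following Python sketch extracts a silhouette by frame differencing, keeps edge features inside it, and merges edge features that lie closer than a threshold by dilating the edge mask before connected-component labeling. The function name, the thresholds, and the dilation-based merging are assumptions made for illustration, not the patent's implementation.

    import numpy as np
    from scipy import ndimage

    def extract_vehicle_regions(frame_prev, frame_curr, diff_thresh=30, merge_dist=5):
        # frames are assumed to be grayscale numpy arrays of equal shape.
        # Silhouette from the difference between consecutive frames.
        silhouette = np.abs(frame_curr.astype(int) - frame_prev.astype(int)) > diff_thresh

        # Edge features inside the silhouette (simple gradient-magnitude edges).
        gy, gx = np.gradient(frame_curr.astype(float))
        edges = (np.hypot(gx, gy) > diff_thresh) & silhouette

        # Merge edge features at a distance below the threshold into one region:
        # dilation bridges gaps smaller than merge_dist, then labeling groups them.
        bridged = ndimage.binary_dilation(edges, iterations=merge_dist)
        labels, num_regions = ndimage.label(bridged)
        return labels, num_regions

Each labeled region can then be treated as one candidate vehicle region for the position detection described below.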
  • the position detection unit 114 detects the detection position based on the vehicle area as described above.
  • the position detection unit 114 may detect a predetermined vertical plane orthogonal to the vehicle travel axis A based on the vehicle region, and detect an intersection coordinate between the vehicle travel axis A and the predetermined vertical plane as a detection position.
  • the predetermined vertical plane is not limited, but may be the front surface of the vehicle or the back surface of the vehicle. That is, the position detection unit 114 may detect the front surface or the rear surface of the vehicle as a predetermined vertical plane.
  • the position detection unit 114 may switch between detecting the vehicle front surface and the vehicle rear surface as the predetermined vertical plane depending on the situation. For example, it may detect the vehicle front surface as the vertical plane when the traveling direction vector v points from the back toward the front, and the vehicle rear surface when v points from the front toward the back. Since the surface on the near side is likely to appear more clearly in the captured image Img than the surface on the far side, this can be expected to further improve detection accuracy.
  • the position detection unit 114 detects the vehicle front lowest point m1' based on the vehicle region extracted from the captured image Img.
  • the vehicle front lowest point m1' is the point of the vehicle body of the vehicle V that is lowest above the ground, and it may be detected in any way.
  • for example, when the traveling direction vector v points from the right back to the left front, a point (for example, the middle point) on the lowest edge line of the vehicle region (the lower-left line segment) is detected as the vehicle front lowest point m1'.
  • the position detection unit 114 finds the intersection point m0 at which a vertical line of length equal to the minimum ground height h, dropped from the vehicle front lowest point m1 in real space, meets the road plane, and detects the plane that passes through m0 and is perpendicular to the vehicle travel axis A as the vehicle front surface F1.
  • the minimum ground height h may be a predetermined value or a value determined based on the minimum ground height detected so far.
  • when a predetermined value is used as the minimum ground height h, for example, an average of the minimum ground heights of a plurality of vehicles may be used as that predetermined value.
  • the distance between the ground contact point D0, where the vehicle V contacts the road plane, and the lowest plane of the vehicle body is shown as the minimum ground height h.
  • when the traveling direction vector v points from the left front to the right back, a point (for example, the middle point) on the lowest edge line of the vehicle region (the lower-left line segment) is detected as the lowest point m1', and the vehicle rear surface can be detected by a similar method.
  • when the traveling direction vector v points from the left back to the right front, a point (for example, the middle point) on the lowest edge line of the vehicle region (the lower-right line segment) is detected as the lowest point m1', and the vehicle front surface can be detected by the same method.
  • when the traveling direction vector v points from the right front to the left back, a point (for example, the middle point) on the lowest edge line of the vehicle region (the lower-right line segment) is detected as the lowest point m1', and the vehicle rear surface can be detected by the same method.
  • the detection position can be detected based on the vehicle region by the methods described in the above examples; once the detection position is detected by the position detection unit 114, a combination of the detection position and the imaging time is obtained (a geometric sketch of this projection onto the travel axis follows this item).
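The geometry of this step can be sketched as follows: a point on the imaging surface, (xs, f, zs), is projected onto the calibrated road plane R1x + R2y + R3z + R4 = 0 along the ray through the origin O, and the resulting road-plane point is reduced to a scalar coordinate on the vehicle travel axis A. This is a minimal sketch under the coordinate conventions of FIG. 4; the helper names and the choice of an axis origin are assumptions for illustration.

    import numpy as np

    def project_to_road_plane(p_img, plane):
        # Intersect the ray from the origin O through p_img = (xs, f, zs)
        # with the road plane R1*x + R2*y + R3*z + R4 = 0.
        R1, R2, R3, R4 = plane
        t = -R4 / (R1 * p_img[0] + R2 * p_img[1] + R3 * p_img[2])
        return t * np.asarray(p_img, dtype=float)

    def travel_axis_coordinate(p_road, axis_origin, v):
        # Signed coordinate of the road-plane point along the vehicle travel
        # axis A, which passes through axis_origin with direction vector v.
        v = np.asarray(v, dtype=float)
        d = np.asarray(p_road, dtype=float) - np.asarray(axis_origin, dtype=float)
        return float(np.dot(d, v) / np.linalg.norm(v))

The scalar returned by travel_axis_coordinate is the detection position that is paired with the imaging time in the next step.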
  • FIG. 6 is a diagram illustrating an example of a combination of a detection position and an imaging time. Subsequently, the measurement unit 115 measures the traffic volume from the combination of the detection position and the imaging time obtained in this way.
  • the measurement unit 115 votes for combinations in which the detection position and the imaging time follow a consistent rule, and when the vote count reaches a value at or above a certain level (a peak point), the set of combinations satisfying that rule can be regarded as having been detected from one vehicle region. Since each voting peak can therefore be regarded as one vehicle having passed, the measurement unit 115 may measure the number of voting peaks as the traffic volume. Note that a peak point may be defined as a point where the vote count exceeds a threshold value or where the vote count is a local maximum.
  • since the number of voting peaks may be measured as the traffic volume, there is no need to track the vehicle silhouette area and count the number of vehicle silhouette areas that have passed through the imaging range. Therefore, even when accurate tracking of the vehicle silhouette region is difficult, the traffic volume can be measured accurately; according to this method, the accuracy of traffic volume measurement based on a captured image can be improved.
  • if the vehicle moves at a substantially constant speed, the combination of the detection position and the imaging time should change linearly. The measurement unit 115 therefore votes for the set of straight lines passing through each combination, and a straight line whose vote count peaks can be regarded as being satisfied by the set of combinations detected from one vehicle region. The measurement unit 115 thus only needs to measure the number of straight lines whose vote counts peak as the traffic volume, and it can also specify the slope of each such straight line as the vehicle speed.
  • the measurement unit 115 may perform a Hough transform on the combination of the detection position and the imaging time in order to detect a straight line having a peak vote count.
  • with the imaging time as X and the detection position as Y, the XY plane can be converted to the ρ-θ plane by the relationship shown in the following (Formula 1).
  • FIG. 7 is a diagram illustrating the result obtained by applying the Hough transform to the combinations of the detection position and the imaging time illustrated in FIG. 6; two voting peak points P1 and P2 appear, so the measurement unit 115 may measure "2", the number of voting peak points, as the traffic volume.
  • the measuring unit 115 may specify the vehicle speed based on the position of a voting peak point. More specifically, as described above, θ corresponds to the angle between the X axis and the straight line in the XY plane, and the vehicle speed corresponds to the slope of that straight line, so the measurement unit 115 can specify the vehicle speed as tan θ using the θ of the voting peak point. In the example illustrated in FIG. 7, the speeds of the two vehicles are specified as tan θ for each of the voting peak points P1 and P2 (a sketch of this voting and speed computation follows this item).
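A minimal Hough-voting sketch over the (imaging time, detection position) samples might look as follows. Since the exact transform formula is not reproduced above, the parameterization rho = Y*cos(theta) - X*sin(theta) is an assumption, chosen so that theta is the angle between the straight line and the X axis and the vehicle speed equals tan(theta), consistent with the description; the accumulator resolution and the peak threshold are also assumptions.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def hough_count_vehicles(times, positions, n_theta=180, n_rho=200, peak_thresh=10):
        X = np.asarray(times, dtype=float)
        Y = np.asarray(positions, dtype=float)
        thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
        rho_max = float(np.max(np.hypot(X, Y))) + 1e-9
        acc = np.zeros((n_theta, n_rho), dtype=int)

        for x, y in zip(X, Y):  # one vote per theta bin per sample
            rho = y * np.cos(thetas) - x * np.sin(thetas)
            idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
            acc[np.arange(n_theta), idx] += 1

        # A voting peak point: count at or above the threshold and a local maximum.
        peaks = (acc >= peak_thresh) & (acc == maximum_filter(acc, size=5))
        theta_idx, _ = np.nonzero(peaks)
        traffic_volume = int(peaks.sum())            # number of voting peak points
        speeds = np.tan(thetas[theta_idx]).tolist()  # slope of each peak line
        return traffic_volume, speeds

In the situation of FIG. 7, such an accumulator would show two peak points, giving a traffic volume of 2 and one tan(theta) speed per peak.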
  • FIG. 8 is a flowchart showing an operation example of the traffic volume measuring apparatus 10 according to the embodiment of the present invention.
  • the flowchart shown in FIG. 8 merely shows an example of the operation of the traffic volume measuring device 10; therefore, the operation of the traffic volume measuring device 10 is not limited to the operation example shown by this flowchart.
  • the information acquisition unit 111 acquires a captured image in which a road plane is captured by the imaging unit 170 and an imaging time corresponding to the time at which the captured image was captured (step S11). Subsequently, the position detection unit 114 extracts a vehicle region from the captured image (step S12) and detects a predetermined detection position based on the vehicle region (step S13).
  • the measurement unit 115 performs Hough transform on the combination of the detection position and the imaging time, and measures the number of voting peak points on the ⁇ - ⁇ plane as the traffic volume (step S15). Further, the measuring unit 115 identifies the vehicle speed based on the position of the voting peak point on the ⁇ - ⁇ plane (step S16). More specifically, the measurement unit 115 can specify the vehicle speed by tan ⁇ using ⁇ of the voting peak point.
  • as described above, according to the embodiment of the present invention, there is provided a traffic volume measuring device 10 including: an information acquisition unit 111 that acquires a captured image of a road plane and its imaging time; a position detection unit 114 that detects a predetermined detection position based on a vehicle region extracted from the captured image; and a measurement unit 115 that measures the number of voting peaks obtained for combinations of the detection position and the imaging time as the traffic volume.
  • since the number of voting peaks may be measured as the traffic volume, it is not necessary to count vehicle silhouette areas passing through the imaging range while tracking them; even when accurate tracking of the vehicle silhouette region is difficult, the traffic volume can be measured accurately, improving the accuracy of traffic volume measurement based on a captured image.
  • the traveling state of the vehicle is not limited to this example, and the vehicle may travel on a curved road plane.
  • this is because, even in such a case, as in the above example, the detection position based on one vehicle region is estimated to change along a straight line as the imaging time changes.
  • the measurement range E1 may be set along a curved road plane.
  • the case where the speed of the vehicle changes may also be assumed instead of the case where the vehicle moves at a substantially constant speed; in that case, the detection position based on one vehicle region may change substantially along a curve as the imaging time changes.
  • in such a case, the measurement unit 115 may measure the number of curves whose vote counts peak as the traffic volume. Moreover, the measurement unit 115 can specify each vehicle speed by differentiating each curve whose vote count peaks, and can calculate each vehicle's acceleration by its second-order differentiation (a sketch assuming a quadratic curve follows this item).
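The curve family used for such voting is not specified above; assuming, purely for illustration, that a quadratic position-time curve is attributed to one vehicle, the speed and acceleration can be read off the first and second derivatives of the fitted curve.

    import numpy as np

    def speed_and_acceleration(times, positions):
        # Fit s(t) = a2*t^2 + a1*t + a0 to the samples of one vehicle.
        a2, a1, a0 = np.polyfit(np.asarray(times, dtype=float),
                                np.asarray(positions, dtype=float), deg=2)
        speed_at = lambda t: 2.0 * a2 * t + a1  # ds/dt
        acceleration = 2.0 * a2                 # d2s/dt2, constant for a quadratic
        return speed_at, acceleration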
  • each block constituting the control unit 110 can be realized by, for example, a CPU (Central Processing Unit) and a RAM (Random Access Memory); its function is realized by the CPU loading into the RAM and executing a program stored in the storage unit 180.
  • alternatively, each block constituting the control unit 110 may be configured by dedicated hardware or by a combination of several pieces of hardware.

Abstract

[Problem] To provide a technology that improves the precision of traffic volume measurement when measuring traffic volume on the basis of a captured image. [Solution] Provided is a traffic volume measurement device (10) comprising: an information acquisition unit (111) that acquires a captured image of a road surface and its image capture time; a location detection unit (114) that detects a prescribed detection location on the basis of a vehicle region extracted from the captured image; and a measurement unit (115) that measures, as the traffic volume, the number of vote peaks obtained for combinations of detection locations and image capture times.

Description

Traffic volume measuring apparatus and traffic volume measuring method

The present invention relates to a traffic volume measuring device and a traffic volume measuring method.

In recent years, techniques have been developed for measuring traffic volume from a captured image obtained by capturing a road plane with a camera. In such traffic measurement technology, the traffic volume is generally measured by extracting vehicle silhouette areas from the captured image based on the background difference method and counting the number of vehicle silhouette areas that pass through the imaging range while tracking them (see, for example, Patent Document 1).

Patent Document 1: JP-A-7-262489

However, if the vehicle silhouette area cannot be stably extracted from the captured image by the background subtraction method, it may be difficult to accurately track the vehicle silhouette area. In such a situation, it is difficult to accurately count the number of vehicle silhouette areas that have passed through the imaging range, and it may therefore be difficult to accurately measure the traffic volume.

For example, when the camera is installed outdoors, the vehicle silhouette region may not be stably extracted due to changes in weather or sunlight. In addition, when the vehicle on the far side is concealed by the vehicle on the near side, the silhouette areas of the near-side and far-side vehicles may overlap, so the vehicle silhouette region may not be extracted stably.

The present invention has been made in view of the above problems, and its object is to provide a technique that improves the accuracy of traffic volume measurement when traffic volume is measured on the basis of a captured image.
In order to solve the above problem, according to an aspect of the present invention, there is provided a traffic volume measuring device comprising: an information acquisition unit that acquires a captured image of a road plane and its imaging time; a position detection unit that detects a predetermined detection position based on a vehicle region extracted from the captured image; and a measurement unit that measures the traffic volume based on the number of voting peaks obtained for combinations of the detection position and the imaging time.

The measurement unit may perform a Hough transform on the combinations of the detection position and the imaging time and measure the number of voting peak points in the Hough space as the traffic volume.

The position detection unit may detect edge features from the captured image and extract the vehicle region based on the edge features.

The measuring unit may specify the vehicle speed based on the position of a voting peak point.

The position detection unit may detect, as the detection position, coordinates on a vehicle travel axis in a real space set according to actual size.

The position detection unit may detect a predetermined vertical plane orthogonal to the vehicle travel axis in real space based on the vehicle region, and may detect the intersection coordinates of the vehicle travel axis and the vertical plane as the detection position.

The position detection unit may detect the vehicle front surface or the vehicle rear surface as the predetermined vertical plane.

The position detection unit may detect the vehicle front surface as the vertical plane when the vehicle traveling direction in real space is from the back toward the front, and may detect the vehicle rear surface as the vertical plane when the traveling direction is from the front toward the back.

According to another aspect of the present invention, there is provided a traffic volume measuring method including: acquiring a captured image of a road plane and its imaging time; detecting a predetermined detection position based on a vehicle region extracted from the captured image; and measuring the traffic volume based on the voting peak points obtained for combinations of the detection position and the imaging time.

As described above, according to the present invention, it is possible to improve the accuracy of traffic volume measurement when traffic volume is measured based on a captured image.
FIG. 1 is a diagram for explaining the outline of an embodiment of the present invention.
FIG. 2 is a diagram showing a functional configuration example of a traffic volume measuring device according to an embodiment of the present invention.
FIG. 3 is a diagram showing an example of the parameters used by a setting unit.
FIG. 4 is a diagram for explaining an example of the function of the setting unit.
FIG. 5 is a diagram for explaining an example of the function of a position detection unit.
FIG. 6 is a diagram showing an example of combinations of a detection position and an imaging time.
FIG. 7 is a diagram showing the result obtained by applying a Hough transform to the example combinations of detection position and imaging time shown in FIG. 6.
FIG. 8 is a flowchart showing an operation example of the traffic volume measuring device according to the embodiment of the present invention.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the present specification and drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.

In the present specification and drawings, a plurality of constituent elements having substantially the same functional configuration may also be distinguished by appending different letters or numbers to the same reference numeral. However, when there is no particular need to distinguish such constituent elements from one another, only the common reference numeral is given.
[Description of overview]

Next, an outline of an embodiment of the present invention will be described. FIG. 1 is a diagram for explaining the outline of an embodiment of the present invention. As shown in FIG. 1, a traffic volume measuring device 10 incorporating an imaging unit and a road plane exist in real space, and the traffic volume measuring device 10 is installed with its imaging direction facing the road plane. The captured image Img' captured by the traffic measuring device 10 shows the boundary lines of the lanes on the road. Also, as shown in FIG. 1, the center of the lens of the traffic measuring device 10 is set as the origin O.

FIG. 1 shows an example in which the imaging unit is incorporated in the traffic measurement device 10, but the imaging unit may instead be installed outside the traffic measurement device 10. In such a case, for example, the traffic measurement device 10 may acquire the captured image Img' by receiving it from the imaging unit, or by reading it from a recording medium.
Here, a technique has been proposed for measuring traffic volume from a captured image Img' obtained by imaging a road plane with an imaging unit. In such traffic measurement technology, the traffic volume is generally measured by extracting vehicle silhouette areas from the captured image Img' based on the background difference method and counting the number of vehicle silhouette areas that pass through the imaging range while tracking them.

However, when the vehicle silhouette region cannot be stably extracted from the captured image Img' based on the background subtraction method, it may be difficult to accurately track the vehicle silhouette region. In such a situation, it is difficult to accurately count the number of vehicle silhouette regions that have passed through the imaging range, and it may therefore be difficult to accurately measure the traffic volume.

For example, when the imaging unit is installed outdoors, the vehicle silhouette region may not be stably extracted due to changes in weather or sunlight. In addition, when the vehicle on the far side is concealed by the vehicle on the near side, the silhouette areas of the near-side and far-side vehicles may overlap, so the vehicle silhouette region may not be extracted stably.

Therefore, this specification proposes a technique for improving the accuracy of traffic volume measurement when measuring traffic volume based on a captured image.

The outline of the embodiment of the present invention has been described above.
[Details of the embodiment]

Next, details of the embodiment of the present invention will be described. First, the functional configuration of the traffic volume measuring device 10 according to the embodiment of the present invention will be described. FIG. 2 is a diagram showing a functional configuration example of the traffic volume measuring device 10 according to the embodiment of the present invention. As shown in FIG. 2, the traffic volume measuring device 10 includes a control unit 110, an imaging unit 170, a storage unit 180, and an output unit 190.

The control unit 110 has the function of controlling the overall operation of the traffic measuring device 10. The imaging unit 170 has the function of acquiring a captured image by imaging real space and is configured by, for example, a monocular camera. The storage unit 180 can store programs and data for operating the control unit 110, and can also temporarily store various data needed during the operation of the control unit 110. The output unit 190 has the function of producing output under the control of the control unit 110. The type of the output unit 190 is not particularly limited: it may be a measurement result recording device, a device that transmits measurement results to another device via a communication line, a display device, or an audio output device.

In the example shown in FIG. 2, the imaging unit 170, the storage unit 180, and the output unit 190 are inside the traffic measurement device 10, but all or some of them may be provided outside the traffic measurement device 10. The control unit 110 includes an information acquisition unit 111, a setting unit 112, an output control unit 113, a position detection unit 114, and a measurement unit 115; details of these functional units will be described later.

The functional configuration example of the traffic measurement device 10 according to the embodiment of the present invention has been described above.
First, calibration can be performed by the traffic measuring device 10 according to the embodiment of the present invention. More specifically, a process of calculating the road plane equation (hereinafter also called the "road plane formula") and a process of calculating the traveling direction of the vehicle may be performed as the calibration. Hereinafter, the calibration that can be performed by the setting unit 112 will be described with reference to FIGS. 3 and 4.

FIG. 3 is a diagram showing the parameters used by the setting unit 112. The setting unit 112 first calculates, as a parameter, the size pix_dot of the captured image Img' per unit pixel of the image sensor, based on the size of the image sensor constituting the imaging unit 170 and the size of the captured image Img' provided to the control unit 110. The captured image Img' is generated based on the captured image Img formed on the imaging surface of the image sensor, which is separated from the origin O by the focal length. The captured image Img' provided to the control unit 110 can be acquired by the information acquisition unit 111 and used by the setting unit 112.

As shown in FIG. 3, the case where the image sensor is a CCD (Charge Coupled Device) is described here as an example, but the CCD is only one example of an image sensor; a CMOS (Complementary Metal Oxide Semiconductor) sensor or the like may be used instead.

Here, if the CCD size is ccd_size and the size of the captured image Img' (width × height) is img_size, the setting unit 112 can calculate pix_dot by the following (Equation 1). Since the CCD size is generally expressed as the length of the CCD diagonal, pix_dot is calculated by dividing the CCD size by the square root of the sum of the squares of the width and height of the captured image Img', as shown in (Equation 1). However, calculating the parameter pix_dot in this way is only one example; pix_dot may be calculated by other methods, and, for example, the vertical or horizontal length of the CCD may be used instead of its diagonal.
pix_dot = ccd_size / √(width² + height²)    … (Equation 1)
The CCD size is easily obtained from, for example, the imaging unit 170, and the size of the captured image Img' is obtained from, for example, the storage unit 180. Based on these sizes, the control unit 110 can grasp the correspondence between the three-dimensional real-space coordinates of the captured image Img formed on the CCD imaging surface and the two-dimensional coordinates of the captured image Img' provided to the control unit 110. That is, based on this correspondence, the control unit 110 can determine, from the two-dimensional coordinates of the captured image Img', the three-dimensional real-space coordinates of the captured image Img on the CCD imaging surface.
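As a numeric illustration of (Equation 1) and of the coordinate correspondence just described, the following sketch computes pix_dot and maps a pixel of Img' to a real-space point on the imaging surface; the pixel-origin and axis-orientation conventions are assumptions made for illustration.

    import math

    def pix_dot(ccd_diag_mm, img_width_px, img_height_px):
        # (Equation 1): CCD diagonal divided by the diagonal pixel count.
        return ccd_diag_mm / math.sqrt(img_width_px ** 2 + img_height_px ** 2)

    def pixel_to_sensor_coords(u, v, width, height, p, f):
        # Map pixel (u, v) of Img' (top-left origin assumed) to the 3-D point
        # on the imaging surface y = f, with the image center on the y axis.
        xs = (u - width / 2.0) * p   # horizontal offset from the optical axis
        zs = (height / 2.0 - v) * p  # vertical offset; image rows grow downward
        return (xs, f, zs)

    # Example: a CCD with a 6.0 mm diagonal and a 640 x 480 image
    p = pix_dot(6.0, 640, 480)       # 6.0 / 800 = 0.0075 mm per pixel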
Calibration can be performed using the parameters calculated in this way. Hereinafter, the calibration performed by the setting unit 112 using these parameters will be described with reference to FIG. 4.

FIG. 4 is a diagram for explaining the function of the setting unit 112. As shown in FIG. 4, an xyz coordinate system (real space) referenced to the origin O is assumed. In this xyz coordinate system, the road plane equation is R1x + R2y + R3z + R4 = 0, and the traveling direction vector v, which indicates the traveling direction of the vehicle, is (vx, vy, vz). In the following description, as shown in FIG. 4, a point (the focal point) separated from the origin O by the focal length f is set on the y axis, and the plane passing through this focal point and perpendicular to the y axis is taken as the imaging surface on which the captured image Img is formed; however, the setting of the coordinate axes is not limited to this example.

Two parallel straight lines are drawn in advance on the road plane, so these two parallel straight lines appear in the captured image Img. Two points Q1 and Q2 separated by a known distance Q_dis are also drawn in advance on the road plane; they appear in the captured image Img as Q1' (xs1, f, zs1) and Q2' (xs2, f, zs2). In the example shown in FIG. 4, Q1 and Q2 are drawn as points on the two parallel straight lines, but Q1 and Q2 are not particularly limited as long as they are points on the road plane.

Of the two straight lines appearing in the captured image Img, the first straight line passes through the two points T1 (x1, y1, z1) and T4 (x4, y4, z4), and the second straight line passes through T2 (x2, y2, z2) and T3 (x3, y3, z3). Then, as shown in FIG. 4, the coordinates of the intersections of the road plane with the straight lines connecting the origin O to each of T1, T2, T3, and T4 are expressed as t1·T1, t2·T2, t3·T3, and t4·T4. The setting unit 112 can perform calibration based on, for example, the following (Precondition 1).
(Precondition 1)
(Condition 1) The direction vectors of the two parallel straight lines on the road plane are the same.
(Condition 2) The roll of the imaging unit 170 is zero.
(Condition 3) The distance from the origin O to the road plane is the height H.
(Condition 4) Q1 and Q2, separated by the distance Q_dis, exist on the road plane.
Note that a roll of zero means that the imaging unit 170 is installed so that an object standing perpendicular to the road plane also appears vertical in the captured image Img.
The setting unit 112 can derive the relational expressions shown in the following (Equation 2) and (Equation 3) based on the various data acquired as described above and (Condition 1).
[(Formula 2): equation image]
[(Formula 3): equation image]
Also, based on the various data acquired as described above and (Condition 2), the setting unit 112 can derive the relational expression shown in (Formula 4) below. If the roll is zero, the component of the normal to the road plane along the x-axis direction (in the example shown in FIG. 4) becomes zero, which simplifies the calculation (for example, if the x-axis component of the normal is zero, the calculation can proceed with R1 = 0).
[(Formula 4): equation image]
Also, based on the various data acquired as described above and (Condition 3), the setting unit 112 can derive the relational expression shown in (Formula 5) below.
[(Formula 5): equation image]
Also, based on the various data acquired as described above and (Condition 4), the setting unit 112 can derive the relational expressions shown in (Formula 6) and (Formula 7) below.
[(Formula 6): equation image]
[(Formula 7): equation image]
Here, K1 is the ratio of the distance from the origin O to Q1(xr1, yr1, zr1) on the road plane to the distance from the origin O to Q1'(xs1, f, zs1) in the captured image Img. Similarly, K2 is the ratio of the distance from the origin O to Q2(xr2, yr2, zr2) on the road plane to the distance from the origin O to Q2'(xs2, f, zs2) in the captured image Img. Therefore, the relational expression shown in (Formula 8) below can be derived.
[(Formula 8): equation image]
From the relational expression shown in (Formula 8), the setting unit 112 can calculate the measured value Q_dis' of the distance between the two points Q1 and Q2 on the road plane by (Formula 9) below.
[(Formula 9): equation image]
The setting unit 112 can calculate, based on (Formula 1) to (Formula 9), the values of R1, R2, R3 and R4 that minimize the difference between the measured value Q_dis' and the known distance Q_dis. By calculating R1, R2, R3 and R4 in this way, the road plane equation R1x + R2y + R3z + R4 = 0 is determined.
The road plane calculation method described above is only an example, and the setting unit 112 can also calculate the road plane equation by other methods. For example, if the distance between the two parallel straight lines on the road plane is known, the road plane equation can be calculated without using (Condition 2) by using that distance.
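As one concrete illustration of the minimization just described, the Python sketch below estimates the plane coefficients by minimizing the squared difference between Q_dis' and Q_dis, under the camera model of FIG. 4 (image points given as (xs, f, zs)). It is a minimal sketch, not the patented procedure itself: it applies only the (Condition 4) distance term together with R1 = 0 from (Condition 2), and all function names and numeric values are hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    def plane_scale(p_img, R):
        # Scale K such that K * p_img lies on R1*x + R2*y + R3*z + R4 = 0,
        # i.e. the intersection of the viewing ray with the road plane
        # (this plays the role of K1, K2 in (Formula 6) and (Formula 7)).
        R1, R2, R3, R4 = R
        return -R4 / (R1 * p_img[0] + R2 * p_img[1] + R3 * p_img[2])

    def q_dis_error(params, q1_img, q2_img, q_dis):
        # Squared difference between the measured distance Q_dis'
        # ((Formula 9)) and the known distance Q_dis, with R1 fixed to 0.
        R = (0.0, params[0], params[1], params[2])
        q1 = plane_scale(q1_img, R) * q1_img
        q2 = plane_scale(q2_img, R) * q2_img
        return (np.linalg.norm(q1 - q2) - q_dis) ** 2

    f = 1.0                              # focal length (hypothetical, normalized)
    q1_img = np.array([0.10, f, -0.20])  # Q1' (hypothetical image point)
    q2_img = np.array([0.15, f, -0.35])  # Q2' (hypothetical image point)
    Q_DIS = 5.0                          # known distance between Q1 and Q2 [m]

    res = minimize(q_dis_error, x0=np.array([1.0, 0.1, -2.0]),
                   args=(q1_img, q2_img, Q_DIS))
    R = (0.0, *res.x)
    print("estimated road plane coefficients R1..R4:", R)

In the full method, (Condition 1) and (Condition 3) also constrain the solution; a single distance constraint alone does not determine the plane uniquely, so this sketch only shows the shape of the optimization.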
The setting unit 112 can also calculate the traveling direction vector v(vx, vy, vz). More specifically, the setting unit 112 can calculate the traveling direction vector v by calculating the direction of at least one of the two parallel straight lines on the road plane. For example, the setting unit 112 may calculate the difference between the coordinates t2·T2 and t3·T3, or between the coordinates t1·T1 and t4·T4, as the traveling direction vector v.
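Continuing the sketch above, the traveling direction vector can be obtained as the difference of two line points back-projected onto the estimated plane; the function below reuses plane_scale() from the previous listing and is likewise a hypothetical illustration.

    def direction_vector(t_img_a, t_img_b, R):
        # Back-project two image points on one road line (e.g. T2 and T3)
        # onto the road plane and take their normalized difference,
        # corresponding to t3*T3 - t2*T2 in the text.
        pa = plane_scale(t_img_a, R) * np.asarray(t_img_a)
        pb = plane_scale(t_img_b, R) * np.asarray(t_img_b)
        v = pb - pa
        return v / np.linalg.norm(v)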
By the method described above, the setting unit 112 can perform calibration. The road plane equation R1x + R2y + R3z + R4 = 0 and the traveling direction vector v(vx, vy, vz) calculated by this calibration can be used for measuring the traffic volume and the vehicle speed. As shown in FIG. 4, the setting unit 112 may set a vehicle travel axis A parallel to the traveling direction vector v(vx, vy, vz). If the xyz coordinates are set according to the actual size of the real space as described above, the vehicle travel axis A can also be set to actual size in the real space.
As shown in FIG. 4, the setting unit 112 may also set a measurement range E1, in which case the traffic volume can be measured based on the vehicle regions extracted from the measurement range E1. For example, the setting unit 112 may set the measurement range E1 based on an input operation, or may set it automatically based on the traveling direction vector v(vx, vy, vz). However, when the imaging range itself is used as the measurement range, the measurement range need not be set explicitly. In the following, for simplicity, the case where the imaging range itself is the measurement range is mainly described. The output control unit 113 may cause the output unit 190 to output the various information set by the setting unit 112.
The calibration performed by the setting unit 112 has been described above.
Next, the details of the traffic volume measurement based on the captured image Img of the road plane will be described. As described above, the captured image Img is provided from the imaging unit 170 to the control unit 110. The imaging unit 170 also has a timekeeping function: when the imaging unit 170 detects the imaging time of the captured image Img, the detected imaging time is provided to the control unit 110. The information acquisition unit 111 acquires the captured image Img and the imaging time provided in this way from the imaging unit 170. Alternatively, the control unit 110 may have the timekeeping function, and the time obtained by the control unit 110 may be used.
Next, the position detection unit 114 detects a predetermined detection position based on the vehicle region extracted from the captured image Img. The predetermined detection position may be a coordinate on the vehicle travel axis A; that is, the position detection unit 114 may detect a coordinate on the vehicle travel axis A as the detection position. The detection by the position detection unit 114 will be described in more detail with reference to FIG. 5. FIG. 5 is a diagram for explaining an example of the function of the position detection unit 114. Note that the example shown in FIG. 5 is merely one example of detection by the position detection unit 114, and the detection is not limited to this example.
Referring to FIG. 5, a vehicle V is traveling on the road plane. First, the position detection unit 114 extracts a vehicle region from the captured image Img. The vehicle region may be extracted in any way. For example, the vehicle region may be a region specified from a silhouette extracted by the difference between the captured images Img of the frames before and after the one in which the vehicle V appears in FIG. 5, or from a silhouette extracted by the difference between a background image and the captured image Img.
In addition, many edge features, such as the vehicle contour and the windshield, are detected on the vehicle V in the captured image. Therefore, if the position detection unit 114 further detects edge features from the vehicle region based on the silhouette extracted by the above processing and detects the vehicle region based on these edge features, vehicle regions can be extracted with higher accuracy against changes in the imaging environment.
Specifically, the position detection unit 114 may detect edge features based on the silhouette extracted by the above processing and extract a collection of the detected edge features as the vehicle region. However, with edge detection alone, the region in which a single vehicle V appears may be split into a plurality of edge regions. Therefore, the position detection unit 114 may combine edge features that are less than a threshold distance apart into one edge region. Specifically, the position detection unit 114 may perform labeling on the edge features and, if labeled edge features are less than the threshold distance apart, combine them into one edge region.
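A minimal sketch of this edge-based extraction, assuming OpenCV is available; the Canny thresholds and merge distance are hypothetical tuning values, and the single-pass merge is a simplification of the labeling described above.

    import cv2
    import numpy as np

    def _gap(a, b):
        # Gap between two (x, y, w, h) boxes (0 if they overlap).
        dx = max(b[0] - (a[0] + a[2]), a[0] - (b[0] + b[2]), 0)
        dy = max(b[1] - (a[1] + a[3]), a[1] - (b[1] + b[3]), 0)
        return max(dx, dy)

    def _union(a, b):
        x, y = min(a[0], b[0]), min(a[1], b[1])
        x2 = max(a[0] + a[2], b[0] + b[2])
        y2 = max(a[1] + a[3], b[1] + b[3])
        return [x, y, x2 - x, y2 - y]

    def extract_vehicle_regions(frame_gray, silhouette_mask, merge_dist=20):
        # Edge features (vehicle contour, windshield, ...) inside the silhouette.
        edges = cv2.Canny(frame_gray, 100, 200)
        edges = cv2.bitwise_and(edges, edges, mask=silhouette_mask)

        # Label connected edge fragments, then merge fragments whose mutual
        # distance is below the threshold so one vehicle stays one region.
        n, _, stats, _ = cv2.connectedComponentsWithStats(edges)
        regions = []
        for i in range(1, n):  # label 0 is the background
            box = list(stats[i, :4])
            for r in regions:
                if _gap(r, box) < merge_dist:
                    r[:] = _union(r, box)
                    break
            else:
                regions.append(box)
        return regions  # candidate vehicle regions as (x, y, w, h)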
Next, the position detection unit 114 detects the detection position based on the vehicle region as described above. For example, the position detection unit 114 may detect a predetermined vertical plane orthogonal to the vehicle travel axis A based on the vehicle region, and detect the coordinates of the intersection between the vehicle travel axis A and this vertical plane as the detection position. The predetermined vertical plane is not limited, but may be the vehicle front surface or the vehicle rear surface; that is, the position detection unit 114 may detect the vehicle front surface or the vehicle rear surface as the predetermined vertical plane.
Alternatively, the position detection unit 114 may switch between detecting the vehicle front surface and the vehicle rear surface as the predetermined vertical plane depending on the situation. For example, the position detection unit 114 may detect the vehicle front surface as the vertical plane when the traveling direction vector v points from the back toward the front, and detect the vehicle rear surface when the traveling direction vector v points from the front toward the back. Since the near side of the vehicle is likely to appear more clearly in the captured image Img than the far side, this can be expected to improve the detection accuracy.
Here, an example of a technique for detecting the vehicle front surface F1 as the predetermined vertical plane when the traveling direction vector v points from the back right to the front left will be described. First, as shown in FIG. 5, the position detection unit 114 detects the vehicle front lowest point m1' based on the vehicle region extracted from the captured image Img. The vehicle front lowest point m1' is the point of the body of the vehicle V with the lowest height above the ground. The vehicle front lowest point m1' may be detected in any way; for example, when the traveling direction vector v points from the back right to the front left, a point (for example, the midpoint) on the edge line that is lowest in the image with respect to a plane perpendicular to the vehicle travel axis A (the lower-left line segment of the vehicle region) is detected as the vehicle front lowest point m1'.
Next, the position detection unit 114 drops a perpendicular of length equal to the minimum ground clearance h from the vehicle front lowest point m1 in real space to the road plane, and detects the plane perpendicular to the vehicle travel axis A that passes through the foot of this perpendicular, m0, as the vehicle front surface F1. Here, the minimum ground clearance h may be a predetermined value or a value determined based on minimum ground clearances detected so far. When a predetermined value is used as the minimum ground clearance h, for example, the average of the minimum ground clearances of a plurality of vehicles may be used. In FIG. 5, the distance between the ground contact point D0, where the vehicle V touches the road plane, and the lower plane of the vehicle body is shown as the minimum ground clearance h.
Although the technique for detecting the vehicle front surface F1 as the predetermined vertical plane has been described here for the case where the traveling direction vector v points from the back right to the front left, the case where the traveling direction vector v points from the front left to the back right is handled similarly: a point (for example, the midpoint) on the lower edge line of the vehicle rear (the lower-left line segment of the vehicle region) is detected as the lowest ground point m1', and the vehicle rear surface can be detected by the same technique.
Likewise, when the traveling direction vector v points from the back left to the front right, a point (for example, the midpoint) on the lower edge line of the vehicle front (the lower-right line segment of the vehicle region) is detected as the lowest ground point m1', and the vehicle front surface can be detected by the same technique. When the traveling direction vector v points from the front right to the back left, a point (for example, the midpoint) on the lower edge line of the vehicle rear (the lower-right line segment of the vehicle region) is detected as the lowest ground point m1', and the vehicle rear surface can be detected by the same technique.
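A geometric sketch of this step, continuing the Python listings above: the viewing ray through the image point m1' is intersected with the plane lying at height h above the road (which contains m1), the perpendicular of length h is dropped to obtain m0, and the signed coordinate of m0 along the travel axis A is returned. The orientation of the plane normal, the sign of the height offset, and all names are assumptions of this sketch.

    def detection_position(m1_img, R, axis_origin, axis_dir, h):
        # R = (R1, R2, R3, R4); axis_dir is the unit traveling-direction
        # vector v, axis_origin any reference point on the travel axis A.
        n = np.asarray(R[:3], dtype=float)
        n_len = np.linalg.norm(n)
        # Intersect the ray through m1' with the plane parallel to the
        # road at height h (the sign of the h term depends on the chosen
        # normal orientation).
        k = (h * n_len - R[3]) / (n @ np.asarray(m1_img))
        m1 = k * np.asarray(m1_img)   # vehicle front lowest point in real space
        m0 = m1 - h * (n / n_len)     # foot of the perpendicular on the road
        return (m0 - axis_origin) @ axis_dir  # coordinate along travel axis A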
By the techniques described in the above examples, the detection position can be detected based on the vehicle region. When the position detection unit 114 detects a detection position, a combination of the detection position and the imaging time is obtained. FIG. 6 is a diagram showing an example of combinations of detection positions and imaging times. The measurement unit 115 then measures the traffic volume from the combinations of detection positions and imaging times obtained in this way.
Since a passing vehicle can normally be regarded as moving in a straight line at a constant speed, its detection positions and imaging times satisfy a fixed relationship. The measurement unit 115 votes for the combinations of detection position and imaging time that satisfy such a relationship, and when the vote count reaches a certain value or more (a peak point), the corresponding set of combinations can be regarded as having been detected from a single vehicle region. It can therefore be considered that one vehicle has passed for each voting peak, so the measurement unit 115 may measure the number of voting peaks as the traffic volume. Note that a peak point may be taken to be a point where the vote count exceeds a threshold and is a local maximum.
According to this technique, since it suffices to measure the number of voting peaks as the traffic volume, there is no need to track vehicle silhouette regions and count the number that are successfully tracked through the imaging range. Therefore, even in situations where it is difficult to track a vehicle silhouette region accurately, the traffic volume can be measured accurately. This technique thus improves the accuracy of traffic volume measurement when the traffic volume is measured based on captured images.
The description continues with a concrete example. When a vehicle travels on a substantially straight road, as in the example shown in FIG. 5, the vehicle can be assumed to move in a nearly straight line at a constant speed. Therefore, as shown in FIG. 6, when the combinations of imaging time and detection position are plotted in two-dimensional coordinates, the detection positions derived from a single vehicle region can be expected to change almost along a straight line as the imaging time changes.
Assuming, then, that the combinations of detection position and imaging time should change linearly, the measurement unit 115 votes over the set of straight lines passing through each combination, and a straight line at which the vote count peaks can be regarded as being satisfied by the set of combinations detected from a single vehicle region. The measurement unit 115 therefore measures the number of straight lines at which the vote count peaks as the traffic volume. The measurement unit 115 can also identify the slope of each such straight line as the corresponding vehicle speed.
How the straight lines at which the vote count peaks are detected is not particularly limited. For example, the measurement unit 115 may apply a Hough transform to the combinations of detection position and imaging time. In this case, with the imaging time as X and the detection position as Y, the X-Y plane can be transformed into the ρ-θ plane by the relationship shown in (Equation 1) below.
ρ = X·cosθ + Y·sinθ  (Equation 1)
Here, ρ corresponds to the distance from the origin O to the straight line in the X-Y plane, and θ corresponds to the angle between the X axis and the straight line. One straight line in the X-Y plane is represented as an intersection of curves in the ρ-θ plane. FIG. 7 shows the result of applying the Hough transform to the combinations of detection positions and imaging times shown in FIG. 6. In the example shown in FIG. 7, voting peak points P1 and P2 appear, and the measurement unit 115 measures "2", the number of voting peak points, as the traffic volume.
The measurement unit 115 may also identify the vehicle speed based on the position of a voting peak point. More specifically, considering that, as described above, θ corresponds to the angle between the X axis and the straight line in the X-Y plane and the vehicle speed corresponds to the slope of the straight line in the X-Y plane, the measurement unit 115 can identify the vehicle speed as tanθ using the θ of the voting peak point. In the example shown in FIG. 7, the measurement unit 115 can identify the speeds of the two vehicles as tanθ using the θ of each of the voting peak points P1 and P2.
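The following sketch implements this voting in Python: (Equation 1) maps each (time, position) pair to a sinusoid in the ρ-θ plane, votes are accumulated over a discretized grid, peak cells are counted as vehicles, and tanθ of each peak is reported as the speed, as in the text. Bin sizes and the peak threshold are hypothetical tuning parameters.

    import numpy as np

    def count_vehicles(times, positions, n_theta=180, rho_bin=0.5, peak_votes=8):
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        rho_max = float(np.hypot(np.max(np.abs(times)), np.max(np.abs(positions))))
        n_rho = int(np.ceil(2 * rho_max / rho_bin)) + 1
        acc = np.zeros((n_rho, n_theta), dtype=int)

        for x, y in zip(times, positions):
            # (Equation 1): rho = X*cos(theta) + Y*sin(theta)
            rhos = x * np.cos(thetas) + y * np.sin(thetas)
            rows = np.round((rhos + rho_max) / rho_bin).astype(int)
            acc[rows, np.arange(n_theta)] += 1

        # A peak point: vote count over the threshold and a local maximum.
        peaks = []
        for r, t in zip(*np.where(acc >= peak_votes)):
            if acc[r, t] == acc[max(r - 1, 0):r + 2, max(t - 1, 0):t + 2].max():
                peaks.append((r, t))
        speeds = [np.tan(thetas[t]) for _, t in peaks]  # slope as vehicle speed
        return len(peaks), speeds

For data like that of FIG. 6, the returned count would correspond to the two peak points P1 and P2 of FIG. 7. A real implementation would additionally suppress adjacent cells with equal counts so that one plateau is not counted twice.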
The details of the traffic volume measurement based on the captured image Img of the road plane have been described above.
Next, an operation example of the traffic volume measurement device 10 according to the embodiment of the present invention will be described. FIG. 8 is a flowchart showing an operation example of the traffic volume measurement device 10 according to the embodiment of the present invention. Note that the flowchart shown in FIG. 8 merely shows one example of the operation of the traffic volume measurement device 10, and the operation is not limited to this example.
As shown in FIG. 8, first, in the traffic volume measurement device 10, the information acquisition unit 111 acquires a captured image of the road plane captured by the imaging unit 170 and the imaging time corresponding to the time at which the captured image was captured (step S11). Next, the position detection unit 114 extracts a vehicle region from the captured image (step S12) and detects a predetermined detection position based on the vehicle region (step S13).
Next, the measurement unit 115 applies the Hough transform to the combinations of detection position and imaging time and measures the number of voting peak points in the ρ-θ plane as the traffic volume (step S15). The measurement unit 115 also identifies the vehicle speed based on the position of each voting peak point in the ρ-θ plane (step S16); more specifically, the measurement unit 115 can identify the vehicle speed as tanθ using the θ of the voting peak point.
[Description of Effects]
As described above, according to the embodiment of the present invention, there is provided a traffic volume measurement device 10 including an information acquisition unit 111 that acquires a captured image of a road plane and the imaging time, a position detection unit 114 that detects a predetermined detection position based on the vehicle region extracted from the captured image, and a measurement unit 115 that measures, as the traffic volume, the number of voting peaks obtained for the combinations of the detection position and the imaging time.
With this configuration, since it suffices to measure the number of voting peaks as the traffic volume, there is no need to track vehicle silhouette regions through the imaging range. Therefore, even in situations where it is difficult to track a vehicle silhouette region accurately, the traffic volume can be measured accurately. This technique thus improves the accuracy of traffic volume measurement when the traffic volume is measured based on captured images.
[Description of Modifications]
The preferred embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field to which the present invention belongs can conceive of various changes or modifications within the scope of the technical idea described in the claims, and it is understood that these naturally belong to the technical scope of the present invention.
For example, the above example describes the case where the vehicle moves in a nearly straight line at a constant speed. However, the traveling situation of the vehicle is not limited to this example; the vehicle may travel on a curved road plane. Even in such a case the vehicle can be assumed to move at a nearly constant speed, so if the vehicle travel axis A is set along the curved road plane, the detection positions derived from a single vehicle region can, as in the above example, be expected to change almost along a straight line as the imaging time changes. In such a case, the measurement range E1 may also be set along the curved road plane.
Furthermore, the speed of the vehicle may change rather than remaining nearly constant. For example, when the combinations of imaging time and detection position are plotted in two-dimensional coordinates, the detection positions derived from a single vehicle region may be expected to change almost along a curve as the imaging time changes.
In such a case, assuming that the combinations of detection position and imaging time should change along a curve, the measurement unit 115 votes over the set of curves passing through each combination, and a curve at which the vote count peaks can be regarded as being satisfied by the set of combinations detected from a single vehicle region. The measurement unit 115 therefore measures the number of curves at which the vote count peaks as the traffic volume. The measurement unit 115 can also identify each vehicle speed from the derivative of each curve at which the vote count peaks, and can further calculate the acceleration of each vehicle from the second derivative of each such curve.
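As a sketch of this variant, once the combinations belonging to one vehicle have been isolated (for example by voting over a family of quadratics instead of lines), speed and acceleration follow by differentiation; np.polyfit is used below as a hypothetical stand-in for the curve obtained from the voting.

    def speed_and_acceleration(times, positions):
        # Fit y ≈ a2*t^2 + a1*t + a0 to one vehicle's combinations.
        a2, a1, a0 = np.polyfit(times, positions, deg=2)
        t_mid = float(np.mean(times))
        speed = 2.0 * a2 * t_mid + a1  # first derivative dy/dt at t_mid
        accel = 2.0 * a2               # second derivative d^2y/dt^2
        return speed, accel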
Each block constituting the control unit 110 is configured from, for example, a CPU (Central Processing Unit) and a RAM (Random Access Memory), and its function can be realized by the CPU loading a program stored in the storage unit 180 into the RAM and executing it. Alternatively, each block constituting the control unit 110 may be configured from dedicated hardware or a combination of a plurality of pieces of hardware.
In this specification, the steps described in the flowchart include not only processing performed in time series in the described order but also processing executed in parallel or individually without necessarily being processed in time series. Needless to say, the order of the steps processed in time series can also be changed as appropriate in some cases.
DESCRIPTION OF SYMBOLS
10 Traffic volume measurement device
110 Control unit
111 Information acquisition unit
112 Setting unit
113 Output control unit
114 Position detection unit
115 Measurement unit
170 Imaging unit
180 Storage unit
190 Output unit
E1 Measurement range
F1 Vehicle front surface
P1 Voting peak point
A Vehicle travel axis
V Vehicle

Claims (9)

1. A traffic volume measurement device comprising:
an information acquisition unit that acquires a captured image of a road plane and an imaging time;
a position detection unit that detects a predetermined detection position based on a vehicle region extracted from the captured image; and
a measurement unit that measures a traffic volume based on the number of voting peaks obtained for combinations of the detection position and the imaging time.
2. The traffic volume measurement device according to claim 1, wherein the measurement unit applies a Hough transform to the combinations of the detection position and the imaging time and measures the number of voting peak points in the Hough space as the traffic volume.
3. The traffic volume measurement device according to claim 1, wherein the position detection unit detects edge features from the captured image and extracts the vehicle region based on the edge features.
4. The traffic volume measurement device according to claim 1, wherein the measurement unit identifies a vehicle speed based on the position of the voting peak point.
5. The traffic volume measurement device according to claim 1, wherein the position detection unit detects, as the detection position, a coordinate on a vehicle travel axis in a real space set to actual size.
6. The traffic volume measurement device according to claim 5, wherein the position detection unit detects, based on the vehicle region, a predetermined vertical plane orthogonal to the vehicle travel axis in the real space, and detects the coordinates of the intersection between the vehicle travel axis and the vertical plane as the detection position.
7. The traffic volume measurement device according to claim 6, wherein the position detection unit detects a vehicle front surface or a vehicle rear surface as the predetermined vertical plane.
8. The traffic volume measurement device according to claim 7, wherein the position detection unit detects the vehicle front surface as the vertical plane when the vehicle traveling direction in the real space points from the back toward the front, and detects the vehicle rear surface as the vertical plane when the vehicle traveling direction in the real space points from the front toward the back.
9. A traffic volume measurement method comprising:
acquiring a captured image of a road plane and an imaging time;
detecting a predetermined detection position based on a vehicle region extracted from the captured image; and
measuring a traffic volume based on voting peak points obtained for combinations of the detection position and the imaging time.
PCT/JP2014/059862 2013-08-21 2014-04-03 Traffic volume measurement device and traffic volume measurement method WO2015025555A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-171166 2013-08-21
JP2013171166A JP5783211B2 (en) 2013-08-21 2013-08-21 Traffic volume measuring apparatus and traffic volume measuring method

Publications (1)

Publication Number Publication Date
WO2015025555A1 true WO2015025555A1 (en) 2015-02-26

Family

ID=52483344

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/059862 WO2015025555A1 (en) 2013-08-21 2014-04-03 Traffic volume measurement device and traffic volume measurement method

Country Status (2)

Country Link
JP (1) JP5783211B2 (en)
WO (1) WO2015025555A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101573660B1 (en) * 2015-05-29 2015-12-02 아마노코리아 주식회사 Method and system of providing parking lot information using car counting system
KR101563112B1 (en) * 2015-05-29 2015-11-02 아마노코리아 주식회사 Car counting system using Omnidirectional camera and laser sensor
JP2017016460A (en) * 2015-07-02 2017-01-19 沖電気工業株式会社 Traffic flow measurement device and traffic flow measurement method
CN107644529A (en) * 2017-08-03 2018-01-30 浙江浩腾电子科技股份有限公司 A kind of vehicle queue length detection method based on motion detection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1196375A (en) * 1997-09-22 1999-04-09 Nippon Telegr & Teleph Corp <Ntt> Method and device for measuring time series image moving place and recording medium recorded with time series image moving place measuring program
JP2003085686A (en) * 2001-09-13 2003-03-20 Mitsubishi Electric Corp Traffic flow measurement image processor
US20100322476A1 (en) * 2007-12-13 2010-12-23 Neeraj Krantiveer Kanhere Vision based real time traffic monitoring

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MASAKI SUWA: "ITS -Kodo Kotsu System- Kotsuryu Keisoku no Tameno Stereo Vision", IMAGE LAB, vol. 15, no. 12, December 2004 (2004-12-01), pages 47 - 51 *

Also Published As

Publication number Publication date
JP5783211B2 (en) 2015-09-24
JP2015041187A (en) 2015-03-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14837970

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14837970

Country of ref document: EP

Kind code of ref document: A1