WO2015025555A1 - Traffic volume measurement device and traffic volume measurement method - Google Patents

Traffic volume measurement device and traffic volume measurement method

Info

Publication number
WO2015025555A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
captured image
traffic volume
traffic
unit
Prior art date
Application number
PCT/JP2014/059862
Other languages
English (en)
Japanese (ja)
Inventor
孝光 渡辺
渡辺 孝弘
Original Assignee
沖電気工業株式会社 (Oki Electric Industry Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 沖電気工業株式会社 (Oki Electric Industry Co., Ltd.)
Publication of WO2015025555A1 publication Critical patent/WO2015025555A1/fr

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/052 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30236 Traffic on road, railway or crossing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30242 Counting objects in image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats

Definitions

  • the present invention relates to a traffic volume measuring device and a traffic volume measuring method.
  • When the vehicle silhouette area cannot be stably extracted from the captured image by the background subtraction method, it may be difficult to track the vehicle silhouette area accurately. In such a situation, the number of vehicle silhouette regions that have passed through the imaging range cannot be counted accurately, and it may be difficult to measure the traffic volume accurately.
  • For example, the vehicle silhouette region may not be stably extracted due to a change in weather or a change in sunlight.
  • When the vehicle on the far side is concealed behind the vehicle on the near side, the silhouette areas of the two vehicles may overlap, and the vehicle silhouette region may not be extracted stably.
  • The present invention has been made in view of the above problems, and an object of the present invention is to provide a technique for improving the accuracy of traffic volume measurement when the traffic volume is measured based on a captured image.
  • According to an aspect of the present invention, there is provided a traffic volume measuring device comprising: an information acquisition unit that acquires a captured image of a road plane and its imaging time; a position detection unit that detects a predetermined detection position based on a vehicle region extracted from the captured image; and a measurement unit that measures the traffic volume based on the number of voting peaks obtained for combinations of the detection position and the imaging time.
  • The measurement unit may perform a Hough transform on the combinations of the detection position and the imaging time, and measure the number of voting peak points in the Hough space as the traffic volume.
  • the position detection unit may detect an edge feature from the captured image and extract the vehicle region based on the edge feature.
  • the measuring unit may specify the vehicle speed based on the position of the voting peak point.
  • the position detection unit may detect, as the detection position, coordinates on a vehicle travel axis in a real space set in accordance with an actual size.
  • The position detection unit may detect a predetermined vertical plane orthogonal to the vehicle travel axis in real space based on the vehicle region, and detect the intersection coordinate of the vehicle travel axis and the vertical plane as the detection position.
  • the position detection unit may detect a vehicle front surface or a vehicle back surface as the predetermined vertical plane.
  • The position detection unit may detect the front surface of the vehicle as the vertical plane when the vehicle traveling direction in real space is from the back toward the front, and may detect the rear surface of the vehicle as the vertical plane when the vehicle traveling direction in real space is from the front toward the back.
  • According to another aspect of the present invention, there is provided a traffic volume measuring method comprising: a step of acquiring a captured image of a road plane and its imaging time; a step of detecting a predetermined detection position based on a vehicle region extracted from the captured image; and a step of measuring the traffic volume based on the voting peak points obtained for combinations of the detection position and the imaging time.
  • A plurality of constituent elements having substantially the same functional configuration may be distinguished by attaching different letters or numbers after the same reference numeral.
  • When it is not necessary to particularly distinguish each of a plurality of constituent elements having substantially the same functional configuration, only the same reference numeral is given.
  • FIG. 1 is a diagram for explaining an outline of an embodiment of the present invention.
  • As shown in FIG. 1, a traffic volume measuring device 10 incorporating an imaging unit and a road plane exist in real space, and the traffic volume measuring device 10 is installed with its imaging direction directed at the road plane.
  • the captured image Img ′ captured by the traffic measuring device 10 shows the boundary line of the lane provided on the road. Further, as shown in FIG. 1, the center of the lens of the traffic measuring device 10 is set to the origin O.
  • FIG. 1 shows an example in which the imaging unit is incorporated in the traffic measurement device 10, but the imaging unit may instead be installed outside the traffic measurement device 10 rather than incorporated in it.
  • the traffic measurement device 10 may acquire the captured image Img ′ by receiving the captured image Img ′ transmitted from the imaging unit. Further, for example, the traffic volume measuring device 10 may acquire the captured image Img ′ by reading the captured image Img ′ recorded on the recording medium.
  • a technique for measuring traffic volume from a captured image Img ′ obtained by imaging a road plane by an imaging unit has been proposed.
  • In general, a vehicle silhouette area is extracted from the captured image Img ′ based on the background difference method, and the traffic volume is measured by counting the number of vehicle silhouette areas that have passed through the imaging range while tracking them.
  • However, when the imaging unit is installed outdoors, there is a possibility that the vehicle silhouette region cannot be stably extracted due to a change in weather or a change in sunlight.
  • In addition, when the vehicle on the far side is concealed behind the vehicle on the near side, the silhouette areas of the two vehicles may overlap, and the vehicle silhouette region may not be extracted stably.
  • FIG. 2 is a diagram illustrating a functional configuration example of the traffic volume measuring device 10 according to the embodiment of the present invention.
  • the traffic volume measuring device 10 according to the embodiment of the present invention includes a control unit 110, an imaging unit 170, a storage unit 180, and an output unit 190.
  • the control unit 110 has a function of controlling the entire operation of the traffic measuring device 10.
  • the imaging unit 170 has a function of acquiring a captured image by imaging a real space, and is configured by, for example, a monocular camera.
  • the storage unit 180 can store a program and data for operating the control unit 110. In addition, the storage unit 180 can temporarily store various data necessary in the course of the operation of the control unit 110.
  • the output unit 190 has a function of performing output in accordance with control by the control unit 110.
  • The type of the output unit 190 is not particularly limited; it may be a device that records the measurement result, a device that transmits the measurement result to another device via a communication line, a display device, or an audio output device.
  • In FIG. 2, the imaging unit 170, the storage unit 180, and the output unit 190 exist inside the traffic measurement device 10, but any or all of the imaging unit 170, the storage unit 180, and the output unit 190 may be provided outside the traffic measurement device 10.
  • the control unit 110 includes an information acquisition unit 111, a setting unit 112, an output control unit 113, a position detection unit 114, and a measurement unit 115. Details of these functional units included in the control unit 110 will be described later.
  • Calibration can be performed by the traffic measuring device 10 according to the embodiment of the present invention. More specifically, a process of calculating a road plane formula (hereinafter also referred to as the "road plane equation") and a process of calculating the traveling direction of the vehicle may be performed as the calibration.
  • calibration that can be performed by the setting unit 112 will be described with reference to FIGS. 3 and 4.
  • FIG. 3 is a diagram showing parameters used by the setting unit 112.
  • The setting unit 112 first calculates, as a parameter, the size pix_dot of the captured image Img ′ corresponding to one pixel of the image sensor, based on the size of the image sensor constituting the imaging unit 170 and the size of the captured image Img ′ provided to the control unit 110.
  • the captured image Img ′ is generated based on the captured image Img captured on the imaging surface of the image sensor that is separated from the origin O by the focal length. Further, the captured image Img ′ provided to the control unit 110 can be acquired by the information acquisition unit 111 and used by the setting unit 112.
  • The imaging element is, for example, a CCD (Charge-Coupled Device), but may instead be a CMOS (Complementary Metal-Oxide-Semiconductor) sensor or the like.
  • The setting unit 112 can calculate pix_dot by the following (Equation 1): the CCD size, represented by the length of the CCD diagonal, is divided by the square root of the sum of the squares of the vertical and horizontal sizes of the captured image Img ′, that is, pix_dot = CCD_size / √(W² + H²), where W and H are the horizontal and vertical sizes of Img ′ in pixels.
  • the calculation of the parameter pix_dot by such a method is only an example, and the parameter pix_dot may be calculated by another method.
  • the vertical or horizontal length of the CCD may be used instead of the diagonal line of the CCD.
  • The CCD size can easily be obtained from the imaging unit 170, and the size of the captured image Img ′ can be obtained from, for example, the storage unit 180. Based on these sizes, the control unit 110 can grasp the correspondence between the three-dimensional real-space coordinates of the captured image Img formed on the imaging surface of the CCD and the two-dimensional coordinates of the captured image Img ′ provided to the control unit 110. That is, based on this correspondence, the control unit 110 can obtain, from the two-dimensional coordinates of the captured image Img ′, the three-dimensional real-space coordinates of the corresponding point of the captured image Img on the imaging surface of the CCD.
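As an illustration, (Equation 1) and the pixel-to-sensor-coordinate correspondence described above can be sketched as follows. This is a minimal sketch, not the patent's implementation: the function names, the millimeter units, and the axis convention (x to the right, y along the optical axis, z upward, image origin at the top-left corner) are all assumptions made for the example.

```python
import math

def pixel_size(ccd_diagonal_mm, image_width_px, image_height_px):
    """(Equation 1): real size of one pixel of the captured image Img',
    obtained by dividing the CCD diagonal by the pixel-count diagonal."""
    return ccd_diagonal_mm / math.sqrt(image_width_px ** 2 + image_height_px ** 2)

def image_to_sensor_coords(u, v, width, height, focal_len_mm, pix_dot):
    """Map 2D pixel coordinates (u, v) of Img' to 3D coordinates on the
    imaging surface (y = focal length) of the camera coordinate system,
    with the lens center at the origin O."""
    x = (u - width / 2) * pix_dot          # horizontal offset from center
    z = (height / 2 - v) * pix_dot         # vertical offset (image v grows down)
    return (x, focal_len_mm, z)
```

For a 640 x 480 image on a sensor with an 8 mm diagonal, pixel_size gives 8 / 800 = 0.01 mm per pixel, and the image center maps to the point (0, f, 0) on the imaging surface.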
  • Calibration can be performed using the parameters calculated in this way.
  • the calibration performed using the parameters by the setting unit 112 will be described with reference to FIG.
  • FIG. 4 is a diagram for explaining the function of the setting unit 112.
  • As shown in FIG. 4, an xyz coordinate system is set in real space, and a traveling direction vector v, a vector indicating the traveling direction of the vehicle, is defined as (vx, vy, vz).
  • a point (focal point) that is separated from the origin O by the focal length f is set on the y axis, and a plane that passes through this focal point and is perpendicular to the y axis is defined as an imaging surface.
  • the captured image Img is captured on the imaging surface, but the setting of each coordinate axis is not limited to such an example.
  • The two points through which the first straight line passes are T1 (x1, y1, z1) and T4 (x4, y4, z4), and the two points through which the second straight line passes are T2 (x2, y2, z2) and T3 (x3, y3, z3).
  • The coordinates of the intersections of the road plane with the straight lines connecting the origin O to each of T1, T2, T3 and T4 are t1·T1, t2·T2, t3·T3 and t4·T4, where t1 to t4 are scalar coefficients.
  • The setting unit 112 can perform calibration based on the following (Condition 1).
  • the setting unit 112 can derive the relational expressions shown in the following (Equation 2) and (Equation 3) based on the various data acquired as described above and (Condition 1).
  • the setting unit 112 can derive the relational expression shown in the following (Formula 5) based on the various data acquired as described above and (Condition 3).
  • the setting unit 112 can derive the relational expressions shown in the following (Expression 6) and (Expression 7) based on the various data acquired as described above and (Condition 4).
  • R1 is a value indicating how many times as large the distance from the origin O to Q1 (xr1, yr1, zr1) on the road plane is as the distance from the origin O to Q1 ′ (xs1, f, zs1) on the captured image Img.
  • Similarly, R2 is a value indicating how many times as large the distance from the origin O to Q2 (xr2, yr2, zr2) on the road plane is as the distance from the origin O to Q2 ′ (xs2, f, zs2) on the captured image Img. Therefore, the relational expression shown in the following (Formula 8) can be derived.
  • the setting unit 112 can calculate the measured value Q_dis ′ of the distance between two points (Q1 and Q2) on the road plane from the relational expression shown in (Formula 8) by the following (Formula 9).
  • The setting unit 112 can calculate R1, R2, R3, and R4 for which the difference between the measured value Q_dis ′ and the known magnitude Q_dis is smallest, based on (Formula 1) to (Formula 9).
  • The setting unit 112 can also calculate the road plane equation by another method. For example, if the distance between two parallel straight lines on the road plane is known, the road plane formula can be calculated without using (Condition 2), by using the known distance between the two parallel straight lines.
  • The setting unit 112 can also calculate the traveling direction vector v (vx, vy, vz). More specifically, the setting unit 112 can calculate the traveling direction vector v by calculating the direction of at least one of the two parallel straight lines on the road plane. For example, the setting unit 112 may calculate the difference between the coordinates t2·T2 and the coordinates t3·T3 as the traveling direction vector v, or the difference between the coordinates t1·T1 and the coordinates t4·T4 as the traveling direction vector v.
  • the setting unit 112 can perform calibration by the method described above.
  • the setting unit 112 may set a vehicle travel axis A that is parallel to the traveling direction vector v (vx, vy, vz). If the xyz coordinates are set according to the actual size of the real space as described above, the vehicle travel axis A can also be set according to the actual size in the real space.
  • The setting unit 112 may also set a measurement range E1. In that case, the traffic volume may be measured based on vehicle regions detected within the measurement range E1.
  • the output control unit 113 may cause the output unit 190 to output various information set by the setting unit 112.
  • the captured image Img is provided from the imaging unit 170 to the control unit 110. Furthermore, the imaging unit 170 has a timekeeping function. When the imaging time of the captured image Img is detected by the imaging unit 170, the detected imaging time is provided to the control unit 110. The information acquisition unit 111 acquires the captured image Img and the imaging time provided from the imaging unit 170 in this way.
  • the timekeeping function may be included in the control unit 110, or the time obtained by the control unit 110 may be used.
  • the position detection unit 114 detects a predetermined detection position based on the vehicle area extracted from the captured image Img.
  • the predetermined detection position may be a coordinate on the vehicle travel axis A. That is, the position detection unit 114 may detect coordinates on the vehicle travel axis A as detection positions.
  • the detection by the position detection unit 114 will be described in more detail with reference to FIG.
  • FIG. 5 is a diagram for explaining an example of functions of the position detection unit 114. Note that the example illustrated in FIG. 5 is merely an example of detection by the position detection unit 114, and thus the detection by the position detection unit 114 is not limited to the example illustrated in FIG.
  • the vehicle V is traveling on the road plane.
  • the position detection unit 114 extracts a vehicle region from the captured image Img.
  • the vehicle area may be extracted in any way.
  • For example, the vehicle region may be a region specified from a silhouette extracted by taking the difference between the captured images Img of preceding and following frames in which the vehicle V appears, or from a silhouette extracted by taking the difference between a background image and the captured image Img.
  • If the position detection unit 114 further detects edge features from the silhouette extracted by the above processing and extracts the vehicle region based on those edge features, vehicle region extraction that is more robust against changes in the imaging environment can be performed.
  • The position detection unit 114 may detect edge features based on the silhouette extracted by the above processing, and extract a collection of the detected edge features as a vehicle region. However, if only the edge detection process is performed, a region in which one vehicle V appears may be divided into a plurality of edge regions. Therefore, the position detection unit 114 may combine edge features that are at a distance less than a threshold into one edge region. Specifically, the position detection unit 114 performs a labeling process on each edge feature, and combines labeled edge features into one edge region if they are at a distance less than the threshold.
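As an illustration of the merging step just described, the following sketch groups edge-feature coordinates whose mutual distance is below a threshold into one region. A simple union-find stands in for the labeling process; the function name, the point representation, and the O(n²) pairing are assumptions for the example, not the patent's implementation.

```python
def merge_edge_features(points, dist_threshold):
    """Group edge-feature coordinates (x, y) into candidate vehicle regions
    by joining any two features closer than dist_threshold."""
    parent = list(range(len(points)))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 < dist_threshold ** 2:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri  # merge the two edge regions

    regions = {}
    for i in range(len(points)):
        regions.setdefault(find(i), []).append(points[i])
    return list(regions.values())
```

With a threshold of 2, the features (0, 0), (1, 0), (2, 1) chain into one region while (10, 10), (11, 10) form another, so one vehicle split by edge detection is recovered as a single region.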
  • the position detection unit 114 detects the detection position based on the vehicle area as described above.
  • the position detection unit 114 may detect a predetermined vertical plane orthogonal to the vehicle travel axis A based on the vehicle region, and detect an intersection coordinate between the vehicle travel axis A and the predetermined vertical plane as a detection position.
  • the predetermined vertical plane is not limited, but may be the front surface of the vehicle or the back surface of the vehicle. That is, the position detection unit 114 may detect the front surface or the rear surface of the vehicle as a predetermined vertical plane.
  • The position detection unit 114 may switch between detecting the front surface and the rear surface of the vehicle as the predetermined vertical plane depending on the situation. For example, the position detection unit 114 may detect the front surface of the vehicle as the vertical plane when the traveling direction vector v points from the back toward the front, and the rear surface of the vehicle as the vertical plane when the traveling direction vector v points from the front toward the back. Since the surface on the near side is likely to appear more clearly in the captured image Img than the surface on the far side, this is expected to further improve the detection accuracy.
  • the position detection unit 114 detects the vehicle front lowest point m1 ′ based on the vehicle region extracted from the captured image Img.
  • The vehicle front lowest point m1 ′ is the point of the vehicle body of the vehicle V whose height from the ground is lowest.
  • the vehicle front lowest point m1 ′ may be detected in any way.
  • For example, when the traveling direction vector v points from the right back toward the left front, a point (for example, the middle point) on the lower edge line of the vehicle region (its lower-left line segment) is detected as the vehicle front lowest point m1 ′.
  • The position detection unit 114 then detects, as the vehicle front surface F1, the plane that is perpendicular to the vehicle travel axis A and passes through the intersection point m0, which is obtained by dropping a vertical line of the minimum ground height h from the vehicle front lowest point m1 in real space onto the road plane.
  • the minimum ground height h may be a predetermined value or a value determined based on the minimum ground height detected so far.
  • a predetermined value is used as the minimum ground height h, for example, an average value of the minimum ground heights of a plurality of vehicles may be used as the predetermined value.
  • In FIG. 5, the distance between the ground contact point D0, where the vehicle V contacts the road plane, and the lower plane of the vehicle body is shown as the minimum ground height h.
  • When the traveling direction vector v points from the left front toward the right back, a point (for example, the middle point) on the lower edge line of the vehicle region (its lower-left line segment) is detected, and the rear surface of the vehicle can be detected by a similar method.
  • Similarly, when the lower edge line on the front of the vehicle is the lower-right line segment of the vehicle region, a point on that line (for example, the middle point) is detected as the lowest point m1 ′, and the front surface of the vehicle can be detected by the same method.
  • When the traveling direction vector v points from the right front toward the left back, a point (for example, the middle point) on the lower edge line on the back of the vehicle (the lower-right line segment of the vehicle region) is detected as the lowest point m1 ′, and the rear surface of the vehicle can be detected by the same method.
  • the detection position can be detected based on the vehicle area by the method described in the above example. If the detection position is detected by the position detection unit 114, a combination of the detection position and the imaging time is obtained.
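The geometric steps above, dropping m1 by the minimum ground height h onto the road plane and reading off a coordinate on the vehicle travel axis A, can be sketched as follows. This is a minimal sketch under assumed representations (points and unit vectors as 3-tuples); the function names are not from the patent.

```python
def foot_point(m1, plane_normal, h):
    """Drop a vertical line of the minimum ground height h from the vehicle
    front lowest point m1 onto the road plane; plane_normal is assumed to be
    a unit vector pointing from the road plane toward m1."""
    return tuple(p - h * n for p, n in zip(m1, plane_normal))

def travel_axis_coordinate(m0, axis_origin, axis_dir):
    """Detection position: the signed coordinate of the road-plane point m0
    along the vehicle travel axis A (axis_dir assumed to be unit length),
    computed as the dot product (m0 - axis_origin) . axis_dir."""
    return sum((p - o) * d for p, o, d in zip(m0, axis_origin, axis_dir))
```

Since the xyz coordinates are set according to the actual size of the real space, the returned scalar is a real-world distance along the travel axis, which is exactly what is paired with the imaging time in the next step.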
  • FIG. 6 is a diagram illustrating an example of a combination of a detection position and an imaging time. Subsequently, the measurement unit 115 measures the traffic volume from the combination of the detection position and the imaging time obtained in this way.
  • The measurement unit 115 votes for combinations in which the detection position and the imaging time follow a fixed rule; when the vote count of a combination reaches a value equal to or greater than a certain value (a peak point), the set of combinations can be regarded as having been detected based on one vehicle region. Since each voting peak can be regarded as one vehicle that has passed, the measurement unit 115 may measure the number of voting peaks as the traffic volume. A peak point may be defined as the case where the vote count exceeds a threshold value, or the case where the vote count reaches a maximum.
  • Since the number of voting peaks is measured as the traffic volume, there is no need to count the vehicle silhouette areas that have passed through the imaging range while tracking them. Therefore, even when a situation occurs in which it is difficult to track the vehicle silhouette region accurately, the traffic volume can still be measured accurately; this method thus improves the accuracy of traffic volume measurement based on a captured image.
  • When the vehicle moves at a substantially constant speed, the combination of the detection position and the imaging time changes along a straight line. The measurement unit 115 therefore votes for the set of straight lines passing through each combination, and a straight line whose vote count reaches a peak can be regarded as the straight line satisfied by the set of combinations detected based on one vehicle region. Accordingly, the measurement unit 115 may measure the number of straight lines whose vote counts peak as the traffic volume, and can also identify the slope of each such straight line as the vehicle speed.
  • the measurement unit 115 may perform a Hough transform on the combination of the detection position and the imaging time in order to detect a straight line having a peak vote count.
  • If the imaging time is taken as X and the detection position as Y, the XY plane can be converted to the θ-ρ plane by a Hough-transform relationship in which θ corresponds to the angle between the X axis and a straight line in the XY plane (for example, ρ = Y cos θ − X sin θ, which is constant for all points on a line of angle θ).
  • FIG. 7 is a diagram illustrating a result obtained by performing Hough transform on the combination of the detection position and the imaging time illustrated in FIG.
  • In FIG. 7, voting peak points P1 and P2 are illustrated, and the measurement unit 115 may measure "2", the number of voting peak points, as the traffic volume.
  • The measuring unit 115 may specify the vehicle speed based on the position of a voting peak point. More specifically, since θ is equivalent to the angle between the X axis and the straight line in the XY plane, and the vehicle speed is equivalent to the slope of that straight line, the measurement unit 115 can specify the vehicle speed as tan θ using the θ of the voting peak point. In the example illustrated in FIG. 7, the measurement unit 115 can specify the speeds of the two vehicles as tan θ using the θ of each of the voting peak points P1 and P2.
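The voting scheme can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the bin resolutions, the suppression window, and the line parametrization ρ = Y cos θ − X sin θ (chosen so that θ is the angle of the straight line in the XY plane and tan θ its slope, matching the speed identification above) are all assumptions.

```python
import math
from collections import defaultdict

def hough_traffic_count(samples, theta_steps=180, rho_res=0.25,
                        min_votes=5, suppress=5):
    """Count (time, position) samples as Hough voting peaks.

    Each sample (x, y) votes, for every quantized angle theta, for the cell
    (theta, rho) with rho = y*cos(theta) - x*sin(theta). Samples that lie on
    one straight line (one vehicle at near-constant speed) concentrate their
    votes in one cell, so each peak cell counts as one vehicle, and
    tan(theta) of the peak gives that vehicle's speed.
    """
    acc = defaultdict(int)
    for x, y in samples:
        for i in range(theta_steps):
            theta = math.radians(i - theta_steps // 2)  # -90 .. +89 degrees
            rho = y * math.cos(theta) - x * math.sin(theta)
            acc[(i, round(rho / rho_res))] += 1

    # Greedy non-maximum suppression: take the strongest cells first and
    # skip cells that are close to an already accepted peak.
    peaks = []
    for (i, r), votes in sorted(acc.items(), key=lambda kv: -kv[1]):
        if votes < min_votes:
            break
        if all(abs(i - pi) > suppress or abs(r - pr) > suppress
               for pi, pr in peaks):
            peaks.append((i, r))

    speeds = [math.tan(math.radians(i - theta_steps // 2)) for i, _ in peaks]
    return len(peaks), speeds
```

With samples from two vehicles, one at speed 1 and one at speed 2, the accumulator develops two peak neighborhoods, and the suppressed peak list yields a count of 2 with speeds close to tan 45° and tan 63°.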
  • FIG. 8 is a flowchart showing an operation example of the traffic volume measuring apparatus 10 according to the embodiment of the present invention.
  • The flowchart shown in FIG. 8 merely shows an example of the operation of the traffic volume measuring device 10; the operation of the device is therefore not limited to the operation example shown by the flowchart of FIG. 8.
  • First, the information acquisition unit 111 acquires a captured image in which a road plane is captured by the imaging unit 170 and an imaging time corresponding to the time at which the captured image was captured (step S11). Subsequently, the position detection unit 114 extracts a vehicle region from the captured image (step S12), and detects a predetermined detection position based on the vehicle region (step S13).
  • the measurement unit 115 performs Hough transform on the combination of the detection position and the imaging time, and measures the number of voting peak points on the ⁇ - ⁇ plane as the traffic volume (step S15). Further, the measuring unit 115 identifies the vehicle speed based on the position of the voting peak point on the ⁇ - ⁇ plane (step S16). More specifically, the measurement unit 115 can specify the vehicle speed by tan ⁇ using ⁇ of the voting peak point.
  • As described above, according to the embodiment of the present invention, there is provided a traffic volume measuring device 10 including: an information acquisition unit 111 that acquires a captured image of a road plane and its imaging time; a position detection unit 114 that detects a predetermined detection position based on a vehicle region extracted from the captured image; and a measurement unit 115 that measures, as the traffic volume, the number of voting peaks obtained for combinations of the detection position and the imaging time.
  • Since the number of voting peaks is measured as the traffic volume, it is not necessary to count the vehicle silhouette areas that have passed through the imaging range while tracking them. Therefore, even when a situation occurs in which it is difficult to track the vehicle silhouette region accurately, the traffic volume can be measured accurately, and the accuracy of traffic volume measurement based on a captured image can be improved.
  • Although the description above assumes that the vehicle travels on a straight road plane, the traveling state of the vehicle is not limited to this example, and the vehicle may travel on a curved road plane.
  • Even in such a case, the detection position based on one vehicle region is estimated to change along a straight line with a change in imaging time, as in the above example.
  • the measurement range E1 may be set along a curved road plane.
  • In addition, the speed of the vehicle may change instead of remaining substantially constant. In such a case, the detection position based on one vehicle region changes substantially along a curve as the imaging time changes.
  • In such a case, the measurement unit 115 may measure, as the traffic volume, the number of curves whose vote counts reach a peak. Moreover, the measurement unit 115 can specify each vehicle speed by differentiating each curve whose vote count peaks, and can also calculate the acceleration of each vehicle by the second-order differentiation of each such curve.
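The differentiation and second-order differentiation mentioned above can be illustrated with central finite differences over the (imaging time, detection position) samples of one vehicle. This is a stand-in sketch under the assumption of a uniform time step; the patent itself differentiates the peaked curve, not raw samples, and the function name is invented for the example.

```python
def speed_and_acceleration(times, positions):
    """First and second derivatives of a position-vs-time curve by central
    finite differences, evaluated at the interior sample points.
    A uniform time step is assumed."""
    dt = times[1] - times[0]
    speeds = [(positions[i + 1] - positions[i - 1]) / (2 * dt)
              for i in range(1, len(positions) - 1)]
    accels = [(positions[i + 1] - 2 * positions[i] + positions[i - 1]) / dt ** 2
              for i in range(1, len(positions) - 1)]
    return speeds, accels
```

For a uniformly accelerating vehicle with positions 0, 1, 4, 9, 16 at times 0 to 4, the sketch recovers speeds 2, 4, 6 at the interior points and a constant acceleration of 2, since central differences are exact for quadratic motion.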
  • each block constituting the control unit 110 is realized, for example, by a CPU (Central Processing Unit) and a RAM (Random Access Memory): the CPU loads a program stored in the storage unit 180 into the RAM and executes it, whereby the function of each block is realized.
  • each block constituting the control unit 110 may also be configured by dedicated hardware, or by a combination of several pieces of hardware.
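As the bullets above describe, when a vehicle travels at a roughly constant speed its detection positions fall on a straight line y = v*t + b in (imaging time, position) space, so counting voting peaks amounts to a Hough-style line count. The following is a minimal sketch with synthetic data; the function name, bin layout, and peak threshold are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def count_vehicles_by_voting(detections, v_bins, b_bins, peak_threshold):
    """Count voting peaks in a (speed, intercept) accumulator.

    detections: iterable of (t, y) pairs, the imaging time and the
    detection position obtained from a vehicle region. A vehicle moving
    at a roughly constant speed v satisfies y = v * t + b, so all of its
    detections vote for the same (v, b) cell; the number of peak cells
    then approximates the number of vehicles.
    """
    v_bins = np.asarray(v_bins, dtype=float)
    b_bins = np.asarray(b_bins, dtype=float)
    v_centers = 0.5 * (v_bins[:-1] + v_bins[1:])
    acc = np.zeros((len(v_centers), len(b_bins) - 1), dtype=int)
    for t, y in detections:
        # Each detection votes for every candidate speed; the intercept
        # b = y - v * t selects the accumulator column receiving the vote.
        for i, v in enumerate(v_centers):
            j = np.searchsorted(b_bins, y - v * t) - 1
            if 0 <= j < acc.shape[1]:
                acc[i, j] += 1
    return int(np.count_nonzero(acc >= peak_threshold))

# Two synthetic vehicles: y = 10 * t and y = 20 * t + 5, observed at t = 0..4.
v_bins = np.arange(-2.5, 35.0, 5.0)      # speed bin edges, centers 0, 5, ..., 30
b_bins = np.arange(-102.5, 107.5, 5.0)   # intercept bin edges
detections = [(t, 10 * t) for t in range(5)] + [(t, 20 * t + 5) for t in range(5)]
print(count_vehicles_by_voting(detections, v_bins, b_bins, peak_threshold=4))  # -> 2
```

Because every detection of one constant-speed vehicle votes for the same (speed, intercept) cell, each vehicle contributes one peak, and no per-frame tracking of the silhouette region is needed.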
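The curve-based variant above notes that each vehicle's speed follows from the first derivative of its voting-peak curve and its acceleration from the second derivative. A minimal numerical sketch, using synthetic positions for one hypothetical vehicle (initial speed 5 m/s, constant acceleration 2 m/s^2), not data from the patent:

```python
import numpy as np

# Synthetic voting-peak curve for one hypothetical vehicle:
# position y(t) = 5*t + t**2, i.e. initial speed 5 m/s, acceleration 2 m/s^2.
t = np.linspace(0.0, 4.0, 9)
y = 5.0 * t + t ** 2

speed = np.gradient(y, t)       # first derivative of the curve -> speed
accel = np.gradient(speed, t)   # second derivative -> acceleration

# Away from the curve ends, the central differences recover the true values.
print(speed[4], accel[4])       # -> 9.0 2.0
```

At the mid-sample t = 2.0 the true speed is 5 + 2*2 = 9 m/s and the acceleration is 2 m/s^2, which the interior central differences reproduce exactly for this quadratic curve.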

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The problem addressed by the present invention is to provide a technology that improves the accuracy of traffic volume measurement when the traffic volume is measured on the basis of a captured image. The solution of the present invention is a traffic volume measuring device (10) comprising: an information acquisition unit (111) that acquires a captured image in which a road surface is imaged, together with its imaging time; a position detection unit (114) that detects a prescribed detection position on the basis of a vehicle region extracted from the captured image; and a measurement unit (115) that measures, as the traffic volume, the number of voting peaks obtained for combinations of the detection position and the imaging time.
PCT/JP2014/059862 2013-08-21 2014-04-03 Dispositif de mesure de volume de trafic et procédé de mesure de volume de trafic WO2015025555A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013171166A JP5783211B2 (ja) 2013-08-21 2013-08-21 交通量計測装置および交通量計測方法
JP2013-171166 2013-08-21

Publications (1)

Publication Number Publication Date
WO2015025555A1 true WO2015025555A1 (fr) 2015-02-26

Family

ID=52483344

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/059862 WO2015025555A1 (fr) 2013-08-21 2014-04-03 Dispositif de mesure de volume de trafic et procédé de mesure de volume de trafic

Country Status (2)

Country Link
JP (1) JP5783211B2 (fr)
WO (1) WO2015025555A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101563112B1 (ko) * 2015-05-29 2015-11-02 아마노코리아 주식회사 전방위 카메라와 레이저센서를 이용한 차량 카운트 시스템
KR101573660B1 (ko) * 2015-05-29 2015-12-02 아마노코리아 주식회사 차량 카운트 시스템을 이용한 주차장 정보 제공 방법 및 주차장 정보 제공 시스템
JP2017016460A (ja) * 2015-07-02 2017-01-19 沖電気工業株式会社 交通流計測装置および交通流計測方法
CN107644529A (zh) * 2017-08-03 2018-01-30 浙江浩腾电子科技股份有限公司 一种基于运动检测的车辆排队长度检测方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1196375A (ja) * 1997-09-22 1999-04-09 Nippon Telegr & Teleph Corp <Ntt> 時系列画像動き場計測方法および装置ならび時系列画像動き場計測プログラムを記録した記録媒体
JP2003085686A (ja) * 2001-09-13 2003-03-20 Mitsubishi Electric Corp 交通流計測画像処理装置
US20100322476A1 (en) * 2007-12-13 2010-12-23 Neeraj Krantiveer Kanhere Vision based real time traffic monitoring


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MASAKI SUWA: "ITS -Kodo Kotsu System- Kotsuryu Keisoku no Tameno Stereo Vision", IMAGE LAB, vol. 15, no. 12, December 2004 (2004-12-01), pages 47 - 51 *

Also Published As

Publication number Publication date
JP2015041187A (ja) 2015-03-02
JP5783211B2 (ja) 2015-09-24

Similar Documents

Publication Publication Date Title
EP3057063B1 (fr) Dispositif de détection d'objet et véhicule qui l'utilise
US9177196B2 (en) Vehicle periphery monitoring system
CN110826499A (zh) 物体空间参数检测方法、装置、电子设备及存储介质
EP1796043B1 (fr) Détection d'objets
US9025818B2 (en) Vehicle type identification device
JP5804185B2 (ja) 移動物体位置姿勢推定装置及び移動物体位置姿勢推定方法
JP6171593B2 (ja) 視差図からの対象追跡方法及びシステム
CN105551020B (zh) 一种检测目标物尺寸的方法及装置
US11783507B2 (en) Camera calibration apparatus and operating method
CN104677330A (zh) 一种小型双目立体视觉测距系统
JP5783211B2 (ja) 交通量計測装置および交通量計測方法
WO2014002692A1 (fr) Caméra stéréo
JP6543935B2 (ja) 視差値導出装置、機器制御システム、移動体、ロボット、視差値導出方法、およびプログラム
CN104471436A (zh) 用于计算对象的成像比例的变化的方法和设备
JP5981284B2 (ja) 対象物検出装置、及び対象物検出方法
JP6699323B2 (ja) 電車設備の三次元計測装置及び三次元計測方法
JP2017016460A (ja) 交通流計測装置および交通流計測方法
JP2014044730A (ja) 画像処理装置
JPWO2017154305A1 (ja) 画像処理装置、機器制御システム、撮像装置、画像処理方法、及び、プログラム
JP5655038B2 (ja) 移動体認識システム、移動体認識プログラム、及び移動体認識方法
KR20160063039A (ko) 3차원 데이터를 이용한 도로 인식 방법
JP2013148355A (ja) 車両位置算出装置
WO2022270183A1 (fr) Dispositif de calcul et procédé de calcul de vitesse
JP2013142668A (ja) 位置推定装置及び位置推定方法
JP4876676B2 (ja) 位置計測装置、方法及びプログラム、並びに移動量検出装置、方法及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14837970

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14837970

Country of ref document: EP

Kind code of ref document: A1