US20140147008A1 - Vehicle detection apparatus and vehicle detection method - Google Patents

Vehicle detection apparatus and vehicle detection method Download PDF

Info

Publication number
US20140147008A1
Authority
US
United States
Prior art keywords
vehicle
line
segment components
segment
windshield
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/170,284
Other versions
US9196160B2 (en)
Inventor
Yasuhiro Aoki
Toshio Sato
Yusuke Takahashi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Assigned to KABUSHIKI KAISHA TOSHIBA (assignment of assignors' interest). Assignors: AOKI, YASUHIRO; SATO, TOSHIO; TAKAHASHI, YUSUKE
Publication of US20140147008A1
Application granted
Publication of US9196160B2
Legal status: Expired - Fee Related (adjusted expiration)

Classifications

    • G08G 1/0175: Traffic control systems for road vehicles; detecting movement of traffic to be counted or controlled; identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G01B 11/002: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G01C 11/06: Photogrammetry or videogrammetry; interpretation of pictures by comparison of two or more pictures of the same area
    • G06T 7/593: Image analysis; depth or shape recovery from stereo images
    • G06T 7/75: Image analysis; determining position or orientation of objects or cameras using feature-based methods involving models
    • G08G 1/123: Traffic control systems for road vehicles indicating the position of vehicles
    • G06T 2207/10012: Image acquisition modality; stereo images
    • G06T 2207/30236: Subject of image; traffic on road, railway or crossing
    • G06V 10/243: Image preprocessing; aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
    • G06V 20/10: Scenes; terrestrial scenes
    • G06V 20/64: Type of objects; three-dimensional objects
    • G06V 2201/08: Detecting or categorising vehicles

Definitions

  • The controller 150 of the vehicle detection apparatus 100 described below has the following functions (f1) to (f4) to execute the processing of specifying the position of the vehicle 40 in real space.
  • The function (f1) is a vehicle detecting function of detecting entry of the vehicle 40 from the left and right images obtained from the camera 20, which is formed of left and right digital cameras. Background subtraction and pattern matching are used for this function. However, since the object detected by the pole sensor 10 is a vehicle in many cases, the image-based vehicle detecting function can be omitted.
  • The function (f2) is a line-segment component extracting function of extracting, for each of the left and right images of the vehicle 40, a plurality of line-segment components in the image that indicate the boundary between the windshield region and the vehicle body.
  • For example, for the photographed image illustrated in FIG. 3(A), the line-segment component extracting function extracts a plurality of line-segment components e1 to e6 indicating the boundary between the windshield region and the vehicle body, as illustrated in FIG. 3(B).
  • The function (f3) is a polygon approximation function of performing, for each of the left and right images, approximation with a polygon that forms a closed loop using some of the line-segment components extracted from the image.
  • The closed loop corresponding to the windshield region is formed of a combination of the line-segment components e1 to e6 extracted by the line-segment component extracting function.
  • The line-segment component extracting function and the polygon approximation function can be achieved by, for example, the method disclosed in Japanese Patent Application No. 2010-208539.
  • The function (f4) first aligns at least one line-segment component, among the line-segment components e1 to e6 forming the closed loop in each of the left and right images, between the two images.
  • The function (f4) is then a measuring function of measuring the position of the closed loop as the position of the vehicle, based on coordinate information between the aligned line-segment components and photographing position information (the position where the camera 20 is placed) indicating where the left and right images were photographed.
  • The measuring function of (f4) may instead align all the line-segment components forming the closed loop.
  • Aligning all the components, by matching the entire windshield regions of the left and right images to correlate the left and right representative points with each other, is useful when the windshield of the vehicle 40 has many curved faces and it is difficult to determine a representative line-segment component.
  • When the vehicle detection apparatus 100 is powered on, it repeatedly executes the steps (ST2 to ST5) illustrated in FIG. 4 until the power is turned off. These steps are executed by the controller 150.
  • Prior to startup of the vehicle detection apparatus 100, the pole sensor 10 and the camera 20 are started. The pole sensor 10 thereby starts monitoring entry of the vehicle 40 into the ETC lane, and notifies the vehicle detection apparatus 100 of each detection result until the power is turned off.
  • The camera 20 starts photographing at a predetermined frame rate, and transmits the obtained image data to the vehicle detection apparatus 100 until the power is turned off (ST1). The controller 150 thereby receives the image data obtained by the camera 20.
  • The controller 150 executes the vehicle detecting function of determining whether the object detected by the pole sensor 10 is a vehicle or not, in response to notification from the pole sensor 10 through the network interface 140. Specifically, the controller 150 detects entry of the vehicle 40 based on the results of background subtraction and pattern matching performed on the left and right images obtained from the camera 20 (ST2).
  • The left and right digital cameras of the camera 20 are placed such that the windshield region can be photographed in at least a plurality of frames while the vehicle 40 runs through the angle of view of the camera.
  • The controller 150 presets an observed region on the screen, for which it determines whether a vehicle has entered or not, and detects changes in the image in that region by background subtraction or the like.
  • The controller 150 then determines whether the detected object is a vehicle or not, by setting a part including the observed region as a search region and matching the search region with a vehicle front pattern (lights, front grille, and number plate) prepared in advance, as sketched below. As another method, a number plate, being a characteristic feature of vehicles, may be detected.
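  • A minimal sketch of this entry-detection step (ST2) is given below, assuming an OpenCV pipeline; the region coordinates, thresholds, and function names are illustrative assumptions rather than values from the patent.

```python
# Hedged sketch of ST2: background subtraction over a preset observed region,
# followed by template matching against a prepared vehicle front pattern.
import cv2
import numpy as np

OBSERVED_ROI = (100, 200, 400, 300)   # x, y, w, h of the observed region (assumed)
CHANGE_RATIO = 0.15                   # fraction of changed pixels that triggers a check
MATCH_SCORE = 0.7                     # template-matching acceptance threshold (assumed)

def vehicle_entered(frame_gray, background_gray, front_template):
    x, y, w, h = OBSERVED_ROI
    roi = frame_gray[y:y + h, x:x + w]
    bg = background_gray[y:y + h, x:x + w]

    # Background subtraction: count pixels that differ markedly from the background.
    diff = cv2.absdiff(roi, bg)
    if np.count_nonzero(diff > 30) / diff.size < CHANGE_RATIO:
        return False  # no significant change in the observed region

    # Pattern matching: look for a vehicle front pattern (lights, grille, plate).
    res = cv2.matchTemplate(frame_gray, front_template, cv2.TM_CCOEFF_NORMED)
    return float(res.max()) >= MATCH_SCORE
```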
  • When entry of the vehicle 40 is detected, the controller 150 proceeds to the following step of extracting line segments of the windshield region (YES in ST2, then ST3). When no entry of the vehicle 40 is detected, the controller 150 continues monitoring for vehicle entry (NO in ST2).
  • The controller 150 extracts, through the network interface 140, the image data item of a frame photographed at a predetermined time from among the plurality of image data items transmitted from the camera 20.
  • The extracted image data item is referred to as the image data to be processed.
  • The predetermined time is determined in consideration of the positional relation (distance) between the position where the pole sensor 10 is installed and the angle of view (photographing range) of the camera 20, together with the assumed passing speed of the vehicle, such that image data including the specific region of the vehicle 40 can be extracted.
  • The controller 150 performs preprocessing on the image data to be processed.
  • The preprocessing includes noise reduction, performed to improve the S/N ratio; the image is sharpened by the noise reduction.
  • The preprocessing also includes filtering to improve image contrast, and image distortion correction.
  • The controller 150 extracts a plurality of line-segment components e1 to e6 indicating the boundary between the windshield region and the vehicle body of the vehicle 40 from the preprocessed image data to be processed, using a method such as the Hough transform (ST3). Specifically, the controller 150 extracts the boundary line-segment components included in the image for each of the left and right images for which entry of the vehicle 40 has been detected.
  • In Step ST3, for example, line-segment components in eight directions based on the horizontal and vertical directions of the image are extracted when the vehicle 40 is photographed from above; a number of line-segment components, including those of the windshield boundary part, are thereby obtained (see the sketch below). Since the part of a windshield around the wipers is curved in many cases, it is difficult to extract the boundary with one line-segment component. Generally, therefore, the shape of the windshield can be approximated by extracting the boundary as a polygon or as lines obtained by combining a plurality of line-segment components.
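  • A minimal sketch of this extraction step follows, assuming OpenCV's probabilistic Hough transform; the Canny and Hough parameter values are illustrative assumptions.

```python
# Hedged sketch of ST3: extract line segments with a probabilistic Hough
# transform, then bin them into eight orientation directions.
import cv2
import numpy as np

def extract_segments(gray):
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=40, minLineLength=25, maxLineGap=5)
    return [] if segs is None else [tuple(s[0]) for s in segs]

def bin_by_direction(segments, n_bins=8):
    """Group segments into n_bins orientation bins over [0, pi)."""
    bins = {k: [] for k in range(n_bins)}
    for x1, y1, x2, y2 in segments:
        angle = np.arctan2(y2 - y1, x2 - x1) % np.pi   # orientation, sign-free
        bins[int(angle / (np.pi / n_bins)) % n_bins].append((x1, y1, x2, y2))
    return bins
```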
  • For example, when a circle is approximated with line-segment components, it can be approximated with an inscribed equilateral octagon.
  • The error in this case corresponds to the difference in area between the circle and the inscribed equilateral octagon, and is permissible as an error in practical design, as the following calculation shows.
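  • The error can be quantified directly. For a circle of radius r and an inscribed regular octagon:

```latex
% Area of a regular n-gon inscribed in a circle of radius r: (n/2) r^2 sin(2*pi/n).
\[
A_{\text{octagon}} = \frac{8}{2}\, r^{2}\sin\frac{2\pi}{8} = 2\sqrt{2}\, r^{2} \approx 2.828\, r^{2},
\qquad
A_{\text{circle}} = \pi r^{2} \approx 3.142\, r^{2},
\]
\[
\frac{A_{\text{circle}} - A_{\text{octagon}}}{A_{\text{circle}}} = 1 - \frac{2\sqrt{2}}{\pi} \approx 0.10 .
\]
```

  • The inscribed octagon thus captures about 90% of the circle's area, consistent with treating the residual as permissible in practical design.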
  • The controller 150 also performs the Hough transform on the image data to be processed and on the image data items of the preceding and following frames, and extracts time-successive line-segment components from the images.
  • The controller 150 may then predict the geometric change accompanying movement of the vehicle 40 from these line-segment components, and obtain the line-segment components at the predetermined time (the photographing time of the image data to be processed), for example as sketched below. Using image data items of a plurality of frames in this way can improve the extraction accuracy.
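  • A minimal sketch of such a prediction, under a constant-velocity assumption between frames (the function name and interface are illustrative):

```python
# Hedged sketch: predict a tracked segment's endpoints at the target
# photographing time by linear interpolation between two observed frames.
import numpy as np

def predict_segment(seg_prev, t_prev, seg_next, t_next, t_target):
    """Each segment is a 4-vector (x1, y1, x2, y2) of endpoint coordinates."""
    p = np.asarray(seg_prev, dtype=float)
    q = np.asarray(seg_next, dtype=float)
    alpha = (t_target - t_prev) / (t_next - t_prev)
    return tuple(p + alpha * (q - p))
```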
  • Although the processing of extracting the line-segment components using a method such as the Hough transform has been explained, when color information is included in the image data to be processed, the image may instead be divided into regions of similar colors based on that information, and the boundaries between the regions may be extracted as line-segment components. By such a method as well, the boundary between the windshield and the other regions of the vehicle can be extracted as line-segment components.
  • The controller 150 then executes approximation with a polygon that forms a closed loop using some of the line-segment components extracted in Step ST3 for each of the left and right images, and generates candidates for the windshield region.
  • Elements extracted from the image by polygon approximation include the windshield region, a shadow region cast on the windshield, a reflection region including reflected sunlight, the pillars forming part of the vehicle, and the windows of the driver's seat and the front passenger seat.
  • A closed loop with a complicated shape is formed with a plurality of line-segment components.
  • The windshield region can thereby be approximated to a shape including curves, according to the shape of the windshield. Even if the windshield has a simple shape, when it is photographed from the side of the vehicle, a depth occurs between the left and right sides of the windshield and asymmetry is generated.
  • The controller 150 evaluates the plurality of closed loops generated by polygon approximation using an evaluation function, and narrows the candidates down to the one that most accurately approximates the windshield region.
  • When a part of the loop is missing, the evaluation may be performed after approximation that supplements the missing part with a line-segment component.
  • For example, one of the pillars may be concealed behind the windshield.
  • In that case, the windshield end part on the side of the concealed pillar is supplemented with a line-segment component to complete the closed loop.
  • Alternatively, various pillar patterns are stored in advance in the storage module 130, and a plurality of windshield region candidates are correlated with each pattern and stored in the storage module 130. A closed loop similar to a pillar may then be detected from the polygon approximation, the pillar detected by pattern matching between the detected closed loop and the stored information, and the windshield region candidates correlated with the detected pillar obtained.
  • The controller 150 performs a plurality of different evaluations on each of the windshield region candidates obtained as described above, and determines a total score for each candidate.
  • The evaluations include a first method of scoring the candidates in consideration of the position and size of the windshield in the image.
  • They include a second method of scoring the candidates based on the brightness distributions around the line-segment components forming the windshield region.
  • They also include a third method of scoring the candidates based on the degree of matching with a windshield template stored in advance in the storage module 130. When a polygon appears in the windshield region due to reflection or shadows, that polygon receives low scores from the first and third methods.
  • The controller 150 selects the optimum windshield region based on the score totals, as sketched below.
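  • A minimal sketch of this selection step; the scorer functions stand in for the position/size, brightness-distribution, and template-matching evaluations above, and are assumptions rather than the patent's concrete scoring rules.

```python
# Hedged sketch: total the scores from the three evaluation methods for each
# windshield-region candidate and keep the highest-scoring one.
def select_windshield(candidates, scorers):
    """candidates: closed-loop polygons; scorers: the three evaluation
    functions, each mapping a candidate to a numeric score."""
    return max(candidates, key=lambda c: sum(score(c) for score in scorers))
```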
  • The controller 150 then checks the positional relation between the selected windshield region and the front mask part (lights, grille, number plate) included in the image data to be processed, and examines whether there is any contradiction (for example, a large lateral shift between the windshield region and the front mask part). When there is a contradiction, the same examination is performed on the windshield region with the second highest total score.
  • The position of the vehicle front mask part is determined by pattern matching of the elements forming it.
  • When the controller 150 has extracted a plurality of line-segment components indicating the boundary between the windshield region and the vehicle body for each of the left and right images, it performs processing of aligning at least one line-segment component between the left and right images (ST4). Specifically, the controller 150 aligns, between the left and right images, at least one of the line-segment components forming the closed loop in each image of the image data to be processed.
  • The controller 150 measures the position of the closed loop as the position of the vehicle 40, based on coordinate information between the aligned line-segment components and photographing position information indicating the position from which the left and right images were photographed (ST5).
  • The controller 150 converts the camera coordinates (coordinates on a virtual image plane), obtained from the difference between the coordinate information items, into the global coordinate system (a position in real space on the ETC lane), and thereby calculates the position of the line-segment components of the windshield region.
  • The calculated position corresponds to the position of the vehicle to be measured in Step ST5.
  • Reference symbols C1 and C2 illustrated in FIG. 5(A) indicate the photographing positions of the left and right cameras (the intersection points between the X axis of the photographing-plane coordinate system and the optical axes of the cameras) in a camera coordinate system whose origin O is the intermediate point between the left and right cameras of the camera 20.
  • The left and right cameras are two cameras of the same specifications, placed parallel to each other at equivalent positions.
  • The optical axes of the two cameras are made parallel so that their imaging planes agree with each other, and so that the horizontal axes of the imaging planes agree with each other.
  • Reference symbol b indicates the interval (baseline) between the left and right cameras, and reference symbol f indicates the focal distance.
  • Reference symbols [x_l, y_l] and [x_r, y_r] are the coordinates of mutually corresponding points (such as end points) of the aligned line segment, in the imaging-plane coordinate systems of the left and right images ("Image Plane" in the drawing), whose origins are the intersection points with the optical axes (broken-line arrows in the drawing) of the left and right cameras.
  • [X, Y, Z] are the coordinates of the corresponding point in the camera coordinate system, and the two sets of coordinates have the following relation.
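  • The relation itself does not survive in this text. For the parallel stereo model defined above (baseline b, focal distance f, origin O midway between the cameras, and, as an assumed sign convention, the left camera on the negative X side), it takes the standard triangulation form:

```latex
% Standard parallel-stereo triangulation, with disparity d = x_l - x_r:
\[
X = \frac{b\,(x_l + x_r)}{2\,(x_l - x_r)}, \qquad
Y = \frac{b\,(y_l + y_r)}{2\,(x_l - x_r)}, \qquad
Z = \frac{b\,f}{x_l - x_r}.
\]
```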
  • Reference symbols R and t illustrated in FIG. 5(B) are the external parameters that associate the camera coordinates with the global coordinates: R is a rotation matrix and t is a parallel-movement (translation) vector.
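  • Combining the triangulation relation above with the external parameters gives the conversion into the global coordinate system. A minimal sketch, assuming calibrated values of b, f, R, and t and the convention p_w = R p_c + t:

```python
# Hedged sketch: camera-coordinate point from matched image-plane points
# (parallel stereo), then conversion to global coordinates via R and t.
import numpy as np

def triangulate(xl, yl, xr, yr, b, f):
    d = xl - xr                      # disparity
    Z = b * f / d
    X = b * (xl + xr) / (2.0 * d)
    Y = b * (yl + yr) / (2.0 * d)
    return np.array([X, Y, Z])

def to_global(p_cam, R, t):
    """Global coordinates; the p_w = R @ p_cam + t convention is assumed."""
    return R @ p_cam + t
```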
  • The controller 150 notifies the ETC system 30, through the network interface 140, of the vehicle position (coordinates) measured in Step ST5.
  • The ETC system 30 transmits and receives wireless signals to and from the antenna of the ETC in-vehicle unit mounted on the windshield, based on the position (coordinates) of the vehicle 40 and the assumed passing speed of the vehicle 40.
  • As described above, the controller 150 aligns, between the left and right images, at least one of the line-segment components forming the closed loop, using the line-segment components indicating the boundary between the windshield region and the vehicle body of the vehicle 40.
  • The alignment criterion is thereby clarified, because the feature points to be correlated between the images from the camera 20 are determined in advance, and the accuracy of detecting the vehicle 40 can be improved.
  • In other words, line-segment components of the boundary part of the vehicle windshield region are extracted from the left and right camera images, and the representative line-segment components of the two images are correlated with each other, whereby three-dimensional measurement of the vehicle 40 can be achieved with high accuracy.
  • Since the ETC in-vehicle unit is placed in the vicinity of the windshield, the positional relation with, and the distance from, the in-vehicle unit to communicate with can be accurately extracted.
  • Information corresponding to the width and the height of the vehicle can also be deduced from the positional information of the windshield.
  • Furthermore, the controller 150 extracts a plurality of line-segment components forming the vehicle image from the image data obtained by photographing the vehicle, and executes approximation with a polygon to generate closed loops from those components. The controller 150 thereby generates a plurality of candidates for the region of a specific part of the vehicle 40 (for example, the windshield), performs a plurality of different evaluations on the candidates, and specifies the most probable region.
  • As a result, the camera 20 can be installed with a high degree of freedom, and high detection accuracy can be obtained.
  • FIG. 6 and FIG. 7 are diagrams for explaining a second embodiment.
  • The vehicle detection apparatus 100 and the vehicle detection system of the present embodiment are the same as those of the first embodiment explained with reference to FIG. 1 and FIG. 2, and their explanation is therefore omitted.
  • In the present embodiment, the controller 150 includes a re-executing function (f2-1) within the line-segment component extracting function (f2) that extracts the line-segment components of the windshield region.
  • The re-executing function (f2-1) classifies the extracted line-segment components according to their directions, and re-executes the processing of extracting line-segment components based on the number and length of the components in each direction.
  • The re-executing function (f2-1) changes the parameters used in extracting the line-segment components so as to increase their number and reduce their length, and re-executes the extraction processing with the changed parameters.
  • The extraction rate for the line-segment components can be determined from the extraction result (the number and length of line-segment components in each direction).
  • Specifically, the extraction rate can be calculated as the proportion that the actual extraction result forms of the expected extraction result, with respect to the number and length of line-segment components in each direction.
  • When the size of the windshield region is greater than a reference size, the re-executing function (f2-1) reduces the number of line-segment components and increases their length.
  • Conversely, when the size of the windshield region is equal to or less than the reference size, the re-executing function (f2-1) may change the extraction parameters to increase the number of line-segment components and reduce their length. A sketch of this retry loop follows.
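  • A minimal sketch of the re-execution loop, assuming the Hough-based extractor above; the initial parameters, step sizes, and acceptance threshold are illustrative assumptions.

```python
# Hedged sketch of (f2-1): re-run line-segment extraction with adjusted Hough
# parameters until the per-direction extraction rate is acceptable.
import cv2
import numpy as np

def extract_hough(gray, min_len, thresh):
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=thresh,
                           minLineLength=min_len, maxLineGap=5)
    return [] if segs is None else [tuple(s[0]) for s in segs]

def extraction_rate(segments, expected_per_direction, n_bins=8):
    """Actual vs. expected segment count per direction, averaged over bins."""
    counts = [0] * n_bins
    for x1, y1, x2, y2 in segments:
        a = np.arctan2(y2 - y1, x2 - x1) % np.pi
        counts[int(a / (np.pi / n_bins)) % n_bins] += 1
    ratios = [min(1.0, c / e) for c, e in zip(counts, expected_per_direction) if e > 0]
    return sum(ratios) / len(ratios) if ratios else 1.0

def extract_with_retry(gray, expected_per_direction, max_retries=3):
    min_len, thresh = 25, 40                     # initial parameters (assumed)
    segs = extract_hough(gray, min_len, thresh)
    for _ in range(max_retries):
        if extraction_rate(segs, expected_per_direction) >= 0.8:  # assumed rate
            break
        # Re-execute with parameters favouring more, shorter segments.
        min_len = max(5, min_len // 2)
        thresh = max(10, thresh - 10)
        segs = extract_hough(gray, min_len, thresh)
    return segs
```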
  • Processing that changes the resolution applied to the photographed object may also be performed according to the resolution at which the windshield is photographed, such that rough approximation is executed for the windshield region with a high-resolution camera and fine approximation is executed for the windshield region with a low-resolution camera.
  • Large-sized cars and small cars can be distinguished by extracting the headlight regions of the vehicle 40 from the preprocessed image data by labeling processing or the like, and estimating the size of the vehicle from the left and right headlight positions and the interval between them indicated by the extracted regions.
  • The camera 20 transmits the photographed image data to the vehicle detection apparatus 100 (ST11).
  • The controller 150 of the vehicle detection apparatus 100 receives the image data photographed by the camera 20.
  • The controller 150 executes the vehicle detecting function and detects entry of the vehicle 40 (YES in ST12).
  • The controller 150 then extracts a plurality of line-segment components indicating the boundary between the windshield region and the vehicle body of the vehicle 40 (ST13).
  • When the controller 150 detects no entry of the vehicle 40, it continues monitoring for vehicle entry (NO in ST12).
  • The controller 150 classifies the extracted line-segment components according to their directions, and determines whether the extraction result is good or not based on the number and length of the components in each direction (ST15). The determination is made according to whether the extraction rate of the line-segment components exceeds a predetermined extraction rate.
  • When the result is not good, the controller 150 changes the parameters used in extracting line-segment components so as to increase their number and reduce their length (NO in ST15, then ST14).
  • The controller 150 then re-executes the processing of extracting the line-segment components with the changed parameters (ST13).
  • When the result is good, the controller 150 executes approximation with a polygon forming a closed loop and generates candidates for the windshield region, using some of the line-segment components extracted for each of the left and right images, as described above. The controller 150 then executes processing of aligning at least one line-segment component between the left and right images, in the same manner as above (ST16), and thereafter measures and estimates the position of the vehicle 40 as described above (ST17).
  • As described above, the controller 150 of the vehicle detection apparatus 100 includes the re-executing function (f2-1) of changing the parameters used in extracting line-segment components, based on the number and length of the extracted components in each direction, and re-executing the extraction processing.
  • The present embodiment can thus achieve assured extraction of the line segments of the windshield region, including with the configuration in which the extraction parameters are changed based on the size of the windshield region and the extraction processing is re-executed.
  • The present embodiment may also be configured to use the rectangle detection information of the number plate detected upon vehicle entry, and to perform correlation processing on the information of the four sides of the rectangle, when the main line-segment components cannot be extracted even after detection of the windshield region has been attempted a plurality of times with changed processing parameters.
  • For this purpose, the re-executing function (f2-1) of the controller 150 may include a first line-segment component extracting function (f2-1-1) of extracting a plurality of line-segment components forming the windshield region of the vehicle, and a second line-segment component extracting function (f2-1-2) of extracting a plurality of line-segment components forming the rectangular region of the number plate frame of the vehicle.
  • The polygon approximation function (f3) of the controller 150 then executes polygon approximation processing using the line-segment components extracted by the second function (f2-1-2) when no closed loop can be formed from the line-segment components extracted by the first function (f2-1-1).
  • The second line-segment component extracting function (f2-1-2) may extract a headlight region of the vehicle by labeling from the preprocessed image of the image data to be processed, and extract a rectangular shape similar to a number plate from a range estimated from the headlight positions, as sketched below.
  • Generally, a number plate is located under the center between the left and right headlights, below the line connecting them.
  • In this case, the position of the vehicle can be measured by extracting the line-segment components of the number plate frame.
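  • A minimal sketch of estimating the plate search range from the two headlight regions, following the heuristic above; the margin factor and region representation are illustrative assumptions.

```python
# Hedged sketch: derive a search strip for the number plate from the labeled
# headlight regions, below the line connecting the headlights.
def plate_search_region(left_light, right_light, margin=1.5):
    """Each headlight is (cx, cy, w, h): centroid and bounding-box size."""
    lx, ly, lw, lh = left_light
    rx, ry, rw, rh = right_light
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0   # midpoint of the headlight line
    half_w = margin * max(lw, rw)
    # (x_min, y_min, x_max, y_max) of the strip under the midpoint.
    return (cx - half_w, cy, cx + half_w, cy + margin * max(lh, rh))
```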
  • FIG. 8 is a diagram for explaining a third embodiment.
  • The vehicle detection apparatus 100 and the vehicle detection system of the present embodiment are the same as those of the first embodiment explained with reference to FIG. 1 and FIG. 2, and their explanation is omitted.
  • The vehicle detection apparatus 100 of the present embodiment is configured to calculate a vehicle width 410 and a vehicle height 420 of the vehicle 40.
  • For this purpose, the measuring function (f4) of the controller 150, which measures the vehicle position, includes at least one of a vehicle height calculating function (f4-1) of calculating the vehicle height 420 and a vehicle width calculating function (f4-2) of calculating the vehicle width 410.
  • The vehicle height calculating function (f4-1) calculates the height of the upper side of the windshield region 400 of the vehicle 40 as the vehicle height 420, based on the coordinate information between the line-segment components used by the measuring function (f4).
  • The vehicle width calculating function (f4-2) calculates the length of the bottom side of the windshield region 400 of the vehicle 40 as the vehicle width 410, based on the same coordinate information.
  • The vehicle height calculating function (f4-1) uses the information (coordinate information between line-segment components) of the line-segment components associated with each other between the left and right cameras, among the line-segment components forming the windshield region 400.
  • Specifically, the controller 150 calculates coordinate information of the end points e1 and e2 of the line-segment component corresponding to the upper side of the windshield.
  • The controller 150 performs matching for the line-segment component corresponding to the upper side of the windshield based on the correlation between these coordinates, and thereby calculates the height 420 of the upper side of the windshield region 400.
  • The vehicle height calculating function (f4-1) may instead calculate the height of the upper side of the windshield by changing the configuration of the camera parameters and directly determining the correlation of the difference (e1 − e2) between the coordinate information items of the end points e1 and e2 of the upper side.
  • Similarly, when the controller 150 performs matching for the vertical lines on the left and right side-surface parts of the vehicle between the left and right cameras by the vehicle width calculating function (f4-2), it calculates coordinate information of the end points e6 and e3 of the line-segment components corresponding to the bottom side of the windshield.
  • The controller 150 performs matching for the line-segment components corresponding to the bottom side of the windshield based on the correlation between these coordinates, and thereby calculates the distance between the end points of the bottom side as the vehicle width 410.
  • The vehicle width calculating function (f4-2) may instead calculate the distance "(e6 − e5) + (e5 − e4) + (e4 − e3)" between the end points of the bottom side of the windshield, by changing the configuration of the camera parameters and directly determining the differences (e6 − e5), (e5 − e4), and (e4 − e3) between the coordinate information items of the bottom side, as sketched below.
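  • A minimal sketch of these two measurements, assuming the matched end points e1 to e6 have already been triangulated into global coordinates (for example with the triangulate/to_global helpers above) and that the third global axis is vertical, which is an assumption about the coordinate convention:

```python
# Hedged sketch: vehicle height as the height of the windshield's upper side,
# vehicle width as the polyline length along its bottom side.
import numpy as np

def vehicle_height(e1_w, e2_w):
    """Height 420: vertical coordinate of the upper side (averaged end points)."""
    return (e1_w[2] + e2_w[2]) / 2.0

def vehicle_width(bottom_points):
    """Width 410: summed segment lengths, e.g. over [e6, e5, e4, e3]."""
    pts = [np.asarray(p, dtype=float) for p in bottom_points]
    return sum(float(np.linalg.norm(b - a)) for a, b in zip(pts, pts[1:]))
```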
  • The present embodiment adds processing of calculating at least one of the vehicle height 420 and the vehicle width 410 to the processing of Step ST5 illustrated in FIG. 4.
  • That is, the controller 150 executes Steps ST1 to ST5 illustrated in FIG. 4.
  • The controller 150 measures the position of the closed loop as the vehicle position, based on the coordinate information between the line-segment components aligned in Step ST4 and photographing position information (camera placement information) indicating the position from which the left and right images were photographed (ST5).
  • In addition, the controller 150 calculates the height of the upper side of the windshield region 400 of the vehicle 40 as the vehicle height 420 by the vehicle height calculating function (f4-1), as illustrated in FIG. 8, based on the coordinate information of the line-segment components aligned in Step ST4.
  • The controller 150 likewise calculates the length of the bottom side of the windshield region 400 of the vehicle 40 as the vehicle width 410 by the vehicle width calculating function (f4-2), as illustrated in FIG. 8, based on the same coordinate information.
  • The controller 150 may execute only one of the vehicle height calculating function (f4-1) and the vehicle width calculating function (f4-2).
  • The controller 150 notifies the ETC system 30, through the network interface 140, of the vehicle position (coordinates) measured in Step ST5, together with the vehicle height 420 and/or the vehicle width 410.
  • The ETC system 30 transmits and receives wireless signals to and from the antenna of the ETC in-vehicle unit mounted on the windshield, based on the position (coordinates) of the vehicle 40 and the assumed passing speed of the vehicle 40.
  • The ETC system 30 can thus obtain the additional information of the vehicle height 420 and/or the vehicle width 410, and can calculate, for example, statistics on the size of the vehicles passing through the ETC lane.
  • In this manner, the height of the upper side of the windshield region 400 of the vehicle 40 can be calculated as the vehicle height 420, and the length of its bottom side as the vehicle width 410, based on the coordinate information.
  • As described in each of the above embodiments, at least one line-segment component, among the plurality of line-segment components indicating the boundary between the windshield region and the vehicle body, is aligned between the left and right images.
  • This structure can determine feature points which are correlated with each other between a plurality of images, clarify the alignment standard, and improve the vehicle detection accuracy.
  • Each of the methods described in the above embodiments can be stored and distributed as a computer-executable program in storage media such as magnetic disks (such as floppy (registered trademark) disks and hard disks), optical disks (such as CD-ROMs and DVDs), magneto-optical disks (MO), and semiconductor memories.
  • The storage media may adopt any storage form, as long as the medium can store a program and is readable by a computer.
  • An OS (operating system) running on the computer, or MW (middleware) such as database management software and network software, may execute part of the above processing to achieve the above embodiments, based on the instructions of the program installed in the computer from the storage medium.
  • The storage medium in each embodiment is not limited to a medium independent of the computer, but includes a storage medium that stores, or temporarily stores, a program downloaded through a LAN or the Internet.
  • The storage medium is not limited to a single medium; the present invention also covers the case where the processing of each of the above embodiments is performed from a plurality of media, and the medium structure may be any structure.
  • The computer in each embodiment executes the processing of each of the above embodiments based on the program stored in the storage medium, and may be a single device such as a personal computer, or a system formed by connecting a plurality of devices through a network.
  • The computer in each embodiment is not limited to a personal computer, but also includes a processing unit or a microcomputer included in an information processing apparatus; it is a general term for apparatuses and devices capable of achieving the functions of the present invention by a program.

Abstract

A vehicle detection apparatus according to an embodiment comprises a controller detecting a vehicle from an image obtained by photographing the vehicle. The controller extracts a plurality of line-segment components indicating a boundary between a specific region of the vehicle and a vehicle body and included in the image. The controller measures a position of the vehicle based on coordinate information between the extracted line-segment components and photographing position information of the image.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a Continuation application of PCT Application No. PCT/JP2012/069171, filed Jul. 27, 2012 and based upon and claiming the benefit of priority from prior Japanese Patent Application No. 2011-170307, filed Aug. 3, 2011, the entire contents of all of which are incorporated herein by reference.
  • FIELD
  • Embodiments described herein relate generally to a vehicle detection apparatus and a vehicle detection method.
  • BACKGROUND
  • For example, in tollgates of expressways, passage of a vehicle is generally detected by a pole sensor (a vehicle detection apparatus) of a transmission type or a reflection type using an infrared laser.
  • However, there are various types of vehicles having different shapes, and the length from the distal end portion of the vehicle detected by the pole sensor to a specific region (for example, a part around a windshield on which an on-board device is installed) differs for each type of vehicle. Thus, it is difficult to detect a specific region by the pole sensor.
  • In addition, pole sensors require excavation for installation, and require separate equipment to adjust the positions of the left and right sensors. Pole sensors therefore have the disadvantage of high construction and adjustment costs, and may desirably be replaced with vehicle detecting apparatuses of other types.
  • On the other hand, vehicle detection apparatuses using cameras can relatively easily satisfy the condition that the vehicle falls within the angle of view, by making use of existing pole sensors. The price of cameras has fallen in recent years, so cameras are available at relatively low cost. It is thus preferable to achieve a vehicle detection apparatus using a camera.
  • A vehicle detection apparatus of a stereoscopic type using a plurality of cameras will now be discussed. Generally, the feature points of a vehicle differ according to the shape of the vehicle and how the vehicle looks. Thus, in a stereoscopic system, the feature points that are correlated with one another between a plurality of images obtained by a plurality of cameras change each time, and the criteria for alignment are indistinct.
  • As described above, vehicle detection apparatuses adopting a stereoscopic system have a disadvantage that feature points that are correlated with one another between a plurality of images change each time, and the criteria for positioning are indistinct.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic diagram illustrating a configuration of a vehicle detection system, to which a vehicle detection apparatus according to a first embodiment is applied.
  • FIG. 2 is a block diagram illustrating a configuration of the vehicle detection apparatus according to the first embodiment.
  • FIG. 3 is a schematic diagram illustrating a windshield region and line segment components thereof in the first embodiment.
  • FIG. 4 is a flowchart for explaining operation according to the first embodiment.
  • FIG. 5 (A) is a diagram illustrating a stereoscopic camera model in the first embodiment, and FIG. 5 (B) is a diagram illustrating a relation between a camera coordinate system and a global coordinate system.
  • FIG. 6 (A) is a schematic diagram illustrating rough approximation of a windshield in a second embodiment, and FIG. 6 (B) is a schematic diagram illustrating fine approximation thereof.
  • FIG. 7 is a flowchart for explaining operation according to the second embodiment.
  • FIG. 8 is a diagram for explaining operation of measuring a vehicle width and a vehicle height, performed by a vehicle detection apparatus according to a third embodiment.
  • DETAILED DESCRIPTION
  • One object of the embodiments is to provide a vehicle detection apparatus that can clarify the criteria for alignment, by determining the feature points to be correlated with one another between a plurality of images, and thereby improve vehicle detection accuracy. According to one embodiment, a vehicle detection apparatus comprises a detecting module detecting a vehicle from an image obtained by photographing the vehicle, an extracting module, and a measuring module. The extracting module extracts a plurality of line-segment components indicating a boundary between a specific region of the vehicle and a vehicle body and included in the image. The measuring module measures a position of the vehicle based on coordinate information between the extracted line-segment components and photographing position information of the image.
  • Embodiments will now be explained hereinafter, with reference to drawings.
  • A matter common to the embodiments is a configuration for extracting a windshield region from left and right camera images, and in particular the windshield region (a polygonal approximate region) of the vehicle. Another matter common to the embodiments is a configuration that enables a stereoscopic view of the vehicle, and achieving highly-accurate vehicle detection, by correlating the left and right noted characteristics with each other, using line-segment components forming the windshield region as characteristic amounts.
  • First Embodiment
  • FIG. 1 is a schematic diagram illustrating a configuration of a vehicle detection system, to which a vehicle detection apparatus 100 according to the present embodiment is applied. As illustrated in FIG. 1, the vehicle detection system comprises a pole sensor 10, an electronic camera 20, an ETC (Electronic Toll Collection) system 30, and the vehicle detection apparatus 100.
  • The pole sensor 10 detects a vehicle 40 entering an ETC lane by using an optical sensor or a tread sensor, and notifies the vehicle detection apparatus 100 of a result of the detection.
  • The electronic camera (hereinafter simply referred to as “camera”) 20 is formed of left and right digital cameras to photograph the vehicle 40 in left and right directions with respect to a running direction of the vehicle 40. Specifically, the digital cameras in the camera 20 are installed in a line in the lateral direction and in a position above and in front of the running direction of the vehicle 40. Each of the digital cameras takes a moving image of the vehicle 40 running on the ETC lane and passing through the pole sensor 10, at a preset frame rate. Thus, the camera 20 takes and outputs a plurality of images for the vehicle 40 running on the ETC lane.
  • In the following explanation, a windshield is referred to as an example of a specific region of the vehicle 40. Thus, the camera 20 is installed in a position where it can shoot a panoramic view at least including the windshield of the vehicle 40. In addition, the camera 20 is installed to include the windshield and side surfaces of the vehicle 40 in a visual-field area 200 as a photographing position. Each of the digital cameras of the camera 20 may be arranged side by side along a direction almost parallel with the longitudinal direction of upper and lower sides of the windshield of the vehicle 40. For example, there are cases where the vehicle 40 enters or leaves the ETC lane inclined with respect to the optical axis of the camera 20. In such a case, line-segment components indicating the boundary between the windshield region and the vehicle body of the vehicle 40 are uniformly inclined, and thus the left and right digital cameras may be installed inclined in a direction orthogonal to a representative line-segment component thereof.
  • Instead of the left and right digital cameras, the camera 20 may be configured to optically acquire left and right images of the vehicle 40 with lenses. For example, the camera 20 may have a structure capable of simultaneously splitting images from two directions with special lenses: specifically, a wedge-prism polarization system in which, for example, two lenses are arranged at the light inlet and a prism that bends the optical path is disposed inside.
  • The camera 20 transmits image data including a time code that indicates the photographing time to the vehicle detection apparatus 100. The camera 20, the vehicle detection apparatus 100, and the other device (30) have synchronized time information. The image data output from the camera 20 need not include a time code indicating the photographing time when the camera 20, the vehicle detection apparatus 100, and the other device operate in synchronization, so that the vehicle detection apparatus 100 and the other device can determine the photographing time of the camera 20.
  • The ETC system 30 is a system configured to automatically collect a toll for the vehicle 40 running on a toll road such as an expressway. The ETC system 30 obtains information for identifying the passing vehicle 40 by wireless communication with an ETC in-vehicle unit mounted on the vehicle 40. Generally, an ETC in-vehicle unit has an antenna for performing wireless communications, and the antenna is installed in a position that can be visually recognized through the windshield of the vehicle 40. Thus, when the position of the windshield is accurately specified, the ETC system 30 can perform highly accurate wireless communication with the ETC in-vehicle unit.
  • Next, as illustrated in FIG. 2, the vehicle detection apparatus 100 includes a display module 110, a user interface 120, a storage module 130, a network interface 140, and a controller 150.
  • The display module 110 is a display device using an LCD (Liquid Crystal Display) or the like, and displays various information items such as an operation state of the vehicle detection apparatus 100. The user interface 120 is an interface that receives user instructions via input devices such as a keyboard, a mouse, and a touch panel. The storage module 130 stores control programs and control data for the controller 150, and is formed of one or a plurality of storage devices such as HDDs (hard disk drives), RAMs (random access memory), ROMs (read only memory), and flash memories. The network interface 140 is connected to a network such as a LAN, and communicates with each of the pole sensor 10, the camera 20, and the ETC system 30 through the network.
  • The controller 150 includes a microprocessor, and operates based on the control programs and control data stored in the storage module 130. The controller 150 serves as a supervising controller for the vehicle detection apparatus 100. The controller 150 detects a specific region of the vehicle 40 that is set with the control data in advance, based on the image obtained by the camera 20, and executes processing of specifying the position of the vehicle 40 in real space. The controller 150 may execute processing of predicting the passage time (the time at which the vehicle 40 passes through the communication area of the ETC system 30) in real space, together with specifying the position of the vehicle 40.
  • The controller 150 has the following functions (f1) to (f4), to execute the processing of specifying the position of the vehicle 40.
  • The function (f1) is a vehicle detecting function of detecting entry of the vehicle 40 from the left and right images obtained from the camera 20, which is formed of left and right digital cameras. Background subtraction and pattern matching are used for the vehicle detecting function. However, this image-based vehicle detecting function can be omitted, since in many cases the object detected by the pole sensor 10 is the vehicle 40.
  • The function (f2) is a line-segment component extracting function of extracting, for each of the left and right images obtained by photographing the vehicle 40, a plurality of line-segment components that are included in the image and indicate the boundary between the windshield region and the vehicle body of the vehicle 40. Specifically, the line-segment component extracting function extracts, for example, a plurality of line-segment components e1 to e6 indicating the boundary between the windshield region and the vehicle body as illustrated in FIG. 3 (B), from a photographed image such as the one illustrated in FIG. 3 (A).
  • The function (f3) is a polygon approximation function of performing approximation with a polygon forming a closed loop using some of the line-segment components extracted from the image, for each of the left and right images. As a supplementary explanation, the closed loop corresponding to the windshield region is formed of a combination of the line-segment components e1 to e6 extracted by the line-segment component extracting function. The line-segment component extracting function and the polygon approximation function can be achieved by, for example, the method disclosed in a Japanese Patent Application Publication (Jpn. Pat. Appln. No. 2010-208539).
  • The function (f4) is a function of aligning at least one line-segment component, in the line-segment components e1 to e6 forming the closed loop for each of the left and right images, with each other between the left and right images. In addition, the function (f4) is a measuring function of measuring the position of the closed loop as the position of the vehicle, based on coordinate information between the aligned line-segment components and photographing position information (the position where the camera 20 is placed) indicating the photographing position where the left and right images are photographed.
  • The measuring function of the function (f4) may include processing of aligning all the line-segment components forming the closed loop. For example, when the windshield of the vehicle 40 has many curved faces and it is difficult to determine a representative line-segment component (at least one line-segment component), the processing matches the entire windshield regions of the left and right images with each other to correlate the left and right representative points.
  • Next, operation of a vehicle detection system, to which the vehicle detection apparatus 100 of the present embodiment is applied, will be explained hereinafter with reference to the flowchart of FIG. 4.
  • When the vehicle detection apparatus 100 is powered and started, the vehicle detection apparatus 100 repeatedly executes the steps (ST2 to ST5) illustrated in FIG. 4 until the power is turned off. These steps are executed by operation of the controller 150.
  • Prior to startup of the vehicle detection apparatus 100, the pole sensor 10 and the camera 20 are started. Thereby, the pole sensor 10 starts monitoring entry of the vehicle 40 into the ETC lane, and notifies the vehicle detection apparatus 100 of a detection result, until the power is turned off. The camera 20 starts photographing at a predetermined frame rate, and transmits the obtained image data to the vehicle detection apparatus 100 until the power is turned off (ST1). Thereby, the controller 150 receives image data obtained by the camera 20.
  • The controller 150 executes the vehicle detecting function of determining whether the object detected by the pole sensor 10 is a vehicle or not, in response to notification from the pole sensor 10 through the network interface 140. Specifically, the controller 150 detects entry of the vehicle 40, based on results of background subtraction processing and pattern matching processing performed for the left and right images obtained from the camera 20 (ST2).
  • The left and right digital cameras of the camera 20 are placed such that the windshield region can be photographed in at least a plurality of frames while the vehicle 40 runs through the angle of view. The controller 150 presets on the screen an observed region, for which it determines whether a vehicle has entered or not, and detects changes in the image in that region by background subtraction or the like. The controller 150 determines whether the detected object is a vehicle by setting a part including the observed region as a search region, and matching the search region against a vehicle front pattern (lights, front grille, and number plate) prepared in advance. Alternatively, the number plate, which is a distinctive feature of a vehicle, may be detected.
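  • A minimal sketch of this entry check follows, assuming OpenCV; the observed region, the thresholds, and the front-pattern template are illustrative assumptions, not values from the embodiment.

```python
# Hedged sketch of the vehicle detecting function (f1): background
# subtraction in a preset observed region, confirmed by pattern matching.
import cv2
import numpy as np

OBSERVED_REGION = (100, 200, 400, 150)  # assumed (x, y, w, h) of the observed region
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

def vehicle_entered(frame: np.ndarray, front_pattern: np.ndarray) -> bool:
    """front_pattern: assumed grayscale template of a vehicle front, smaller than the region."""
    x, y, w, h = OBSERVED_REGION
    roi = frame[y:y + h, x:x + w]
    # Background subtraction: detect a change in the observed region.
    changed = cv2.countNonZero(subtractor.apply(roi))
    if changed < 0.05 * w * h:                       # too little change: no entry
        return False
    # Pattern matching: confirm the change is a vehicle front
    # (lights, front grille, number plate) rather than, e.g., a shadow.
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    score = cv2.matchTemplate(gray, front_pattern, cv2.TM_CCOEFF_NORMED).max()
    return score > 0.6                               # assumed acceptance threshold
```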
  • When entry of the vehicle 40 is detected, the controller 150 goes to the step of extracting line segments of the windshield region as follows (YES of ST2, and ST3). When no entry of vehicle 40 is detected, the controller 150 continues monitoring of vehicle entry (NO of ST2).
  • The controller 150 extracts an image data item of a frame photographed at a predetermined time among a plurality of image data items transmitted from the camera 20, through the network interface 140. The extracted image data item is referred to as image data to be processed. The predetermined time is determined in consideration of the positional relation (distance) between the position where the pole sensor 10 is installed and the angle of view (photographing range) of the camera 20, and the assumed passing speed of the vehicle, such that image data including the specific region of the vehicle 40 can be extracted.
  • The controller 150 performs preprocessing on the image data to be processed. Specifically, the preprocessing includes noise reduction performed for the purpose of improving the S/N ratio, which also sharpens the image, filtering to improve image contrast, and distortion correction to correct the image geometry.
  • The controller 150 extracts a plurality of line-segment components e1 to e6 indicating the boundary between the windshield region and the vehicle body of the vehicle 40, from the image data to be processed that has been subjected to preprocessing, using a method such as Hough transform (ST3). Specifically, the controller 150 extracts a plurality of line-segment components e1 to e6 indicating the boundary included in the image, for each of the left and right images, for which entry of the vehicle 40 has been detected.
  • As a specific extraction algorithm of Step ST3, for example, line-segment components in eight directions based on the horizontal and vertical directions of the image are extracted when the vehicle 40 is photographed from above. A number of line-segment components, including those of the windshield boundary, are thereby extracted. Since the part of the windshield around the wipers is curved in many cases, it is difficult to extract the boundary with a single line-segment component. In general, however, the shape of the windshield can be approximated by extracting the boundary as a polygon or polyline combining a plurality of line-segment components. For example, a circle can be approximated by an inscribed equilateral octagon; the error in this case corresponds to the difference in area between the circle and the inscribed octagon (2√2·r² ≈ 2.83r² against πr² ≈ 3.14r², roughly 10%), and is permissible as an error in practical design.
  • The controller 150 performs Hough transform for the image data to be processed and image data items of previous and following frames, and extracts time-successive line-segment components from the images. The controller 150 may execute geometric change prediction with movement of the vehicle 40 from the line-segment components, and obtain line-segment components at the predetermined time (the photographing time of the image data to be processed). Using image data items of a plurality of frames like this can improve the extraction accuracy.
  • Although processing of extracting the line-segment components with a method such as Hough transform has been explained, when the image data to be processed includes color information, the image may instead be divided into regions of similar colors based on that information, and the boundaries between the regions may be extracted as line-segment components. A boundary between the windshield and the other regions of the vehicle can also be extracted as line-segment components by such a method.
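  • As a rough illustration of the Hough-based extraction of Step ST3, a minimal OpenCV sketch follows; the preprocessing chain and all Hough parameters are assumptions for illustration, not the embodiment's values.

```python
# Sketch of line-segment extraction (Step ST3) with a probabilistic Hough
# transform; parameter values are assumed, not taken from the embodiment.
import cv2
import numpy as np

def extract_boundary_segments(image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.fastNlMeansDenoising(gray)          # preprocessing: noise reduction
    edges = cv2.Canny(gray, 50, 150)               # edge map around the windshield boundary
    # Each returned row is one line-segment component (x1, y1, x2, y2).
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=30, maxLineGap=5)
    return segments.reshape(-1, 4) if segments is not None else np.empty((0, 4), int)
```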
  • The controller 150 executes approximation with a polygon forming a closed loop, using some of the line-segment components extracted in Step ST3 for each of the left and right images, and generates candidates for the windshield region. The elements extracted from the image by the polygon approximation are the windshield region, a shadow region cast on the windshield, a reflection region including reflected sunlight, the pillars that are part of the vehicle, and the windows of the driver's and front passenger's seats. In practice, a closed loop with a complicated shape is formed from a plurality of line-segment components.
  • In the simplest case, the windshield region can be approximated by a rectangle, but it can also be approximated by a shape including curves, according to the shape of the windshield. Even when the windshield has a simple shape, photographing it from the side of the vehicle introduces depth between the left and right sides of the windshield and generates asymmetry.
  • At this point in time, it is not yet known which line-segment components are part of the windshield region. The optimum solution exists among the combinations of line-segment components that form closed loops indicating the boundary between the windshield and the vehicle body under the polygon approximation. Thus, the controller 150 evaluates the plurality of closed loops generated by the polygon approximation using an evaluation function, and narrows the candidates down to the one that most accurately approximates the windshield region.
  • In practice, there may be parts with high curvature or boundaries with insufficient contrast, so some candidates may partly lack line-segment components. The evaluation may therefore be performed after approximation that supplements the missing part with a line-segment component. For example, depending on the photographing angle, one of the pillars may be concealed by the windshield; in such a case, the windshield end on the side of the concealed pillar is supplemented with a line-segment component to complete the closed loop.
  • In addition, various pillar patterns are stored in advance in the storage module 130, and a plurality of windshield region candidates are correlated with each pattern and stored in the storage module 130. Then, a closed loop similar to the pillar may be detected from polygon approximation, the pillar may be detected by pattern matching between the detected closed loop and the information stored in the storage module 130, and windshield region candidates correlated with the detected pillar may be obtained.
  • The controller 150 performs a plurality of different evaluations for each of the windshield region candidates obtained as described above, and determines a total of scores of the evaluations for each windshield region candidate. The evaluations include a first method of providing the candidates with scores in consideration of the position and size of the windshield in the image. The evaluations include a second method of providing the candidates with scores based on brightness distributions around the line-segment components forming the windshield region. The evaluations also include a third method of providing the candidates with scores based on the degree of matching with a windshield template stored in advance in the storage module 130. When a polygon appears in the windshield region due to reflection or shadows, the polygon is provided with low scores by the first method and the third method.
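  • A hedged sketch of how such a total score might be computed is given below; the concrete scoring formulas, the expected windshield position and relative area, and the equal weighting are assumptions standing in for the first to third methods described above.

```python
# Sketch of the three-way candidate evaluation; constants are illustrative.
import cv2
import numpy as np

def total_score(poly: np.ndarray, image: np.ndarray, template: np.ndarray) -> float:
    """poly: int32 vertex array of one closed-loop candidate;
    template: assumed grayscale windshield template from the storage module."""
    h, w = image.shape[:2]
    x, y, bw, bh = cv2.boundingRect(poly)
    # First method: position and size of the windshield in the image
    # (assumed: upper-middle of the frame, plausible relative area).
    s_pos = 1.0 - abs((y + bh / 2) / h - 0.35)
    s_size = 1.0 - abs((bw * bh) / (w * h) - 0.15)
    # Second method: brightness distribution around the candidate boundary,
    # approximated here by inside/outside brightness contrast.
    mask = np.zeros((h, w), np.uint8)
    cv2.fillPoly(mask, [poly], 255)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    s_contrast = abs(cv2.mean(gray, mask)[0]
                     - cv2.mean(gray, cv2.bitwise_not(mask))[0]) / 255.0
    # Third method: degree of matching with a stored windshield template.
    patch = cv2.resize(gray[y:y + bh, x:x + bw], template.shape[::-1])
    s_tmpl = float(cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED)[0, 0])
    return s_pos + s_size + s_contrast + s_tmpl
```

  • Under such a scoring, a polygon produced by reflection or shadow scores poorly on the first and third terms, consistent with the behavior described above.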
  • The controller 150 selects an optimum windshield region based on the totals of the scores as described above. The controller 150 checks the positional relation between the selected windshield region and a front mask part (lights, grille, number plate) included in the image data to be processed, and examines whether there are any contradictions (for example, whether there is a large shift in a lateral direction between the windshield region and the front mask part). When there is a contradiction, the same examination is performed for the windshield region having the second highest total score. The position of the vehicle front mask part is determined by pattern matching of elements forming the front mask part.
  • Next, when the controller 150 extracts a plurality of line-segment components indicating the boundary between the windshield region and the vehicle body for each of the left and right images, the controller 150 performs processing of aligning at least one line-segment component between the left and right images (ST4). Specifically, the controller 150 aligns at least one line-segment component among line-segment components forming a closed loop for each of images of the image data to be processed between the left and right images.
  • Thereafter, the controller 150 measures the position of the closed loop as the position of the vehicle 40, based on coordinate information between the aligned line-segment components and photographing information indicating the photographing position where the left and right images are photographed (ST5).
  • Specifically, as illustrated in FIG. 5 (A) and FIG. 5 (B), the controller 150 converts camera coordinates (coordinates on a virtual image plane) obtained by obtaining a difference between coordinate information items into a global coordinate system (position in real space on the ETC lane), and thereby calculates the position of the line-segment components of the windshield region. The calculated position corresponds to the position of the vehicle to be measured in Step ST5.
  • Reference symbols C1 and C2 illustrated in FIG. 5 (A) indicate the photographing positions of the left and right cameras (the intersection points between the X axis of the photographing plane coordinate system and the optical axes of the cameras), in a camera coordinate system whose origin O is the midpoint between the left and right cameras of the camera 20. The left and right cameras are two cameras of the same specifications, placed in parallel at equivalent positions: the optical axes of the two cameras are parallel, their imaging planes coincide, and the horizontal axes of the imaging planes coincide. The reference symbol b indicates the interval between the left and right cameras, and f indicates the focal distance. Reference symbols [xl, yl] and [xr, yr] are the coordinates of mutually corresponding points (such as end points) of the aligned line segment, in imaging-plane coordinate systems on the left and right images (“Image Plane” in the drawing) whose origins are the intersection points with the optical axes (broken-line arrows in the drawing) of the left and right cameras.
  • The coordinates [X, Y, Z] of the corresponding points in the camera coordinate system then satisfy the following relations:

  • Z = b·f/(xl − xr)

  • X = Z·xr/f

  • Y = Z·yr/f
  • Reference symbols R and t illustrated in FIG. 5 (B) are the external parameters that associate the camera coordinates with the global coordinates: R indicates a rotation matrix, and t indicates a parallel movement (translation) vector.
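  • The conversion of Step ST5 can be sketched directly from these relations. The following is a minimal illustration under the parallel-camera geometry of FIG. 5; b, f, R, and t are assumed known from calibration, and the disparity (xl − xr) is assumed nonzero.

```python
# Sketch of the camera-to-global conversion (Step ST5).
import numpy as np

def triangulate(xl: float, xr: float, yr: float, b: float, f: float) -> np.ndarray:
    """Camera coordinates [X, Y, Z] of a pair of corresponding end points."""
    Z = b * f / (xl - xr)        # depth from the disparity
    X = Z * xr / f
    Y = Z * yr / f
    return np.array([X, Y, Z])

def to_global(p_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply the external parameters: rotation R and parallel movement t."""
    return R @ p_cam + t
```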
  • The controller 150 notifies the ETC system 30 of the vehicle position (coordinates) measured in Step ST5 through the network interface 140. The ETC system 30 transmits and receives wireless signals to and from an antenna of the ETC in-vehicle unit mounted on the windshield, based on the position (coordinates) of the vehicle 40 and the assumed passing speed of the vehicle 40.
  • As described above, according to the present embodiment, the controller 150 aligns, between the left and right images, at least one of the line-segment components forming the closed loop that indicates the boundary between the windshield region and the vehicle body of the vehicle 40. The alignment standard is thereby clarified by determining the feature points to be correlated between the images from the camera 20, and the accuracy of detecting the vehicle 40 can be improved.
  • Specifically, line-segment components of the boundary of the vehicle windshield region are extracted from the left and right camera images, and the representative line-segment components of the left and right images are correlated with each other, whereby three-dimensional measurement of the vehicle 40 can be achieved with high accuracy. When the apparatus is applied to the ETC system 30, the ETC in-vehicle unit is placed in the vicinity of the windshield, and thus the positional relation with, and the distance from, the in-vehicle unit to communicate with can be accurately obtained. Further, information corresponding to the width and the height of the vehicle can be derived from the positional information of the windshield.
  • In addition, according to the present embodiment, the controller 150 extracts a plurality of line-segment components forming a vehicle image from image data obtained by photographing the vehicle, and executes approximation with a polygon for generating a closed loop by using the line-segment components. Thereby, the controller 150 generates a plurality of candidates for a region of a specific part (for example, windshield) of the vehicle 40, performs a plurality of different evaluations for the candidates, and specifies a most probable vehicle specific part region. Thus, since the above specific part can be detected by image processing as long as the specific part of the target vehicle 40 is imaged, the camera 20 can be installed with a high degree of freedom, and a high detection accuracy can be obtained.
  • Second Embodiment
  • FIG. 6 and FIG. 7 are diagrams for explaining a second embodiment. A vehicle detection apparatus 100 and a vehicle detection system according to the present embodiment are the same as those of the first embodiment explained with reference to FIG. 1 and FIG. 2, and thus explanation thereof will be omitted.
  • In the present embodiment, a controller 150 has a configuration that includes a re-executing function (f2-1) in a line-segment extracting function (f2) to extract line-segment components of a windshield region.
  • The re-executing function (f2-1) is a function of classifying extracted line-segment components according to directions, and re-executing the processing of extracting a plurality of line-segment components based on the number and length of the line-segment components for each direction. Specifically, as illustrated in FIG. 6 (A) and FIG. 6 (B), the re-executing function (f2-1) includes changing parameters used in extraction of the line-segment components to increase the number of the line-segment components and reduce the length of the line-segment components, and re-executing the processing of extracting the line-segment components based on the changed parameters.
  • By adding the re-executing function (f2-1) as described above, parameters such as the size of the edge filter are changed in the processing of extracting the line-segment components that constitute the windshield region, whereby the extraction rate of the line-segment components can be improved. The extraction rate can be judged from the extraction result (the number and length of line-segment components for each direction), and can be calculated as the proportion of the actual extraction result to the expected extraction result, with respect to the number and length of line-segment components for each direction.
  • In addition, as illustrated in FIG. 6 (A), when the size of the windshield region is greater than a reference size, the re-executing function (f2-1) reduces the number of line-segment components and increases the length of the line-segment components. On the other hand, as illustrated in FIG. 6 (B), when the size of the windshield region is equal to or less than the reference size, the re-executing function (f2-1) may be configured to include a function of changing the parameters used in extraction of line-segment components to increase the number of the line-segment components and reduce the length of the line-segment components.
  • Specifically, when windshield sizes differ greatly, as between compact cars and large-sized buses, the processing may be changed according to the resolution at which the windshield is photographed: rough approximation may be executed for a windshield region photographed at high resolution, and fine approximation for one photographed at low resolution. Large-sized cars and small cars can be distinguished by extracting the headlight regions of the vehicle 40 from the preprocessed image data by labeling processing or the like, and estimating the size of the vehicle from the left and right headlight positions and the interval between them indicated by the extracted regions.
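  • A hypothetical sketch of this re-execution loop follows; the direction binning, the expected per-direction count, and the retry policy are assumptions for illustration (here only the minimum segment length is adjusted, standing in for parameters such as the edge-filter size).

```python
# Sketch of the re-executing function (f2-1): classify extracted segments
# by direction, judge the extraction rate, and retry with changed parameters.
import numpy as np

def extraction_rate(segments: np.ndarray, expected: float, n_dir: int = 8) -> float:
    """segments: rows of (x1, y1, x2, y2); rate = actual / expected per direction."""
    if len(segments) == 0:
        return 0.0
    angles = np.arctan2(segments[:, 3] - segments[:, 1],
                        segments[:, 2] - segments[:, 0]) % np.pi
    bins = (angles / (np.pi / n_dir)).astype(int) % n_dir
    counts = np.bincount(bins, minlength=n_dir)
    return float(np.mean(np.minimum(counts / expected, 1.0)))

def extract_with_retry(image, extract, min_length=30, expected=4.0,
                       min_rate=0.7, max_tries=3):
    """extract: any segment extractor taking (image, min_length)."""
    for _ in range(max_tries):
        segments = extract(image, min_length)
        if extraction_rate(segments, expected) >= min_rate:
            break
        # Extraction result not good: halve the minimum length so that more,
        # shorter line-segment components are extracted on the next pass.
        min_length = max(5, min_length // 2)
    return segments
```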
  • Next, operation of the vehicle detection apparatus 100 and the vehicle detection system according to the present embodiment will be explained hereinafter, with reference to the flowchart of FIG. 7.
  • First, in the same manner as the first embodiment, a camera 20 transmits photographed image data to the vehicle detection apparatus 100 (ST11). The controller 150 of the vehicle detection apparatus 100 receives image data photographed by the camera 20. The controller 150 executes a vehicle detecting function and detects entry of the vehicle 40 (YES of ST12). Then, when the controller 150 detects entry of the vehicle 40, the controller 150 extracts a plurality of line-segment components indicating a boundary between the windshield region and the vehicle body of the vehicle 40 (ST13). When the controller 150 detects no entry of the vehicle 40, the controller 150 continues monitoring of vehicle entry (NO of ST12).
  • Next, the controller 150 classifies the extracted line-segment components according to directions, and determines whether an extraction result is good or not, based on the number and length of line-segment components for each direction (ST15). The determination is performed based on whether an extraction rate of the line-segment components is higher than a predetermined extraction rate or not.
  • When the extraction result is not good, the controller 150 changes parameters used in extraction of line-segment components, to increase the number of the line-segment components and reduce the length of the line-segment components (NO of ST15, ST14). The controller 150 re-executes the processing of extracting the line-segment components, based on the changed parameters (ST13).
  • On the other hand, when the extraction result is good, the controller 150 executes approximation with a polygon forming a closed loop, and generates candidates for the windshield region, using some of the line-segment components extracted for each of the left and right images, as described above. Then, the controller 150 executes processing of aligning at least one line-segment component between the left and right images, in the same manner as the above (ST16). Thereafter, the controller 150 measures and estimates the position of the vehicle 40 as described above (ST17).
  • As described above, according to the present embodiment, the controller 150 of the vehicle detection apparatus 100 includes a re-executing function (f2-1) of changing parameters used in extraction of line-segment components based on the number and length of line-segment components for each extracted direction, and re-executing the processing of extracting line-segment components. This structure enables improvement in the extraction rate in the case of extracting the line-segment components of the windshield region, in addition to the effect of the first embodiment.
  • The present embodiment can also achieve assured extraction of the line segments of the windshield region with a configuration in which the parameters used in extraction of line-segment components are changed based on the size of the windshield region and the extraction processing is re-executed. The present embodiment may further be configured to use the rectangle detection information of the number plate detected upon vehicle entry, and to perform correlation processing on the information of the four sides of the rectangle, when the main line-segment components cannot be extracted even after windshield-region detection has been attempted a plurality of times with changed processing parameters.
  • Specifically, the re-executing function (f2-1) of the controller 150 may be configured to include a first line-segment component extracting function (f2-1-1) of extracting a plurality of line-segment components forming the windshield region of the vehicle, and a second line-segment component extracting function (f2-1-2) of extracting a plurality of line-segment components forming the rectangular region of the number plate frame of the vehicle. In addition, the polygon approximation function (f3) of the controller 150 can execute the polygon approximation processing using the line-segment components extracted by the second line-segment component extracting function (f2-1-2), when no closed loop can be formed from the line-segment components extracted by the first line-segment component extracting function (f2-1-1).
  • The second line-segment component extracting function (f2-1-2) may include extracting a headlight region of the vehicle by labeling from the preprocessed image of the image data to be processed, and extracting a rectangular shape similar to the number plate from a range estimated from that position. Generally, the number plate is located below the center point between the left and right headlights, under the line connecting them. With such a structure, the controller 150 executes the polygon approximation processing using the line-segment components extracted by the second line-segment component extracting function (f2-1-2) when no closed loop can be formed from the line-segment components extracted by the first line-segment component extracting function (f2-1-1). Thus, even when the line-segment components of the windshield region cannot be reliably extracted, the position of the vehicle can be measured by extracting the line-segment components of the number plate frame.
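  • As an illustration of the second line-segment component extracting function (f2-1-2), the number-plate search range might be estimated from the headlight regions as in the following sketch; the box geometry is an assumption based only on the rule of thumb above.

```python
# Hypothetical estimate of the number-plate search range from the two
# headlight bounding boxes; the proportions used here are assumptions.
def plate_search_range(left_light, right_light):
    """left_light, right_light: (x, y, w, h) bounding boxes of the headlights."""
    lx, ly, lw, lh = left_light
    rx, ry, rw, rh = right_light
    cx = (lx + lw / 2 + rx + rw / 2) / 2      # center between the headlights
    top = max(ly + lh, ry + rh)               # below the line connecting them
    span = rx - (lx + lw)                     # interval between the headlights
    return (int(cx - span / 2), int(top), int(span), int(span / 2))
```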
  • Third Embodiment
  • FIG. 8 is a diagram for explaining a third embodiment. A vehicle detection apparatus 100 and a vehicle detection system according to the present embodiment are the same as those of the first embodiment explained with reference to FIG. 1 and FIG. 2, and explanation thereof will be omitted.
  • As illustrated in FIG. 8, the vehicle detection apparatus 100 of the present embodiment is configured to calculate a vehicle width 410 and a vehicle height 420 of a vehicle 40. In the present embodiment, a controller 150 is configured to include a measuring function (f4) of measuring the vehicle position, which includes at least one of a vehicle height calculating function (f4-1) of calculating the vehicle height 420 and a vehicle width calculating function (f4-2) of calculating the vehicle width 410.
  • As illustrated in FIG. 8, the vehicle height calculating function (f4-1) calculates the height of the upper side of the windshield region 400 of the vehicle 40 as the vehicle height 420, based on the coordinate information between the line-segment components used by the measuring function (f4). The vehicle width calculating function (f4-2) calculates the length of the bottom side of the windshield region 400 of the vehicle 40 as the vehicle width 410, based on the same coordinate information.
  • Specifically, the vehicle height calculating function (f4-1) uses information (coordinate information between line-segment components) of line-segment components associated with each other between left and right cameras for line-segment components forming the windshield region 400. For example, as illustrated in FIG. 3 (B), when the controller 150 performs matching for vertical lines on the left and right side surface parts of the vehicle between the left and right cameras by the vehicle height calculating function (f4-1), the controller 150 calculates coordinate information of end points e1 and e2 of the line-segment component corresponding to the upper side of the windshield. Thus, the controller 150 performs matching for the line-segment component corresponding to the upper side of the windshield based on correlation between the coordinates, and thereby calculates the height 420 from the upper side of the windshield 400. The vehicle height calculating function (f4-1) may include calculating the height of the upper side of the windshield, by changing the configuration of the camera parameters and directly determining the correlation of a difference (e1−e2) between coordinate information items of the end points e1 and e2 of the upper side of the windshield.
  • On the other hand, as illustrated in FIG. 3 (B), when the controller 150 performs matching for vertical lines on the left and right side surface parts of the vehicle between the left and right cameras by the vehicle width calculating function (f4-2), the controller 150 calculates coordinate information of end points e6 and e3 of the line-segment component corresponding to the bottom side of the windshield. Thus, the controller 150 performs matching for the line-segment component corresponding to the bottom side of the windshield based on correlation between the coordinates, and thereby calculates a distance |e6−e3| between the end points of the bottom side of the windshield 400. The vehicle width calculating function (f4-2) may include calculating the distance “(e6−e5)+(e5−e4)+(e4−e3)” between the end points of the bottom side of the windshield, by changing the configuration of the camera parameters and directly determining differences (e6−e5), (e5−e4), and (e4−e3) between coordinate information items of the lower side of the windshield.
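  • In terms of the triangulated coordinates, the two calculations reduce to the following sketch; it assumes the end points e1 to e6 of FIG. 3 (B) have already been converted into global coordinates [X, Y, Z] with Y as the height axis, which is an assumption about the coordinate convention rather than a detail from the embodiment.

```python
# Sketch of the vehicle height (f4-1) and vehicle width (f4-2) calculations.
import numpy as np

def vehicle_height(e1: np.ndarray, e2: np.ndarray) -> float:
    """Height of the windshield's upper side, from its two end points."""
    return float((e1[1] + e2[1]) / 2.0)       # mean Y of the upper-side end points

def vehicle_width(e3: np.ndarray, e6: np.ndarray) -> float:
    """Length |e6 - e3| of the windshield's bottom side."""
    return float(np.linalg.norm(e6 - e3))
```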
  • Next, operation of the vehicle detection system including the vehicle detection apparatus 100 according to the present embodiment will be explained hereinafter. The present embodiment is configured to add processing of calculating at least one of the vehicle height 420 and the vehicle width 410, in the processing of Step ST5 illustrated in FIG. 4.
  • Specifically, the controller 150 executes the steps of Steps ST1 to ST5 illustrated in FIG. 4. In such successive processing, the controller 150 measures the position of a closed loop as the vehicle position, based on coordinate information between the line-segment components aligned in the processing of Step ST4 and photographing position information (camera placement information) indicating the photographing position where the left and right images have been photographed (ST5).
  • In this case, the controller 150 calculates the height of the upper side of the windshield region 400 of the vehicle 40 as the vehicle height 420 by the vehicle height calculating function (f4-1), as illustrated in FIG. 8, based on coordinate information of the line-segment components aligned in the processing of Step ST4. In addition, the controller 150 calculates the length of the bottom side of the windshield region 400 of the vehicle 40 as the vehicle width 410 by the vehicle width calculating function (f4-2), as illustrated in FIG. 8, based on coordinate information of the line-segment components aligned in the processing of Step ST4. In the present embodiment, the controller 150 may execute one of the vehicle height calculating function (f4-1) and the vehicle width calculating function (f4-2).
  • The controller 150 notifies the ETC system 30 of the vehicle position (coordinates) measured by the processing of Step ST5, and the vehicle height 420 and/or vehicle width 410, through the network interface 140. The ETC system 30 transmits and receives wireless signals to and from an antenna of an ETC in-vehicle unit mounted onto the windshield, based on the position (coordinates) of the vehicle 40 and the assumed passing speed of the vehicle 40. In addition, the ETC system 30 can obtain additional information of the vehicle height 420 and/or the vehicle width 410, and calculate, for example, statistics of the size of vehicles passing through the ETC lane.
  • As described above, according to the present embodiment, the height of the upper side of the windshield region 400 of the vehicle 40 can be calculated as the vehicle height 420, based on the coordinate information. In addition, the length of the bottom side of the windshield region 400 of the vehicle 40 can be calculated as the vehicle width 410, based on the coordinate information. Thus, it is possible to estimate additional information of the vehicle height 420 and/or the vehicle width 410, in addition to the effect of the first or second embodiment.
  • According to at least one of the embodiments explained above, at least one line-segment component is aligned among a plurality of line-segment components indicating the boundary between the windshield region and the vehicle body of the vehicle between the left and right images. This structure can determine feature points which are correlated with each other between a plurality of images, clarify the alignment standard, and improve the vehicle detection accuracy.
  • Each of the methods described in the above embodiments can be stored and distributed as a computer-executable program in storage media such as magnetic disks (such as floppy (registered trademark) disks and hard disks), optical disks (such as CD-ROMs and DVDs), magneto-optical disks (MO), and semiconductor memories.
  • The storage media may adopt any storage form, as long as it is a storage medium that is capable of storing a program and readable by a computer.
  • An OS (operating system) operating on a computer, and MW (middleware), such as database management software and network software, may execute part of the above processing to achieve the above embodiments, based on instructions of the program installed in the computer from the storage medium.
  • In addition, the storage medium in each embodiment is not limited to a medium independent of the computer, but includes a storage medium which stores or temporarily stores a downloaded program transmitted through a LAN or the Internet.
  • The storage medium is not limited to one, and the storage medium in the present invention also includes the case where the processing in each of the above embodiments is performed from a plurality of media. The medium structure may be any of the above structures.
  • The computer in each embodiment executes the processing in each of the above embodiments based on a program stored in the storage medium, and may be any of a device such as a personal computer or the like, and a system formed by connecting a plurality of devices through a network.
  • The computer in each embodiment is not limited to a personal computer, but also includes a processing unit and a microcomputer included in an information processing apparatus, and is a general term for apparatuses and devices that are capable of achieving the functions of the present invention by a program.
  • While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (10)

What is claimed is:
1. A vehicle detection apparatus comprising:
a detecting module detecting a vehicle from an image obtained by photographing the vehicle, the image including a left image and a right image that are photographed from left and right directions with respect to the vehicle;
an extracting module extracting a plurality of line-segment components indicating a boundary between a specific region of the vehicle and a vehicle body and included in the image for each of the left and right images; and
a measuring module aligning at least one line-segment component between the left and right images among the line-segment components forming a closed loop for each of the left and right images, and measuring a position of the closed loop as the position of the vehicle based on coordinate information between the aligned line-segment components and photographing position information where the left and right images have been photographed.
2. The vehicle detection apparatus according to claim 1, further comprising:
a polygon approximation module executing approximation with a polygon forming the closed loop using some of the line-segment components extracted from the image, for each of the left and right images.
3. The vehicle detection apparatus according to claim 1, wherein
the measuring module aligns all the line-segment components forming the closed loop.
4. The vehicle detection apparatus according to claim 1, wherein
left and right photographing positions of the photographing module are placed to include a windshield of the vehicle and side surfaces of the vehicle, and arranged along a direction almost parallel with a longitudinal direction of an upper side and a lower side of the windshield.
5. The vehicle detection apparatus according to claim 1, wherein
the extracting module includes:
a re-executing module classifying the extracted line-segment components according to directions, changing parameters used in extraction of the line-segment components based on number and length of the line-segment components for each of the directions to increase the number of the line-segment components and reduce the length of the line-segment components, and re-executing processing of extracting the line-segment components based on the changed parameters.
6. The vehicle detection apparatus according to claim 2, wherein
the extracting module includes:
a first line-segment extracting module extracting a plurality of line-segment components forming the windshield region of the vehicle; and
a second line-segment extracting module extracting a plurality of line-segment components forming a rectangular region of a number plate frame of the vehicle,
and the polygon approximation module executes the polygon approximation using the line-segment components extracted by the second line-segment component extracting module, when the closed loop cannot be formed based on the line-segment components extracted by the first line-segment component extracting module.
7. The vehicle detection apparatus according to claim 5, wherein
the re-executing module changes the parameters used in extraction of the line-segment components to reduce the number of the line-segment components and increase the length of the line-segment components when the windshield region has a size greater than a reference size, and increase the number of the line-segment components and reduce the length of the line-segment components when the windshield region has a size equal to or less than the reference size.
8. The vehicle detection apparatus according to claim 1, wherein
the measuring module includes a vehicle width calculating module calculating a length of a bottom side of the windshield region of the vehicle as a width of the vehicle, based on the coordinate information.
9. The vehicle detection apparatus according to claim 1, wherein
the measuring module includes a vehicle height calculating module calculating a height of an upper side of the windshield region of the vehicle as a height of the vehicle, based on the coordinate information.
10. A vehicle detection method applied to a vehicle detection apparatus detecting a vehicle from an image obtained by photographing the vehicle, the image including a left image and a right image that are photographed from left and right directions with respect to the vehicle, comprising:
extracting a plurality of line-segment components indicating a boundary between a specific region of the vehicle and a vehicle body and included in the image for each of the left and right images; and
aligning at least one line-segment component between the left and right images among the line-segment components forming a closed loop for each of the left and right images, and measuring a position of the closed loop as the position of the vehicle based on coordinate information between the aligned line-segment components and photographing position information where the left and right images have been photographed.
US14/170,284 2011-08-03 2014-01-31 Vehicle detection apparatus and vehicle detection method Expired - Fee Related US9196160B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2011-170307 2011-08-03
JP2011170307A JP5740241B2 (en) 2011-08-03 2011-08-03 Vehicle detection device
PCT/JP2012/069171 WO2013018708A1 (en) 2011-08-03 2012-07-27 Vehicle detection device and vehicle detection method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/069171 Continuation WO2013018708A1 (en) 2011-08-03 2012-07-27 Vehicle detection device and vehicle detection method

Publications (2)

Publication Number Publication Date
US20140147008A1 (en) 2014-05-29
US9196160B2 US9196160B2 (en) 2015-11-24

Family

ID=47629232

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/170,284 Expired - Fee Related US9196160B2 (en) 2011-08-03 2014-01-31 Vehicle detection apparatus and vehicle detection method

Country Status (4)

Country Link
US (1) US9196160B2 (en)
EP (1) EP2741267B1 (en)
JP (1) JP5740241B2 (en)
WO (1) WO2013018708A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6021689B2 (en) * 2013-02-26 2016-11-09 三菱重工メカトロシステムズ株式会社 Vehicle specification measurement processing apparatus, vehicle specification measurement method, and program
CN105164549B (en) 2013-03-15 2019-07-02 优步技术公司 Method, system and the equipment of more sensing stereoscopic visions for robot
JP6136564B2 (en) * 2013-05-23 2017-05-31 日産自動車株式会社 Vehicle display device
US10077007B2 (en) * 2016-03-14 2018-09-18 Uber Technologies, Inc. Sidepod stereo camera system for an autonomous vehicle
US10967862B2 (en) 2017-11-07 2021-04-06 Uatc, Llc Road anomaly detection for autonomous vehicle

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5487116A (en) * 1993-05-25 1996-01-23 Matsushita Electric Industrial Co., Ltd. Vehicle recognition apparatus
US20060165277A1 (en) * 2004-12-03 2006-07-27 Ying Shan Method and apparatus for unsupervised learning of discriminative edge measures for vehicle matching between non-overlapping cameras
US20070127779A1 (en) * 2005-12-07 2007-06-07 Visteon Global Technologies, Inc. System and method for range measurement of a preceding vehicle
US20080219505A1 (en) * 2007-03-07 2008-09-11 Noboru Morimitsu Object Detection System
US7831098B2 (en) * 2006-11-07 2010-11-09 Recognition Robotics System and method for visual searching of objects using lines
US8108119B2 (en) * 2006-04-21 2012-01-31 Sri International Apparatus and method for object detection and tracking and roadway awareness using stereo cameras
US20140146176A1 (en) * 2011-08-02 2014-05-29 Nissan Motor Co., Ltd. Moving body detection device and moving body detection method
US8897497B2 (en) * 2009-05-19 2014-11-25 Toyota Jidosha Kabushiki Kaisha Object detecting device
US8965056B2 (en) * 2010-08-31 2015-02-24 Honda Motor Co., Ltd. Vehicle surroundings monitoring device

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3019901B2 (en) * 1993-02-22 2000-03-15 松下電器産業株式会社 Vehicle specification automatic measurement device
JPH0883390A (en) * 1994-09-13 1996-03-26 Omron Corp Vehicle recognizing device
JP3475700B2 (en) * 1997-02-19 2003-12-08 オムロン株式会社 Object recognition method, object recognition device, and vehicle recognition device
JP3808727B2 (en) * 2001-06-11 2006-08-16 株式会社東芝 Object detection apparatus and method
JP3893981B2 (en) 2002-01-11 2007-03-14 オムロン株式会社 Vehicle recognition method and traffic flow measuring apparatus using this method
JP4333683B2 (en) * 2006-03-28 2009-09-16 住友電気工業株式会社 Windshield range detection device, method and program
JP2007265322A (en) * 2006-03-30 2007-10-11 Matsushita Electric Ind Co Ltd Gate illegal passing recording system and gate illegal passing information collecting system
JP4882577B2 (en) * 2006-07-31 2012-02-22 オムロン株式会社 Object tracking device and control method thereof, object tracking system, object tracking program, and recording medium recording the program
JP5057750B2 (en) * 2006-11-22 2012-10-24 株式会社東芝 Toll collection system
JP2009212810A (en) * 2008-03-04 2009-09-17 Mitsubishi Electric Corp Vehicle imaging apparatus
JP2009290278A (en) * 2008-05-27 2009-12-10 Toshiba Corp Photographing apparatus and photographing method
JP5391749B2 (en) 2009-03-11 2014-01-15 新神戸電機株式会社 Battery diagnostic device
JP2010256040A (en) * 2009-04-21 2010-11-11 Toyota Motor Corp Vehicle detecting device
JP5651414B2 (en) * 2010-09-16 2015-01-14 株式会社東芝 Vehicle detection device

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9175966B2 (en) * 2013-10-15 2015-11-03 Ford Global Technologies, Llc Remote vehicle monitoring
US9558408B2 (en) 2013-10-15 2017-01-31 Ford Global Technologies, Llc Traffic signal prediction
US9633267B2 (en) * 2014-04-04 2017-04-25 Conduent Business Services, Llc Robust windshield detection via landmark localization
US20150286883A1 (en) * 2014-04-04 2015-10-08 Xerox Corporation Robust windshield detection via landmark localization
US20170357881A1 (en) * 2015-01-08 2017-12-14 Sony Semiconductor Solutions Corporation Image processing device, imaging device, and image processing method
US10217034B2 (en) * 2015-01-08 2019-02-26 Sony Semiconductor Solutions Corporation Image processing device, imaging device, and image processing method
US20190147306A1 (en) * 2015-01-08 2019-05-16 Sony Semiconductor Solutions Corporation Image processing device, imaging device, and image processing method
US10885403B2 (en) * 2015-01-08 2021-01-05 Sony Semiconductor Solutions Corporation Image processing device, imaging device, and image processing method
US11244209B2 (en) 2015-01-08 2022-02-08 Sony Semiconductor Solutions Corporation Image processing device, imaging device, and image processing method
CN104574429A (en) * 2015-02-06 2015-04-29 北京明兰网络科技有限公司 Automatic selection method for intersection hot spots in panorama roaming
US10507550B2 (en) * 2016-02-16 2019-12-17 Toyota Shatai Kabushiki Kaisha Evaluation system for work region of vehicle body component and evaluation method for the work region
CN107464214A (en) * 2017-06-16 2017-12-12 理光软件研究所(北京)有限公司 The method for generating solar power station panorama sketch
CN107643049A (en) * 2017-09-26 2018-01-30 沈阳理工大学 Vehicle position detection system and method on weighbridge based on monocular structure light
CN111127541A (en) * 2018-10-12 2020-05-08 杭州海康威视数字技术股份有限公司 Vehicle size determination method and device and storage medium
CN110147088A (en) * 2019-06-06 2019-08-20 珠海广通汽车有限公司 Test system for electric vehicle controller

Also Published As

Publication number Publication date
US9196160B2 (en) 2015-11-24
WO2013018708A1 (en) 2013-02-07
EP2741267B1 (en) 2016-08-17
JP2013037394A (en) 2013-02-21
EP2741267A4 (en) 2015-04-01
JP5740241B2 (en) 2015-06-24
EP2741267A1 (en) 2014-06-11

Similar Documents

Publication Publication Date Title
US9196160B2 (en) Vehicle detection apparatus and vehicle detection method
US10753758B2 (en) Top-down refinement in lane marking navigation
US10984261B2 (en) Systems and methods for curb detection and pedestrian hazard assessment
US11320833B2 (en) Data processing method, apparatus and terminal
US20120069183A1 (en) Vehicle detection apparatus
CN106575473B (en) Method and device for non-contact axle counting of vehicle and axle counting system
EP3196863B1 (en) System and method for aircraft docking guidance and aircraft type identification
US7899211B2 (en) Object detecting system and object detecting method
JP4406381B2 (en) Obstacle detection apparatus and method
JP4246766B2 (en) Method and apparatus for locating and tracking an object from a vehicle
US20210342620A1 (en) Geographic object detection apparatus and geographic object detection method
JPWO2015052896A1 (en) Passenger number measuring device, passenger number measuring method, and passenger number measuring program
CN109141347A (en) Vehicle-mounted vidicon distance measuring method and device, storage medium and electronic equipment
JP6139088B2 (en) Vehicle detection device
CN113316706A (en) Landmark position estimation apparatus and method, and computer-readable recording medium storing computer program programmed to execute the method
JP2004355139A (en) Vehicle recognition system
JP2013134667A (en) Vehicle detection device
JP2003208692A (en) Vehicle recognition method and traffic flow measurement device using the method
CN111539279A (en) Road height limit height detection method, device, equipment and storage medium
Eriksson et al. Lane departure warning and object detection through sensor fusion of cellphone data
Pan et al. Fast road detection based on a dual-stage structure
Kurdziel A monocular color vision system for road intersection detection
JP2000182184A (en) Method and device for detecting antenna on vehicle

Legal Events

Date Code Title Description
AS Assignment

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AOKI, YASUHIRO;SATO, TOSHIO;TAKAHASHI, YUSUKE;REEL/FRAME:032110/0309

Effective date: 20131118

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20231124