WO2010047054A1 - Vehicle periphery monitoring device (車両の周辺監視装置) - Google Patents
Vehicle periphery monitoring device
- Publication number
- WO2010047054A1 (PCT/JP2009/005258; JP 2009005258 W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- bicycle
- vehicle
- value
- pair
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/24—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view in front of the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/30—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles providing vision in the non-visible spectrum, e.g. night or infrared vision
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/143—Sensing or illuminating at different wavelengths
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/20—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
- B60R2300/205—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used using a head-up display
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/304—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using merged images, e.g. merging camera image with stored images
- B60R2300/305—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using merged images, e.g. merging camera image with stored images merging camera image with lines or icons
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/307—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing virtually distinguishing relevant parts of a scene from the background of the scene
- B60R2300/308—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing virtually distinguishing relevant parts of a scene from the background of the scene by overlaying the real scene, e.g. through a head-up display on the windscreen
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/70—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by an event-triggered choice to display a specific image among a selection of captured images
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/8093—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for obstacle warning
Definitions
- the present invention relates to an apparatus for monitoring the periphery of a vehicle in order to recognize a bicycle existing around the vehicle.
- Patent Document 1 describes a method of recognizing an object such as a bicycle existing in front of the host vehicle and specifying the direction of the driver's line of sight. When the direction of the line of sight is not directed toward the host vehicle, an alarm is generated.
- Objects that may come into contact with the vehicle are not limited to pedestrians, and include bicycles. Bicycles often travel along roadways, and their presence can affect the running of the vehicle. Therefore, it is desirable to distinguish the bicycle from other objects and inform the vehicle driver.
- in Patent Document 1, a bicycle is recognized as an object, but no specific method for that recognition is disclosed.
- accordingly, an object of the present invention is to provide a method for determining whether an object in a captured image is a bicycle.
- a vehicle periphery monitoring device mounted on a vehicle for recognizing a bicycle being driven by a driver around the vehicle images the periphery of the vehicle and acquires a captured image having luminance values corresponding to the temperatures of objects.
- An image region having a luminance value representing a temperature higher than a background temperature by a predetermined value or more is extracted from the captured image.
- a pair of first object portions whose separation in the horizontal direction is equal to or less than a predetermined value is detected from the extracted image region. If a second object portion, having a luminance different from that of the first object portions and a length in the vertical direction equal to or greater than a predetermined value, exists between the pair of first object portions, the object including the first object portions and the second object portion is determined to be the bicycle.
- according to this invention, it is possible to determine whether or not an object is a bicycle based on the shape and arrangement of the object captured in the image. That is, when the pair of first object portions is detected and a second object portion exists between them, both feet have been detected together with a portion between the legs that can be regarded as the tire of the bicycle, so the object can be determined to be a bicycle. The driver of the vehicle can thus recognize that a bicycle exists around the vehicle. Moreover, the bicycle is determined based on shape and arrangement alone.
- a diagram for explaining the mounting positions of the cameras according to one embodiment of the present invention, and a flowchart of the processing in the image processing unit according to one embodiment of the present invention.
- a diagram for explaining the principle of bicycle determination according to one embodiment of the present invention, and a flowchart of the bicycle determination process according to one embodiment of the present invention.
- FIG. 1 is a block diagram showing a configuration of a vehicle periphery monitoring device according to an embodiment of the present invention.
- the apparatus is mounted on a vehicle and includes two infrared cameras 1R and 1L capable of detecting far-infrared rays, a yaw rate sensor 5 that detects the yaw rate of the vehicle, and a vehicle speed sensor 6 that detects a travel speed (vehicle speed) VCAR of the vehicle.
- the apparatus further includes a speaker 3 that generates an audible alarm, and a head-up display (hereinafter referred to as HUD) 4 that displays the image obtained by the camera 1R or 1L and causes the driver to recognize an object determined to have a high possibility of collision.
- the cameras 1R and 1L are disposed in the front part of the vehicle 10 at positions symmetrical with respect to the central axis passing through the center of the vehicle width.
- the two cameras 1R and 1L are fixed to the vehicle so that their optical axes are parallel to each other and their height from the road surface is equal.
- the infrared cameras 1R and 1L have a characteristic that the level of the output signal becomes higher (that is, the luminance in the captured image becomes higher) as the temperature of the object is higher.
- the image processing unit 2 includes an A/D conversion circuit that converts input analog signals into digital signals, an image memory that stores digitized image signals, a central processing unit (CPU) that performs various arithmetic processing, a RAM (Random Access Memory) used by the CPU to store data during computation, a ROM (Read Only Memory) that stores the programs executed by the CPU and the data they use (including tables and maps), and an output circuit that supplies driving signals to the speaker 3, display signals to the HUD 4, and the like.
- the output signals of the cameras 1R and 1L and the output signals of the sensors 5 to 7 are converted into digital signals and input to the CPU.
- the HUD 4 is provided so that a screen 4 a is displayed at a front position of the driver on the front window of the vehicle 10. Thus, the driver can visually recognize the screen displayed on the HUD 4.
- FIG. 3 is a flowchart showing a process executed by the image processing unit 2. The process is performed at predetermined time intervals.
- the output signals of the cameras 1R and 1L are received as input, A / D converted, and stored in the image memory.
- the stored image data is a grayscale image, and has a higher luminance value (a luminance value closer to white) as the temperature of the object is higher than the background temperature.
- because the horizontal position of the same object on the image is shifted between the image captured by the camera 1R (the right image) and the image captured by the camera 1L (the left image), the distance to the object can be calculated from this shift (parallax).
- the right image is used as a reference image (alternatively, the left image may be used as a reference image), and the image signal is binarized.
- a process is performed in which a region brighter than the luminance threshold ITH determined in advance by simulation or the like is set to “1” (white) and a dark region is set to “0” (black).
- the threshold value ITH is set to a value that distinguishes an object having a temperature higher than a predetermined value, such as a human being and an animal, from the background (including the road surface).
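As a minimal illustrative sketch (not the patent's implementation — the threshold value used here is a placeholder, whereas the patent derives ITH by simulation), the binarization of step S14 can be expressed as:

```python
def binarize(gray, ith):
    """Binarize a grayscale image: pixels brighter than the threshold ITH
    become 1 (white), all others become 0 (black)."""
    return [[1 if v > ith else 0 for v in row] for row in gray]

# Hypothetical 3x3 grayscale patch; ITH = 100 is an illustrative value only.
gray = [[30, 30, 200],
        [30, 180, 200],
        [30, 30, 30]]
binary = binarize(gray, 100)
```

The warm (bright) pixels survive as white regions, while the road surface and other background pixels become black.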
- step S15 the binarized image data is converted into run-length data.
- FIG. 4 is a diagram for explaining this process.
- whitened areas by binarization are represented by L1 to L8.
- Each of the lines L1 to L8 has a width of one pixel in the y direction.
- the lines L1 to L8 are actually arranged without gaps in the y direction, but are shown separated in the figure for convenience.
- the lines L1 to L8 have lengths of 2 pixels, 2 pixels, 3 pixels, 8 pixels, 7 pixels, 8 pixels, 8 pixels, and 8 pixels, respectively, in the x direction.
- the run-length data represents each of the lines L1 to L8 by the coordinates of its start point (the leftmost pixel of the line) and its length from the start point to the end point (the rightmost pixel of the line), expressed as a number of pixels.
- for example, since the line L3 consists of the three pixels (x3, y5), (x4, y5), and (x5, y5), it is represented by the run-length data (x3, y5, 3).
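The conversion to run-length data can be sketched as follows (illustrative code, not the patent's implementation); each horizontal run of white pixels becomes an (x start, y, length) triple, matching the (x3, y5, 3) example above:

```python
def to_run_length(binary):
    """Encode each horizontal run of white (1) pixels as (x_start, y, length)."""
    runs = []
    for y, row in enumerate(binary):
        x = 0
        while x < len(row):
            if row[x] == 1:
                start = x
                while x < len(row) and row[x] == 1:
                    x += 1
                runs.append((start, y, x - start))
            else:
                x += 1
    return runs
```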
- in steps S16 and S17, as shown in FIG. 4(b), objects are labeled and extracted. That is, among the lines L1 to L8 converted to run-length data, the lines L1 to L3 that overlap in the y direction are regarded as one object 1, the lines L4 to L8 are regarded as one object 2, and object labels 1 and 2 are added to the run-length data.
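One simple way to implement this labeling (a sketch only; the patent does not specify the algorithm) is a union-find over the runs, merging runs on adjacent rows whose x-extents overlap:

```python
def label_runs(runs):
    """Group run-length runs into objects: runs on adjacent rows that overlap
    in x receive the same label (union-find over runs)."""
    parent = list(range(len(runs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    for i, (xi, yi, li) in enumerate(runs):
        for j, (xj, yj, lj) in enumerate(runs):
            if j <= i:
                continue
            # adjacent rows with overlapping x-extent -> same object
            if abs(yi - yj) == 1 and xi < xj + lj and xj < xi + li:
                union(i, j)

    labels, out = {}, []
    for i in range(len(runs)):
        r = find(i)
        if r not in labels:
            labels[r] = len(labels) + 1  # labels 1, 2, ... as in the text
        out.append(labels[r])
    return out
```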
- in step S18, as shown in FIG. 4(c), the centroid G of each extracted object, its area S, and the aspect ratio ASPECT of the rectangle circumscribing the object (shown by the broken line) are calculated.
- the area S is calculated by integrating the lengths of run length data for the same object.
- the coordinates of the center of gravity G are calculated as the x coordinate of a line that bisects the area S in the x direction and the y coordinate of a line that bisects the area S in the y direction.
- the aspect ratio ASPECT is calculated as the ratio Dy / Dx between the length Dy in the y direction and the length Dx in the x direction of the circumscribed square. Note that the position of the center of gravity G may be substituted by the position of the center of gravity of the circumscribed rectangle.
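The feature calculation of step S18 can be sketched directly from the run-length data (illustrative only; the centroid here is the mean pixel position, which stands in for the area-bisecting lines described above):

```python
def object_features(runs):
    """Area S, centroid G, and aspect ratio ASPECT = Dy/Dx of one object,
    given its run-length data [(x_start, y, length), ...]."""
    area = sum(l for _, _, l in runs)                       # S: total run length
    gx = sum(sum(range(x, x + l)) for x, _, l in runs) / area
    gy = sum(y * l for _, y, l in runs) / area              # G: mean pixel position
    xmin = min(x for x, _, _ in runs)
    xmax = max(x + l - 1 for x, _, l in runs)
    ymin = min(y for _, y, _ in runs)
    ymax = max(y for _, y, _ in runs)
    dx = xmax - xmin + 1                                    # circumscribed-rectangle width
    dy = ymax - ymin + 1                                    # circumscribed-rectangle height
    return area, (gx, gy), dy / dx
```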
- in step S19, tracking of the object over time, that is, recognition of the same object, is performed at every predetermined sampling period.
- the sampling period may be the same as the period in which the process of FIG. 3 is performed.
- let k be the discretized time obtained by sampling the analog time t at the sampling period. When an object A is extracted at time k, its identity with an object B extracted at the next sampling time (k + 1) is determined. The identity determination can be performed according to predetermined conditions.
- for example, if the differences between the X and Y coordinates of the centroid positions G of objects A and B on the image are smaller than predetermined allowable values, if the ratio of the area of object B to the area of object A on the image is smaller than a predetermined allowable value, and if the ratio of the aspect ratio of the circumscribed rectangle of object B to that of object A is smaller than a predetermined allowable value, objects A and B can be determined to be the same.
- the position of the object (in this embodiment, the position coordinates of the center of gravity G) is stored in the memory as time-series data together with the assigned label.
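The identity conditions above can be sketched as a predicate (the tolerance values here are illustrative placeholders, not the patent's allowable values):

```python
def same_object(a, b, pos_tol=10.0, area_ratio_tol=1.5, aspect_ratio_tol=1.5):
    """Identity check between object A at time k and object B at time k+1.
    a, b: dicts with centroid (gx, gy), area, and circumscribed-rectangle
    aspect ratio. Tolerances are illustrative, not from the patent."""
    # centroid positions must be close
    if abs(a["gx"] - b["gx"]) > pos_tol or abs(a["gy"] - b["gy"]) > pos_tol:
        return False
    # area ratio must be near 1
    if not (1 / area_ratio_tol <= b["area"] / a["area"] <= area_ratio_tol):
        return False
    # aspect-ratio ratio must be near 1
    if not (1 / aspect_ratio_tol <= b["aspect"] / a["aspect"] <= aspect_ratio_tol):
        return False
    return True
```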
- in step S20, the vehicle speed VCAR detected by the vehicle speed sensor 6 and the yaw rate YR detected by the yaw rate sensor 5 are read, and the turning angle θr of the vehicle 10 (described later) is calculated by integrating the yaw rate YR over time.
- in steps S31 to S33, a process of calculating the distance z from the vehicle 10 to the object is performed in parallel with the processes of steps S19 and S20. Since this calculation requires more time than steps S19 and S20, it may be executed in a longer cycle (for example, about three times the execution cycle of steps S11 to S20).
- in step S31, one of the objects tracked in the binarized image of the reference image (in this example, the right image) is selected, and the image region surrounded by its circumscribed rectangle is set as the search image R1.
- in step S32, an image of the same object as the search image R1 (hereinafter referred to as the corresponding image) is searched for in the left image. Specifically, this can be done by performing a correlation calculation between the search image R1 and the left image according to the following equation (1):
C(a, b) = Σ(n=0 to N−1) Σ(m=0 to M−1) |IL(a + m − M, b + n − N) − IR(m, n)| … (1)
This correlation calculation is performed using the grayscale image, not the binary image.
- here, the search image R1 has M × N pixels, IR(m, n) is the luminance value at coordinates (m, n) within the search image R1, and IL(a + m − M, b + n − N) is the luminance value at coordinates (m, n) within a local region of the same shape as the search image R1 with predetermined coordinates (a, b) in the left image as a base point.
- the position of the corresponding image is specified by varying the base-point coordinates (a, b) and determining the position where the luminance-difference sum C(a, b) is minimized.
- a region to be searched may be set in advance, and a correlation calculation may be performed between the search image R1 and the region.
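The correlation search can be sketched as a sum-of-absolute-differences (SAD) scan, here simplified to a horizontal slide along one row band of the left image (the patent's search region and indexing details are not reproduced):

```python
def sad(search, local):
    """Sum of absolute luminance differences between the search image R1 and a
    same-shaped local region of the left image (correlation value C)."""
    return sum(abs(s - l) for srow, lrow in zip(search, local)
               for s, l in zip(srow, lrow))

def find_match(search, left, y0):
    """Slide a same-shaped window horizontally along the band of the left image
    starting at row y0 and return the x offset minimizing the SAD value."""
    m = len(search[0])
    band = left[y0:y0 + len(search)]
    best_x, best_c = 0, float("inf")
    for x in range(len(left[0]) - m + 1):
        local = [row[x:x + m] for row in band]
        c = sad(search, local)
        if c < best_c:
            best_x, best_c = x, c
    return best_x
```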
- in step S33, the distance dR (in pixels) between the centroid position of the search image R1 and the image center line LCTR (the line that bisects the captured image in the x direction), and the distance dL (in pixels) between the centroid position of the corresponding image and the image center line LCTR, are obtained and applied to equation (2) to calculate the distance z from the vehicle 10 to the object:
z = B × F / ((dR + dL) × p) … (2)
- B is the base line length, that is, the distance in the x direction (horizontal direction) between the center position of the image sensor of the camera 1R and the center position of the image sensor of the camera 1L (that is, the distance between the optical axes of both cameras).
- F denotes the focal length of the lenses 12R and 12L provided in the cameras 1R and 1L
- p denotes the pixel interval in the image sensors 11R and 11L.
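With B, F, and p defined as above, the stereo distance relation can be written directly (the numeric values below are illustrative, not from the patent):

```python
def stereo_distance(dr, dl, base_b, focal_f, pixel_p):
    """Distance z from the total parallax (dR + dL) in pixels, baseline B,
    focal length F, and pixel pitch p:  z = B * F / ((dR + dL) * p)."""
    return base_b * focal_f / ((dr + dl) * pixel_p)

# Illustrative values: B = 0.3 m baseline, F = 8 mm lens, p = 10 um pixels,
# total parallax of 24 pixels -> z = 10 m.
z = stereo_distance(12, 12, 0.3, 0.008, 1e-5)
```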
- in step S21, the coordinates (x, y) of the position of the object in the image (as described above, the position of the centroid G in this embodiment) and the distance z calculated by equation (2) are applied to equation (3) to convert them into real space coordinates (X, Y, Z).
- the real space coordinates (X, Y, Z) are, as shown in FIG. 5A, with the origin O as the midpoint position (position fixed to the vehicle) of the camera 1R and 1L attachment positions.
- it is expressed in a coordinate system in which the X axis is defined in the vehicle width direction of the vehicle 10
- the Y axis is defined in the vehicle height direction
- the Z axis is defined in the traveling direction of the vehicle 10.
- FIG. 5B the coordinates on the image are represented by a coordinate system in which the center of the image is the origin, the horizontal direction is the x axis, and the vertical direction is the y axis.
- here, (xc, yc) are the coordinates obtained by converting the coordinates (x, y) on the right image into coordinates in a virtual image whose center coincides with the origin O of the real space coordinate system, based on the relative positional relationship between the mounting position of the camera 1R and the origin O.
- f is the ratio of the focal length F to the pixel interval p (f = F / p).
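A sketch of this conversion, using the standard pinhole relation consistent with the definitions above (equation (3) itself is not reproduced in the text, so this is an assumption):

```python
def to_real_space(xc, yc, z, f):
    """Convert virtual-image coordinates (xc, yc) and distance z into real
    space coordinates (X, Y, Z), with f = F / p (focal length over pixel
    pitch). X is vehicle-width, Y height, Z travel direction."""
    return xc * z / f, yc * z / f, z
```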
- step S22 a turning angle correction for correcting a positional deviation on the image due to the turning of the vehicle 10 is performed.
- when the vehicle 10 turns, for example to the left, by the turning angle θr, the image obtained by the camera shifts in the x direction (positive direction) by Δx; this shift is corrected.
- the real space coordinates (X, Y, Z) are applied to Equation (4) to calculate the corrected coordinates (Xr, Yr, Zr).
- the calculated real space position data (Xr, Yr, Zr) is stored in the memory in time series in association with each object.
- the corrected coordinates are indicated as (X, Y, Z).
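Equation (4) is not reproduced in the text; one consistent reading of the turning-angle correction is a rotation of the real space coordinates about the Y (height) axis by θr, sketched here as an assumption:

```python
import math

def turn_correct(x, y, z, theta_r):
    """Rotate real-space coordinates (X, Y, Z) about the Y axis by the turning
    angle theta_r to compensate for the positional shift due to turning."""
    xr = math.cos(theta_r) * x - math.sin(theta_r) * z
    yr = y
    zr = math.sin(theta_r) * x + math.cos(theta_r) * z
    return xr, yr, zr
```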
- in step S23, an approximate straight line LMV corresponding to the relative movement vector of the object is obtained from the time-series real space position data. Using a direction vector L = (lx, ly, lz) (|L| = 1) indicating the direction of the approximate straight line LMV, the straight line represented by equation (5) is obtained.
- u is a parameter that takes an arbitrary value.
- Xav, Yav, and Zav are the average value of the X coordinate, the average value of the Y coordinate, and the average value of the Z coordinate of the real space position data string, respectively.
- FIG. 6 is a diagram for explaining the approximate straight line LMV.
- P (0), P (1), P (2),..., P (N-2), P (N-1) represent time-series data after turning angle correction
- the numerical value in parentheses attached to P, indicating the coordinates of each data point, means that the larger the value, the older the data.
- P (0) indicates the latest position coordinates
- P (1) indicates the position coordinates one sample period before
- P (2) indicates the position coordinates two sample periods before.
- the position data corrected using the approximate straight line LMV are denoted Xv(j), Yv(j), Zv(j) and the like in the following description.
- a more detailed method for calculating the approximate straight line LMV is described in Japanese Patent Laid-Open No. 2001-6096.
- a vector from the position coordinates Pv (N ⁇ 1) calculated by Expression (6) toward Pv (0) is calculated as a relative movement vector.
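As a simple surrogate for the 3-D line fit referenced above (the detailed LMV method is in JP 2001-6096, not reproduced here), each coordinate can be fitted against the sample index by least squares and the relative movement vector taken between the fitted end points; points are ordered oldest first in this sketch:

```python
def fit_line(values):
    """Least-squares slope and intercept of values sampled at t = 0..n-1."""
    n = len(values)
    tmean = (n - 1) / 2
    vmean = sum(values) / n
    slope = sum((t - tmean) * (v - vmean) for t, v in enumerate(values)) / \
            sum((t - tmean) ** 2 for t in range(n))
    return slope, vmean - slope * tmean

def movement_vector(points):
    """Vector from the fitted oldest position toward the fitted newest
    position, approximating the relative movement vector along LMV.
    points: [(X, Y, Z), ...] ordered oldest first."""
    n = len(points)
    vec = []
    for axis in range(3):
        slope, icpt = fit_line([p[axis] for p in points])
        vec.append((slope * (n - 1) + icpt) - icpt)  # fitted newest - fitted oldest
    return tuple(vec)
```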
- in step S24, a collision possibility determination process is performed to determine whether or not there is a possibility of collision with the object.
- specifically, the relative speed Vs in the Z direction is calculated by equation (7), and it is determined whether equations (8) and (9) are satisfied:
Vs = (Zv(N − 1) − Zv(0)) / ΔT … (7)
Zv(0) / Vs ≤ T … (8)
|Yv(0)| ≤ H … (9)
- Zv(0) is the latest distance detection value (the subscript v indicates data corrected using the approximate straight line LMV, although the Z coordinate retains the same value as before correction).
- Zv(N − 1) is the distance detection value from a time ΔT earlier.
- T is a margin time, intended to determine the possibility of collision a time T before the predicted collision time; it is set to about 2 to 5 seconds, for example.
- H is a predetermined height that defines a range in the Y direction, that is, the height direction, and is set to about twice the vehicle height of the host vehicle 10, for example.
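Putting equations (7)–(9) together (the reconstructed forms above), the collision-possibility test can be sketched as:

```python
def collision_possible(zv, dt, t_margin, h_limit, yv0):
    """Collision-possibility test from equations (7)-(9):
      Vs = (Zv(N-1) - Zv(0)) / dT   -- closing speed (7)
      Zv(0) / Vs <= T               -- time-to-collision within margin (8)
      |Yv(0)| <= H                  -- height-direction gate (9)
    zv: [Zv(0) latest, ..., Zv(N-1) oldest] distance values."""
    vs = (zv[-1] - zv[0]) / dt
    if vs <= 0:  # object is not approaching
        return False
    return zv[0] / vs <= t_margin and abs(yv0) <= h_limit
```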
- in the approach determination process, it is determined whether the object exists in a predetermined approach determination area, that is, whether the latest position Pv(0) of the object lies in the approach determination area; if it does, it is determined that there is a possibility of collision between the object and the host vehicle 10.
- the approach determination area AR1 is an area corresponding to a range obtained by adding a margin β (for example, about 50 to 100 cm) on both sides of the vehicle width α of the vehicle 10; in other words, it is a region having a width of (α/2 + β) on each side of the central axis of the vehicle 10 in the vehicle width direction, and a region where the possibility of collision is high if the object continues to exist there.
- This intrusion determination processing can be realized by any appropriate technique, and for example, details thereof are described in Japanese Patent Laid-Open No. 2001-6096.
- in step S25, processing for determining the type of the object is performed. If the object is determined to be a target for alerting, the process proceeds to step S26, and an alarm determination process is performed.
- in the alarm determination process, it is determined whether or not an alarm should actually be output to the driver; if the determination result is affirmative, an alarm is output.
- this invention relates to a technique for determining a bicycle existing in front of a vehicle, and the bicycle determination process is performed in step S25. If the object is determined to be a bicycle, it is determined to be a target for alerting as described above. Of course, in addition to the bicycle determination, determination of a pedestrian or the like may also be performed in step S25; if the object is determined to be a pedestrian, it is likewise determined to be a target for alerting. A process for determining whether the object is an artificial structure may also be performed in step S25; if it is determined to be an artificial structure, it may be excluded from the alerting targets.
- the determination process for a pedestrian and an artificial structure can be performed by any appropriate method (for example, described in JP-A-2006-185434).
- FIG. 8 schematically shows (a) a grayscale image acquired by imaging a bicycle being driven by a driver (acquired in step S13 in FIG. 3), and (b) a binary image obtained by binarizing that grayscale image (obtained in step S14 in FIG. 3).
- in (a), the main differences in gradation are represented by different types of hatching; in (b), the hatched areas represent black regions.
- in this embodiment, a bicycle whose driver is riding it toward the vehicle 10 is determined.
- since the driver 101 pedaling the bicycle 103 with both feet 111A and 111B has a high temperature, the driver is imaged as an image region having high luminance values in the grayscale image. Since the background 105 (including the road surface) has a low temperature, it is imaged as an image region having low luminance values.
- the bicycle 103 includes a handle 103a extending in the horizontal direction (x direction), a tire 103b extending in the vertical direction (y direction) between the driver's feet, and a frame 103c between the handle and the tire.
- the temperature of the tire 103b is lower than that of the driver 101 and lower than that of the background 105. Therefore, at least the tire 103b portion of the bicycle 103 is imaged as an image region having a luminance value lower than that of the driver 101 and lower than that of the background 105.
- since the bicycle tire 103b is always in contact with the road surface and the temperature of the road surface can be considered substantially uniform, the road surface appears as an image region having a substantially uniform luminance value.
- the driver 101 is extracted as a white area, and the background 105 is represented by a black area.
- the portion of the tire 103b of the bicycle 103 is represented by a black region because the temperature is lower than that of the background 105.
- the portion of the bicycle 103 other than the tire 103b is represented by a black region, but may be extracted as a white region depending on the value of the threshold ITH used in the binarization process.
- the portions 111A and 111B of both feet of the driver 101 are separated from each other by roughly the width of the driver's torso and are imaged as a pair of vertically long image regions (hereinafter, the pair of image regions is denoted by the same symbols 111A and 111B as the feet). Between the pair of image regions 111A and 111B, an image region representing the tire 103b of the bicycle 103 extends in the vertical direction.
- the present invention has been made based on this finding: if a pair of vertically long image regions 111A and 111B (regions whose vertical length is greater than their horizontal width) is detected, separated from each other in the horizontal direction so as to sandwich between them an image region that can be regarded as the tire 103b, the object is determined to be a bicycle being driven by a driver.
- FIG. 9 shows a flowchart of the bicycle determination process based on the above knowledge, which is performed in step S25 of FIG. This determination process will be described with reference to FIG.
- in step S41, an image region satisfying the following conditions 1) and 2) (in this example, the image region 111A in FIG. 10(a)) is detected from the captured image as one foot (one of the pair of first object portions corresponding to the driver's feet): 1) it has a luminance value higher than the background luminance value by a predetermined value or more; 2) it is a vertically long region.
- the above condition 1) can be realized by setting the threshold value ITH to a value higher than the background luminance value by a predetermined value or more in the above-described binarization process (step S14 in FIG. 3).
- this condition serves to extract a high-temperature object such as the driver 101 separately from the background 105 (including the road surface). Therefore, by performing the binarization process in step S14 using the threshold value ITH set in this way, the extracted image regions (white regions) include an image region corresponding to the driver.
- The luminance value of the background may be set in advance through simulation or the like, or the luminance value with the highest frequency in the luminance-value histogram of the captured grayscale image may be used as the background luminance value, because the background generally occupies the largest area of the captured image.
- The predetermined value may likewise be set in advance through simulation or the like. For example, using the known mode method, the predetermined value can be determined so that the threshold value ITH falls between the peak representing the background and the peak representing high-temperature objects in the luminance-value histogram (this method is described in detail in, for example, Japanese Patent Application Laid-Open No. 2003-216949).
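The background-mode estimate and the threshold ITH described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the `margin` parameter stands in for the "predetermined value" and would in practice be tuned by simulation or the mode method.

```python
import numpy as np

def background_luminance(gray):
    """Estimate the background luminance as the most frequent value in
    the grayscale image's histogram, since the background normally
    occupies the largest area of the captured frame."""
    hist = np.bincount(gray.ravel(), minlength=256)
    return int(np.argmax(hist))

def binarize(gray, margin=40):
    """Binarize with a threshold ITH placed `margin` levels above the
    background luminance (condition 1); `margin` is an assumed tuning
    value, not a figure from the patent."""
    ith = background_luminance(gray) + margin
    return (gray > ith).astype(np.uint8)
```

Pixels warmer than the background by more than the margin survive the binarization, which is exactly the behaviour condition 1) asks of the extracted white region.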
- The above condition 2) is based on the knowledge, described with reference to FIG. 8, that the foot of a person pedaling a bicycle is imaged as a vertically long image region. The condition is satisfied when the length in the vertical direction (y direction) is greater than the width in the horizontal direction (x direction).
- In step S41, an image region that satisfies the following conditions 3) to 5), in addition to conditions 1) and 2), is detected from the captured image. 3) It has a length equal to or less than a predetermined value in the vertical direction. 4) It has a width within a predetermined range in the horizontal direction. 5) It has linearity in the vertical direction.
- The predetermined value of condition 3) is set in advance according to the height above the road surface of a typical pedestrian's foot portion (the part below the waist), for example according to a standard adult physique. Since the driver pedals the bicycle with bent knees, the length of the imaged foot portion is equal to or less than that of a walking person's foot.
- the condition 3) is set based on this finding.
- The predetermined range of condition 4) is likewise set in advance according to the width of a typical pedestrian's foot portion (for example, according to a standard adult physique); it can be set, for example, by adding a predetermined margin to the leg width of a standard adult body shape. With this condition, the driver's foot portion can be extracted with better separation from the body portion above the feet.
- Condition 5) is based on the knowledge that the contour of the foot of a driver who is pedaling a bicycle is substantially perpendicular to the road surface.
- As shown by the image regions 111A and 111B, the image region of a foot portion has a substantially rectangular shape, and the sides of the rectangle parallel to the y-axis represent the contour of the foot. The linearity can therefore be examined by examining these sides, that is, the vertical edges of the rectangle.
- First, the grayscale image is binarized so as to satisfy condition 1).
- Next, the image region of the object extracted by the binarization is converted into run-length data. By examining the run lengths in the width direction (x direction), a region having a width within the predetermined range is detected from the extracted image region. It is then examined how far the region having that width continues in the y direction. If the length of the region in the y direction is larger than its width and not more than the predetermined value, the region is extracted.
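The run-length check just described can be sketched as below. This is an illustrative simplification (one blob per image, thresholds as placeholder parameters), not the patent's implementation.

```python
import numpy as np

def run_length_rows(binary):
    """Run-length encode each row of a binary image as (y, x_start, length)."""
    data = []
    for y, row in enumerate(binary):
        x = 0
        while x < len(row):
            if row[x]:
                x0 = x
                while x < len(row) and row[x]:
                    x += 1
                data.append((y, x0, x - x0))
            else:
                x += 1
    return data

def foot_candidate(binary, w_min, w_max, h_max):
    """Conditions 2)-4) on one blob: keep the runs whose width lies in
    [w_min, w_max] and test whether they span a vertical extent that is
    greater than the width yet no more than h_max rows. The thresholds
    are illustrative placeholders for the text's 'predetermined' values."""
    runs = [r for r in run_length_rows(binary) if w_min <= r[2] <= w_max]
    if not runs:
        return False
    ys = [r[0] for r in runs]
    height = max(ys) - min(ys) + 1
    width = max(r[2] for r in runs)
    return width < height <= h_max
```

A vertically long bar of suitable width passes; the same bar fails once `h_max` is lowered below its height.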
- The inclination with respect to the y-axis of a vertical edge of the extracted region (for example, of the arrangement in the y direction of the rightmost and/or leftmost pixels of the region) is then calculated; if the magnitude of the inclination is equal to or smaller than a predetermined value, the region is judged to have linearity.
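One way to realize this linearity test is to fit a line through the edge pixels and inspect its slope. The sketch below is an assumed implementation; the `slope_max` threshold is illustrative.

```python
import numpy as np

def edge_slope(binary):
    """Slope, relative to the y-axis, of the blob's right contour: for
    each occupied row take the rightmost 1-pixel, then fit a line
    x = a*y + b through those edge points; a == 0 means the contour is
    perfectly vertical."""
    pts = [(y, int(np.nonzero(row)[0].max()))
           for y, row in enumerate(binary) if row.any()]
    ys = np.array([p[0] for p in pts], dtype=float)
    xs = np.array([p[1] for p in pts], dtype=float)
    return float(np.polyfit(ys, xs, 1)[0])

def has_linearity(binary, slope_max=0.2):
    """Condition 5): the vertical edge is judged straight when the
    magnitude of its inclination w.r.t. the y-axis is at most
    `slope_max` (0.2 is an illustrative threshold)."""
    return abs(edge_slope(binary)) <= slope_max
```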
- The comparisons with the predetermined value and the predetermined range in conditions 3) and 4) above may be performed either in the image coordinate system (FIG. 5(b)) or in the real-space coordinate system (FIG. 5(a)).
- In this embodiment, the former is adopted. The predetermined value and the predetermined range used in determinations 3) and 4) are therefore obtained by converting values defined in the real-space coordinate system into the image coordinate system according to the distance value z of the object's image region (the conversion can be based on the above-described equation (3)). If the vehicle is turning, the conversion by equation (3) may be performed after a turning-angle correction based on equation (4).
- In the latter case, values corresponding to the height of a pedestrian's foot above the road surface and to the width of the foot are set as the predetermined value and the predetermined range in determinations 3) and 4), respectively. The length in the y direction and the width in the x direction of the object's image region (both expressed in pixels) are converted, based on equation (3), into a length in the Y direction and a width in the X direction of the real-space coordinate system, which are then compared with the predetermined value and the predetermined range. The turning-angle correction based on equation (4) may also be applied in this case.
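Since equation (3) is not reproduced in this excerpt, the conversion can be illustrated with the standard pinhole-camera relation between a real-space length at distance z and its pixel extent. Everything here (the function, `focal_px`) is an assumption for illustration only.

```python
def real_to_pixels(length_m, z_m, focal_px):
    """Pinhole-model conversion of a real-space length (metres) seen at
    distance z into an image-plane length in pixels, in the spirit of
    the equation (3) conversion the text refers to. `focal_px`, the
    focal length in pixel units, is an assumed camera parameter."""
    return focal_px * length_m / z_m

# e.g. a 0.6 m foot-height limit at z = 10 m with a 700 px focal length
h_max_px = real_to_pixels(0.6, 10.0, 700.0)  # pixel threshold for condition 3)
```

The same real-space threshold thus maps to a different pixel threshold for each object, depending on its measured distance z.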
- In this way, one of the driver's feet (one of the first object portions) is detected as the image region 111A separately from the background, the bicycle portion, and the like. If one of the first object portions is detected, the process proceeds to step S42.
- In step S42, it is determined whether an image region satisfying the above conditions 1) to 5) is detected within a predetermined range in the horizontal direction (x direction) from the image region 111A detected in step S41.
- This is a process for detecting an image region 111B corresponding to the other leg of the driver (the other of the first object portions) as shown in FIG. 10 (b).
- The predetermined range in the horizontal direction is set in advance in accordance with the width of a typical pedestrian's torso (for example, according to a standard adult physique). That is, since one leg and the other are considered to be separated by a distance corresponding to the width of the torso, the image region 111B is detected using this fact.
- The image region 111B can be detected by any method. For example, as shown in FIG. 10(b), in the binary image satisfying condition 1), a predetermined range in the x direction (both left and right) from the x coordinate value xa of the right end of the image region 111A (for example, the average x coordinate of the pixels constituting the right side of the image region 111A) is examined, and it is determined whether an image region satisfying the above conditions 2) to 5) is detected by the same method as in step S41.
- As a condition for detecting the image region 111B, in addition to the condition that it exists within the predetermined horizontal range from the image region 111A, a condition that the image region 111B overlaps the image region 111A in the vertical direction (y direction) may be added, since both feet should be detected in substantially the same vertical range.
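The search window and the optional vertical-overlap condition can be sketched as below; `torso_px` is a hypothetical torso-width allowance in pixels, not a value from the patent.

```python
def second_foot_window(xa, torso_px, img_width):
    """Horizontal band, clamped to the image, in which step S42 searches
    for the other foot: up to `torso_px` pixels on either side of the
    first region's right edge xa."""
    return max(0, xa - torso_px), min(img_width, xa + torso_px)

def overlaps_vertically(y0a, y1a, y0b, y1b):
    """Optional extra condition from the text: the two foot regions
    should share some vertical extent, i.e. their y-intervals intersect."""
    return max(y0a, y0b) <= min(y1a, y1b)
```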
- If the other foot portion (the other of the first object portions) is detected as the image region 111B, the process proceeds to step S43.
- In step S43, as shown in FIG. 10(c), it is determined whether an image region R having a luminance value different from that of the pair of first object portions (image regions 111A and 111B) detected in steps S41 and S42, and having a length equal to or greater than a predetermined value in the vertical direction, exists between them.
- This is a process for examining whether a portion that can be regarded as the bicycle tire 103b (the second object portion in FIG. 8) exists between the driver's feet. If the image region R exists, it can be determined that such a portion exists, and the object including the image region R and the pair of image regions 111A and 111B can therefore be regarded as a bicycle.
- In this embodiment, an image region R having a luminance value lower than that of the background 105 is detected in order to increase detection accuracy. This is based on the knowledge, described with reference to FIG. 8, that the temperature of the tire 103b is lower than that of the background 105. Under this condition, the image region R corresponding to the tire 103b can be extracted with better discrimination from the background.
- This process is performed on grayscale images. An example of specific processing will be described.
- First, the luminance values of the region between the pair of image regions 111A and 111B are examined, and an image region having a luminance value lower than that of the background 105 is extracted.
- The length in the y direction of the image region thus extracted is then examined. If this length is equal to or greater than a predetermined value, the image region R representing the tire portion is deemed detected.
- the predetermined value can be set according to the height of the top of the tire of the bicycle from the road surface, for example, assuming the size of a general bicycle.
- In this way, a region R that can be regarded as the bicycle tire 103b is extracted, as shown by the thick frame in FIG. 10(c).
- In this embodiment, steps S44 and S45 are further provided to increase the accuracy of the bicycle determination; these are determinations that take the road surface 107 into consideration.
- In step S44, it is determined whether the lowest point in the vertical direction of the region R is lower than the lowest points of the image regions 111A and 111B. This is based on the fact that, if the region R represents a tire, the tire is in contact with the road surface, so the bottom of the region R should be lower than the bottoms of the image regions 111A and 111B of both feet.
- Here, the lowest point means the lowest position in the vertical direction of the captured image. In this embodiment, the xy coordinate system is set so that the y coordinate value increases downward in the captured image, so the lowest point corresponds to the point having the largest y coordinate value.
- From the bottom pixels of the region R, the largest y coordinate value is selected (as shown in FIG. 10(d), this y coordinate value is denoted yr).
- Similarly, the largest y coordinate value is selected from the bottom pixels of the image region 111A (denoted ya), and from the bottom pixels of 111B (denoted yb).
- In step S44, it is determined whether yr > ya and yr > yb. If both hold, the determination in step S44 is Yes and the process proceeds to step S45.
- Alternatively, instead of the maximum, the y coordinate values of the pixels constituting each bottom edge may be averaged and the averages compared.
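The step S44 comparison can be sketched directly from the yr > ya and yr > yb condition above (a minimal illustration over binary region masks):

```python
import numpy as np

def lowest_y(mask):
    """Largest y coordinate of any pixel in the region; image y grows
    downward, so this is the region's lowest point."""
    return int(np.nonzero(mask)[0].max())

def tire_touches_ground(region_r, region_a, region_b):
    """Step S44: yr > ya and yr > yb, i.e. the tire's lowest point lies
    below the lowest points of both foot regions."""
    yr = lowest_y(region_r)
    return yr > lowest_y(region_a) and yr > lowest_y(region_b)
```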
- In step S45, it is determined whether an image region having an area equal to or greater than a predetermined value and a uniform luminance value exists in the vicinity of the lowest point of the region R. This is based on the knowledge that, if the region R represents a tire, an image region S of the road surface with a relatively uniform luminance value should extend in the vicinity of the tire's lowest point.
- Specifically, the luminance values of the region between the bottom of the region R (y coordinate value yr) and a position a predetermined value h1 below that bottom are examined.
- The predetermined value h1 can be set according to a value of about several centimeters in the real-space coordinate system. The range in the x direction over which luminance values are examined can be set arbitrarily. If pixels having luminance values falling within a predetermined range are detected over a predetermined area in the examined region, it is determined that a region S having a uniform luminance value, that is, an image region S representing the road surface, exists.
- the predetermined range is set to a size such that the luminance value can be regarded as substantially uniform.
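The uniformity test of step S45 can be sketched as below. The `spread` and `min_pixels` parameters and the use of the strip's median are illustrative assumptions, not values from the patent.

```python
import numpy as np

def uniform_road_below(gray, yr, x0, x1, h1, spread=10, min_pixels=20):
    """Examine the strip of height h1 just below the tire's lowest
    point yr and call it road surface if enough pixels sit within
    `spread` luminance levels of the strip's median."""
    strip = gray[yr + 1: yr + 1 + h1, x0:x1]
    if strip.size == 0:
        return False
    med = np.median(strip)
    uniform = np.abs(strip.astype(int) - med) <= spread
    return int(uniform.sum()) >= min_pixels
```

A flat, evenly lit patch below the tire passes; a strip of strongly alternating luminance, or one that falls outside the image, does not.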
- The predetermined value h1 may be set so that, for example, the lower end of the examined area does not extend beyond the captured image. Further, since the tire temperature is lower than the background temperature, a condition that the region R has a luminance value lower than that of the image region S may be added to the determination.
- If all of steps S41 to S45 are Yes, it is determined that the object including the pair of first object portions (image regions 111A and 111B) and the second object portion (image region R) is a bicycle, and it is designated as a target for alerting (S46). On the other hand, if any of steps S41 to S45 is No, it is determined that the object is not a bicycle (S47). In that case, whether the object is a pedestrian may be determined by another method.
- In the alerting determination, the threshold value GTH can be determined as shown in expression (10). This value corresponds to the condition that the vehicle 10 stops within a travel distance equal to or less than the distance Zv(0) when the brake acceleration Gs is maintained as it is.
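Expression (10) itself is not reproduced in this excerpt, but the stated condition (constant deceleration Gs stops the vehicle within Zv(0)) implies, by the standard stopping-distance relation, a threshold of the form below. This is a hedged reconstruction, not the patent's formula.

```python
def gth(vs, zv0):
    """Reconstructed form of expression (10): under constant
    deceleration G the stopping distance is vs**2 / (2*G); requiring
    that distance to be at most zv0 gives the threshold
    GTH = vs**2 / (2*zv0) that Gs must meet or exceed."""
    return vs * vs / (2.0 * zv0)

def stops_within(gs, vs, zv0):
    """The condition described in the text: braking at Gs stops the
    vehicle within a travel distance no greater than Zv(0)."""
    return gs >= gth(vs, zv0)
```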
- For the alarm output, a voice alarm is issued through the speaker 3 and, as shown in FIG. 11, an image obtained by the camera 1R is displayed on the screen 4a by, for example, the HUD 4, with the bicycle highlighted.
- The highlighting may be performed by any method; for example, the bicycle can be highlighted by surrounding it with a colored frame.
- The alarm output may use either the voice alarm or the image display alone. In this way, the driver can more reliably recognize a moving object with a high possibility of collision.
- As described above, according to this embodiment, the bicycle determination is based on whether the features of the driver's feet, the shape of the bicycle, and the positional relationship between the two can be extracted from a single captured image.
- It is therefore unnecessary to track images captured in time series over a certain period, so the bicycle determination can be performed while reducing the time required for image processing and the calculation load.
- Nevertheless, tracking may be used in combination with the bicycle determination described in the above embodiment.
- For example, the above bicycle determination is performed for each captured image, and only if the object is determined to be a bicycle in a predetermined number of consecutive captured images is the final determination made that the object is a bicycle, whereupon the above alerting process may be performed.
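This consecutive-frame confirmation can be sketched as a small stateful filter; `n = 3` is an assumed tuning value, not one from the patent.

```python
class BicycleConfirmer:
    """Temporal filtering described in the text: accept the bicycle
    determination only after it holds in `n` consecutive captured
    images."""
    def __init__(self, n=3):
        self.n = n
        self.streak = 0

    def update(self, frame_says_bicycle):
        """Feed one per-frame decision; returns True once the streak of
        consecutive bicycle frames reaches n."""
        self.streak = self.streak + 1 if frame_says_bicycle else 0
        return self.streak >= self.n
```

A single missed frame resets the streak, so transient false positives in one image do not trigger the alert.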
- In the above embodiment, a relative movement vector is calculated in steps S23 and S24 of FIG. 3 to determine an object with a possibility of collision, and the object determination process of step S25 is performed on that object.
- Alternatively, the bicycle determination process of FIG. 9 may be performed following the run-length data conversion of the object extraction process in step S17 of FIG. 3.
- Alternatively, the processing of steps S41 and S42 may be executed by directly extracting from the grayscale image an image region satisfying condition 1) of step S41 in FIG. 9. In this case, the bicycle determination process of FIG. 9 may be performed following the acquisition of the grayscale image in step S13 of FIG. 3.
- In the above embodiment, the bicycle driver is assumed to be riding so as to face the vehicle 10, but the determination also covers a bicycle on which the driver is mounted while forward progress is stopped.
- When the bicycle is stopped, at least one foot is considered to be in contact with the road surface in the same manner as the tire, but even in this case both feet can be extracted so as to satisfy the conditions described for step S41.
- Since the tire portion of the bicycle is imaged so as to extend in the vertical direction between both feet, an image region R having a length equal to or greater than the predetermined value in the vertical direction can be detected.
- The present invention is also applicable to a bicycle whose driver is moving forward or stopped with his back to the vehicle 10, because the rear wheel (tire) is then imaged between both feet.
- In the above embodiment, an infrared camera is used as the imaging means, but a television camera capable of detecting only normal visible light may also be used (see, for example, JP-A-2-26490). In that case, the object extraction process can be further simplified, and the calculation load can be reduced.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Mechanical Engineering (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
- Closed-Circuit Television Systems (AREA)
Description
(X-Xav)/lx=(Y-Yav)/ly=(Z-Zav)/lz (5a)
Vs = (Zv(N-1) - Zv(0)) / ΔT (7)
Zv(0)/Vs≦T (8)
|Yv(0)|≦H (9)
1) It has a luminance value higher than the background luminance value by a predetermined value or more.
2) It is a vertically long region.
3) It has a length equal to or less than a predetermined value in the vertical direction.
4) It has a width within a predetermined range in the horizontal direction.
5) It has linearity in the vertical direction.
2 Image processing unit
3 Speaker
4 Head-up display
Claims (4)
- A vehicle periphery monitoring device mounted on a vehicle for recognizing a bicycle being driven by a driver in the vicinity of the vehicle, comprising:
imaging means for imaging the vehicle's surroundings and acquiring a captured image having luminance values corresponding to the temperatures of objects;
extraction means for extracting, from the captured image, an image region having luminance values representing a temperature higher than the background temperature by a predetermined value or more; and
bicycle determination means for determining that an object is the bicycle if a pair of first object portions, each having a vertical length greater than its horizontal width and separated from each other in the horizontal direction by a distance equal to or less than a predetermined value, is detected from the extracted image region, and a second object portion having a luminance different from that of the first object portions and a vertical length equal to or greater than a predetermined value exists between the pair of first object portions, the object including the pair of first object portions and the second object portion.
- The vehicle periphery monitoring device according to claim 1, wherein the bicycle determination means determines that the object including the first object portions and the second object portion is the bicycle if the second object portion has a luminance value representing a temperature lower than the background temperature.
- The vehicle periphery monitoring device according to claim 1 or 2, wherein the bicycle determination means further determines that the object is the bicycle if the position of the lowest point of the second object portion is detected to be lower in the vertical direction than the positions of the lowest points of the pair of first object portions.
- The vehicle periphery monitoring device according to any one of claims 1 to 3, wherein the bicycle determination means further determines that the object is the bicycle if an image region having an area equal to or greater than a predetermined value and a uniform luminance value is detected in the vicinity of the lowest point of the second object portion.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200980141757.XA CN102187377B (zh) | 2008-10-20 | 2009-10-08 | 车辆的周围监测装置 |
US13/123,997 US8648912B2 (en) | 2008-10-20 | 2009-10-08 | Vehicle periphery monitoring apparatus |
EP09821754A EP2343695A4 (en) | 2008-10-20 | 2009-10-08 | DEVICE FOR MONITORING THE VEHICLE ENVIRONMENT |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008269636A JP4410292B1 (ja) | 2008-10-20 | 2008-10-20 | 車両の周辺監視装置 |
JP2008-269636 | 2008-10-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010047054A1 true WO2010047054A1 (ja) | 2010-04-29 |
Family
ID=41739244
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/005258 WO2010047054A1 (ja) | 2008-10-20 | 2009-10-08 | 車両の周辺監視装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US8648912B2 (ja) |
EP (1) | EP2343695A4 (ja) |
JP (1) | JP4410292B1 (ja) |
CN (1) | CN102187377B (ja) |
WO (1) | WO2010047054A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011027907A1 (en) * | 2009-09-03 | 2011-03-10 | Honda Motor Co., Ltd. | Vehicle vicinity monitoring apparatus |
JP2018048921A (ja) * | 2016-09-22 | 2018-03-29 | 株式会社デンソー | 物体検知装置及び物体検知方法 |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5548667B2 (ja) * | 2011-11-24 | 2014-07-16 | 富士重工業株式会社 | 車外環境認識装置 |
US20130177237A1 (en) * | 2012-01-09 | 2013-07-11 | Gregory Gerhard SCHAMP | Stereo-vision object detection system and method |
US8935082B2 (en) * | 2012-01-24 | 2015-01-13 | Xerox Corporation | Vehicle speed determination via infrared imaging |
JP5774770B2 (ja) * | 2012-03-12 | 2015-09-09 | 本田技研工業株式会社 | 車両周辺監視装置 |
JP5648655B2 (ja) * | 2012-04-27 | 2015-01-07 | 株式会社デンソー | 対象物識別装置 |
JP5809751B2 (ja) * | 2012-06-26 | 2015-11-11 | 本田技研工業株式会社 | 対象物認識装置 |
JP5700263B2 (ja) * | 2013-01-22 | 2015-04-15 | 株式会社デンソー | 衝突傷害予測システム |
JP6278688B2 (ja) * | 2013-12-20 | 2018-02-14 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法及びプログラム |
TWI561418B (en) * | 2014-03-19 | 2016-12-11 | Altek Autotronics Corp | Obstacle detection device |
CN107255470B (zh) * | 2014-03-19 | 2020-01-10 | 能晶科技股份有限公司 | 障碍物检测装置 |
DE102014017910B4 (de) * | 2014-12-04 | 2023-02-16 | Audi Ag | Verfahren zur Auswertung von Umfelddaten eines Kraftfahrzeugs und Kraftfahrzeug |
US9168869B1 (en) * | 2014-12-29 | 2015-10-27 | Sami Yaseen Kamal | Vehicle with a multi-function auxiliary control system and heads-up display |
JP6543050B2 (ja) * | 2015-03-06 | 2019-07-10 | 株式会社デンソーテン | 障害物検出装置および障害物検出方法 |
EP3136288A1 (en) * | 2015-08-28 | 2017-03-01 | Autoliv Development AB | Vision system and method for a motor vehicle |
JP6443318B2 (ja) * | 2015-12-17 | 2018-12-26 | 株式会社デンソー | 物体検出装置 |
CN107924465B (zh) * | 2016-03-18 | 2021-09-10 | Jvc 建伍株式会社 | 物体识别装置、物体识别方法以及存储介质 |
US10643391B2 (en) * | 2016-09-23 | 2020-05-05 | Apple Inc. | Immersive virtual display |
KR101899396B1 (ko) * | 2016-11-24 | 2018-09-18 | 현대자동차주식회사 | 차량 및 그 제어방법 |
JP7071102B2 (ja) * | 2017-11-28 | 2022-05-18 | 株式会社Subaru | 車外環境認識装置 |
CN111942426B (zh) * | 2020-08-14 | 2023-04-25 | 宝武集团鄂城钢铁有限公司 | 一种基于热红外的铁水运输机车车轴测温方法 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0226490A (ja) | 1988-07-15 | 1990-01-29 | Nec Corp | 加入者線信号変換器 |
JP2000097963A (ja) * | 1998-09-21 | 2000-04-07 | Mitsubishi Electric Corp | 移動体の識別装置 |
JP2001006096A (ja) | 1999-06-23 | 2001-01-12 | Honda Motor Co Ltd | 車両の周辺監視装置 |
JP2003216949A (ja) | 2002-01-18 | 2003-07-31 | Honda Motor Co Ltd | 赤外線画像処理装置 |
JP2003226211A (ja) * | 2002-02-04 | 2003-08-12 | Nissan Motor Co Ltd | 車両用保護装置 |
JP2005165422A (ja) | 2003-11-28 | 2005-06-23 | Denso Corp | 衝突可能性判定装置 |
JP2006185434A (ja) | 2004-11-30 | 2006-07-13 | Honda Motor Co Ltd | 車両周辺監視装置 |
JP2008046947A (ja) * | 2006-08-18 | 2008-02-28 | Alpine Electronics Inc | 周辺監視システム |
JP2008090748A (ja) * | 2006-10-04 | 2008-04-17 | Toyota Motor Corp | 車輌用警報装置 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4392886B2 (ja) * | 1999-01-22 | 2010-01-06 | キヤノン株式会社 | 画像抽出方法及び装置 |
JP3987048B2 (ja) * | 2003-03-20 | 2007-10-03 | 本田技研工業株式会社 | 車両周辺監視装置 |
JP3922245B2 (ja) * | 2003-11-20 | 2007-05-30 | 日産自動車株式会社 | 車両周辺監視装置および方法 |
JP2007187618A (ja) * | 2006-01-16 | 2007-07-26 | Omron Corp | 物体識別装置 |
US7786898B2 (en) * | 2006-05-31 | 2010-08-31 | Mobileye Technologies Ltd. | Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications |
-
2008
- 2008-10-20 JP JP2008269636A patent/JP4410292B1/ja not_active Expired - Fee Related
-
2009
- 2009-10-08 CN CN200980141757.XA patent/CN102187377B/zh not_active Expired - Fee Related
- 2009-10-08 US US13/123,997 patent/US8648912B2/en not_active Expired - Fee Related
- 2009-10-08 WO PCT/JP2009/005258 patent/WO2010047054A1/ja active Application Filing
- 2009-10-08 EP EP09821754A patent/EP2343695A4/en not_active Withdrawn
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0226490A (ja) | 1988-07-15 | 1990-01-29 | Nec Corp | 加入者線信号変換器 |
JP2000097963A (ja) * | 1998-09-21 | 2000-04-07 | Mitsubishi Electric Corp | 移動体の識別装置 |
JP2001006096A (ja) | 1999-06-23 | 2001-01-12 | Honda Motor Co Ltd | 車両の周辺監視装置 |
JP2003216949A (ja) | 2002-01-18 | 2003-07-31 | Honda Motor Co Ltd | 赤外線画像処理装置 |
JP2003226211A (ja) * | 2002-02-04 | 2003-08-12 | Nissan Motor Co Ltd | 車両用保護装置 |
JP2005165422A (ja) | 2003-11-28 | 2005-06-23 | Denso Corp | 衝突可能性判定装置 |
JP2006185434A (ja) | 2004-11-30 | 2006-07-13 | Honda Motor Co Ltd | 車両周辺監視装置 |
JP2008046947A (ja) * | 2006-08-18 | 2008-02-28 | Alpine Electronics Inc | 周辺監視システム |
JP2008090748A (ja) * | 2006-10-04 | 2008-04-17 | Toyota Motor Corp | 車輌用警報装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2343695A4 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011027907A1 (en) * | 2009-09-03 | 2011-03-10 | Honda Motor Co., Ltd. | Vehicle vicinity monitoring apparatus |
JP2013504098A (ja) * | 2009-09-03 | 2013-02-04 | 本田技研工業株式会社 | 車両周辺監視装置 |
JP2018048921A (ja) * | 2016-09-22 | 2018-03-29 | 株式会社デンソー | 物体検知装置及び物体検知方法 |
Also Published As
Publication number | Publication date |
---|---|
CN102187377A (zh) | 2011-09-14 |
EP2343695A4 (en) | 2012-04-18 |
EP2343695A1 (en) | 2011-07-13 |
US8648912B2 (en) | 2014-02-11 |
US20110234804A1 (en) | 2011-09-29 |
JP2010097541A (ja) | 2010-04-30 |
JP4410292B1 (ja) | 2010-02-03 |
CN102187377B (zh) | 2014-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4410292B1 (ja) | 車両の周辺監視装置 | |
JP3764086B2 (ja) | 車両用情報提供装置 | |
US7233233B2 (en) | Vehicle surroundings monitoring apparatus | |
JP4173901B2 (ja) | 車両周辺監視装置 | |
JP5706874B2 (ja) | 車両の周辺監視装置 | |
JP4203512B2 (ja) | 車両周辺監視装置 | |
JP4173902B2 (ja) | 車両周辺監視装置 | |
US7969466B2 (en) | Vehicle surroundings monitoring apparatus | |
JP4528283B2 (ja) | 車両周辺監視装置 | |
US20060126897A1 (en) | Vehicle surroundings monitoring apparatus | |
JP4128562B2 (ja) | 車両周辺監視装置 | |
US20060115122A1 (en) | Vehicle surroundings monitoring apparatus | |
JP4813304B2 (ja) | 車両周辺監視装置 | |
JP4425852B2 (ja) | 車両周辺監視装置 | |
JP3970876B2 (ja) | 車両周辺監視装置 | |
US9030560B2 (en) | Apparatus for monitoring surroundings of a vehicle | |
JP4943403B2 (ja) | 車両の周辺監視装置 | |
JP3898157B2 (ja) | 赤外線画像認識装置 | |
JP4627305B2 (ja) | 車両周辺監視装置、車両周辺監視方法、及び車両周辺監視用プログラム | |
JP4372746B2 (ja) | 車両周辺監視装置 | |
JP4358183B2 (ja) | 車両周辺監視装置 | |
JP4283266B2 (ja) | 車両周辺監視装置 | |
JP4472623B2 (ja) | 車両周辺監視装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200980141757.X Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09821754 Country of ref document: EP Kind code of ref document: A1 |
|
REEP | Request for entry into the european phase |
Ref document number: 2009821754 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2009821754 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13123997 Country of ref document: US |