WO2010029592A1 - Vehicle periphery monitoring apparatus - Google Patents

Vehicle periphery monitoring apparatus

Info

Publication number
WO2010029592A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional object
shadow
image
lamp
vehicle
Prior art date
Application number
PCT/JP2008/002484
Other languages
French (fr)
Japanese (ja)
Inventor
原田雅之
都丸義広
三木洋平
藤本仁志
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation (三菱電機株式会社)
Priority to PCT/JP2008/002484
Priority to JP2010528535A (JP5295254B2)
Publication of WO2010029592A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/165 Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/103 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used, using camera systems provided with an artificial illumination device, e.g. IR light source
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/8093 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement, for obstacle warning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10152 Varying illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Definitions

  • the present invention relates to a vehicle surrounding monitoring device that detects a three-dimensional object existing around a vehicle.
  • a conventional vehicle periphery monitoring apparatus is provided with a camera that captures the area that becomes a blind spot from the driver's seat of the vehicle, so that the user can visually check the blind-spot area by viewing the image captured by the camera. It is then important that the user can determine whether an object appearing in the image will contact the vehicle; however, the height of the object is hard to grasp from a single image alone, so this determination is difficult.
  • as shown in FIG. 16(b), when any one of objects A, B, and C having different heights exists around the vehicle and is photographed by a camera attached to the vehicle, every one of them produces the same image, shown in FIG. 16(a).
  • if the actual object is C, the object C lies on the road surface and is unlikely to contact the vehicle; but if the object is A, the vehicle will hit it as it moves backward.
  • in driving, it is important to determine whether the object D on the image will contact the vehicle, but since the objects A, B, and C are all displayed as an object D of the same length, this determination is difficult.
  • Patent Document 1 discloses a method for detecting that an object on an image is a three-dimensional object by comparing the moving distance of the vehicle with the moving distance of the object on images captured by a camera.
  • since the conventional vehicle periphery monitoring apparatus is configured as described above, the moving distance of the vehicle is needed to detect that an object on the image is a three-dimensional object; while the vehicle is stopped, it cannot detect that an object on the image is a three-dimensional object.
  • the present invention has been made to solve the above-described problems, and an object of the present invention is to obtain a vehicle periphery monitoring device that can detect a three-dimensional object even when the vehicle is stopped.
  • the vehicle periphery monitoring apparatus according to the present invention includes image acquisition means for acquiring a first image taken by a camera while a lamp is turned off and a second image taken by the camera while the lamp is turned on, and shadow detection means for detecting, by comparing the first image and the second image acquired by the image acquisition means, the shadow of a three-dimensional object cast by the lamp and projected on the second image.
  • three-dimensional object detection means then detects the three-dimensional object using the shadow of the three-dimensional object detected by the shadow detection means.
  • FIG. 1 is a block diagram showing a vehicle surrounding monitoring apparatus according to Embodiment 1 of the present invention.
  • FIG. 2 is an explanatory diagram showing a situation in which the surroundings of the vehicle are monitored by the vehicle surrounding monitoring apparatus according to Embodiment 1 of the present invention.
  • the camera 2 is installed, for example, at the rear portion of the vehicle 1 and photographs the surroundings of the vehicle 1.
  • the lamp 3 is installed, for example, at the rear of the vehicle 1 and irradiates light toward the periphery of the vehicle 1 under the instruction of the control unit 7.
  • the image holding unit 4 acquires and holds the image P1 (first image) taken by the camera 2 with the lamp 3 turned off, and also acquires and holds the image P2 (second image) taken by the camera 2 with the lamp 3 turned on.
  • the image holding unit 4 and the control unit 7 constitute an image acquisition unit.
  • the shadow detection unit 5 compares the image P1 and the image P2 held in the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the lamp 3 and projected on the image P2.
  • the shadow detection unit 5 and the control unit 7 constitute a shadow detection unit.
  • under the instruction of the control unit 7, the three-dimensional object detection unit 6 detects the three-dimensional object A using the shadow S detected by the shadow detection unit 5; that is, it calculates the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A using the shadow S.
  • the three-dimensional object detection unit 6 and the control unit 7 constitute a three-dimensional object detection unit.
  • the control unit 7 is a processing unit that controls the operations of the lamp 3, the image holding unit 4, the shadow detection unit 5, the three-dimensional object detection unit 6, and the display unit 8; it converts the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A detected by the three-dimensional object detection unit 6 into information the user needs and displays it on the display unit 8.
  • the display unit 8 is composed of a liquid crystal display or the like, and displays an image photographed by the camera 2 or information necessary for the user under the instruction of the control unit 7.
  • the coordinates (xl, yl, zl) indicate the position where the lamp 3 is installed, and the coordinates (x0, y0, z0) are the coordinates of the vertex of the three-dimensional object A.
  • the coordinates (xb, yb, zb) are the coordinates of the point where the three-dimensional object A intersects the road surface, and the coordinates (xs, ys, zs) are the coordinates of the end point of the shadow S corresponding to the vertex of the three-dimensional object A.
  • FIG. 3 is a flowchart showing the processing contents of the vehicle periphery monitoring apparatus according to Embodiment 1 of the present invention.
  • first, the control unit 7 outputs an image acquisition command to the image holding unit 4 while the lamp 3 is turned off.
  • on receiving the command, the image holding unit 4 acquires and holds the image P1 captured by the camera 2 (step ST1).
  • FIG. 4 is an explanatory view showing an image P1 taken by the camera 2 in a state where the lamp 3 is turned off.
  • in FIG. 4, the coordinates (u0, v0) are the coordinates on the image P1 of the point corresponding to the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 2.
  • the coordinates (ub, vb) are the coordinates on the image P1 of the point corresponding to the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 2.
  • since the image is taken with the lamp 3 turned off, the shadow S of the three-dimensional object A by the lamp 3 is not projected on the image P1 in FIG. 4.
  • control unit 7 turns on the lamp 3 (step ST2), generates a shadow S of the three-dimensional object A by the lamp 3 on the road surface, and then outputs an image acquisition command to the image holding unit 4.
  • image holding unit 4 acquires and holds the image P2 captured by the camera 2 (step ST3).
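  • for illustration (not part of the published application), a minimal sketch of this capture sequence (steps ST1 to ST3) in Python; `camera` and `lamp` are hypothetical driver objects standing in for the camera 2 and the lamp 3, and their methods are assumptions:

```python
import time

def capture_image_pair(camera, lamp, settle_s: float = 0.1):
    """Capture P1 with the lamp off, then P2 with the lamp on."""
    lamp.off()
    time.sleep(settle_s)   # let the exposure settle
    p1 = camera.grab()     # image P1: no lamp shadow (step ST1)
    lamp.on()              # the shadow S is now cast on the road (step ST2)
    time.sleep(settle_s)
    p2 = camera.grab()     # image P2: shadow S projected (step ST3)
    return p1, p2
```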
  • FIG. 5 is an explanatory diagram showing an image P2 taken by the camera 2 in a state where the lamp 3 is lit.
  • in FIG. 5, the coordinates (u0, v0) are the coordinates on the image P2 of the point corresponding to the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 2.
  • the coordinates (ub, vb) are the coordinates on the image P2 of the point corresponding to the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 2.
  • the coordinates (us, vs) are the coordinates on the image P2 of the point corresponding to the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 2.
  • when the image holding unit 4 has acquired the images P1 and P2, the control unit 7 outputs a command to the shadow detection unit 5 to detect the shadow S of the three-dimensional object A by the lamp 3.
  • on receiving the detection command from the control unit 7, the shadow detection unit 5 compares the image P1 and the image P2 held by the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the lamp 3 and projected on the image P2 (step ST4).
  • for example, each pixel value of the image P2 is subtracted from each pixel value of the image P1, and a region of pixels whose difference is equal to or larger than a predetermined threshold is detected as the shadow S of the three-dimensional object A.
  • the method for detecting the shadow S is not particularly limited.
  • the user may compare the image P1 and the image P2 and specify the position at which the shadow S is determined to exist.
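  • for illustration (not part of the published application), a minimal sketch of this differencing step in Python, assuming 8-bit grayscale numpy arrays; the function name, threshold value, and the sign convention (flagging pixels that fail to brighten when the lamp turns on) are assumptions:

```python
import numpy as np

def detect_shadow(p1: np.ndarray, p2: np.ndarray, threshold: int = 30) -> np.ndarray:
    """Return a boolean mask of candidate shadow pixels.

    p1: grayscale image taken with the lamp off (image P1).
    p2: grayscale image taken with the lamp on (image P2), same shape.
    The patent differences the two images pixel by pixel and thresholds
    the result; flagging pixels that did not brighten when the lamp came
    on is one plausible reading of that comparison.
    """
    brightening = p2.astype(np.int16) - p1.astype(np.int16)
    lit = brightening >= threshold   # pixels the lamp illuminated
    # In practice the mask would be restricted to the lamp's field of
    # illumination so the unlit background is not flagged as shadow.
    return ~lit
```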
  • when the shadow detection unit 5 detects the shadow S of the three-dimensional object A by the lamp 3, it obtains the coordinates (us, vs) of the point in FIG. 5 corresponding to the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 2.
  • when the shadow detection unit 5 has detected the shadow S of the three-dimensional object A by the lamp 3, the control unit 7 outputs a detection command for the three-dimensional object A to the three-dimensional object detection unit 6.
  • on receiving the detection command from the control unit 7, the three-dimensional object detection unit 6 detects the three-dimensional object A using the shadow S detected by the shadow detection unit 5; that is, it calculates the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 2 (step ST5).
  • next, the calculation of the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A by the three-dimensional object detection unit 6 will be described specifically.
  • the Z axis of the coordinate system is the line-of-sight direction of the camera 2, the Y axis is orthogonal to the Z axis and points upward, and the X axis is orthogonal to the Y and Z axes.
  • since the position where the lamp 3 is installed is measured in advance, the three-dimensional object detection unit 6 holds the coordinates (xl, yl, zl) of the lamp 3 beforehand.
  • since the coordinates (xb, yb, zb) of the point where the three-dimensional object A intersects the road surface and the coordinates (xs, ys, zs) of the end point of the shadow S corresponding to the vertex of the three-dimensional object A exist on the same plane (the road surface of the vehicle), the three-dimensional object detection unit 6 obtains in advance the relationship between points on the image captured by the camera 2 and points on the road surface of the vehicle 1 (hereinafter referred to as the "camera image/road surface relationship") by plane projective transformation or the like.
  • the three-dimensional object detection unit 6 calculates the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 2 from the coordinates (ub, vb) on the image P2 according to the camera image/road surface relationship.
  • similarly, it calculates the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 2 from the coordinates (us, vs) on the image P2. However, the vertex of the three-dimensional object A in FIG. 2 does not lie on the road surface of the vehicle 1, so its coordinates (x0, y0, z0) cannot be calculated from the image coordinates (u0, v0) by the camera image/road surface relationship; the three-dimensional object detection unit 6 therefore calculates them as follows.
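  • for illustration (not part of the published application), one common way to realize the camera image/road surface relationship by plane projective transformation is a homography estimated from measured reference points; the sketch below uses OpenCV, and all calibration values are placeholders:

```python
import numpy as np
import cv2

# Image coordinates (u, v) and road-plane coordinates (x, z) of four
# reference points on the road surface, measured in advance (placeholders).
image_pts = np.float32([[100, 400], [540, 400], [80, 470], [560, 470]]).reshape(-1, 1, 2)
road_pts = np.float32([[-1.0, 3.0], [1.0, 3.0], [-1.0, 1.5], [1.0, 1.5]]).reshape(-1, 1, 2)

# The "camera image / road surface relationship" as a 3x3 homography.
H, _ = cv2.findHomography(image_pts, road_pts)

def image_to_road(u: float, v: float):
    """Map an image point that lies on the road plane, such as (ub, vb)
    or (us, vs), to road coordinates; the road plane is taken as y = 0."""
    pt = np.float32([[[u, v]]])
    x, z = cv2.perspectiveTransform(pt, H)[0, 0]
    return x, 0.0, z
```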
  • since the vertex (x0, y0, z0) of the three-dimensional object A lies on the line between the lamp position (xl, yl, zl) and the already-calculated shadow end point (xs, ys, zs), the three-dimensional object detection unit 6 uses the coordinates (xl, yl, zl) and (xs, ys, zs) to define the vertex coordinates (x0, y0, z0) as shown in formula (1).
  • further, using the focal length f of the camera 2 and the coordinates (u0, v0) on the image P2 corresponding to (x0, y0, z0), the coordinates x0 and y0 are defined as in formula (2).
  • since the focal length f is an intrinsic parameter of the camera 2, it is determined by measurement in advance.
  • Expression (3) is derived from Expression (1) and Expression (2).
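  • the bodies of formulas (1) to (3) are reproduced as images in the original publication and do not survive in this text; the following LaTeX is a hedged reconstruction from the surrounding description (the vertex lies on the lamp-to-shadow line, and the camera is a pinhole with focal length f):

```latex
% (1) the vertex lies on the segment between the lamp and the shadow end point
\begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix}
  = \begin{pmatrix} x_l \\ y_l \\ z_l \end{pmatrix}
  + t \left[ \begin{pmatrix} x_s \\ y_s \\ z_s \end{pmatrix}
           - \begin{pmatrix} x_l \\ y_l \\ z_l \end{pmatrix} \right],
  \qquad 0 \le t \le 1

% (2) pinhole projection of the vertex onto image P2 with focal length f
u_0 = f \, \frac{x_0}{z_0}, \qquad v_0 = f \, \frac{y_0}{z_0}

% (3) substituting (1) into (2) and solving for the parameter t
t = \frac{f x_l - u_0 z_l}{u_0 (z_s - z_l) - f (x_s - x_l)}
```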
  • the control unit 7 converts the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A into information the user needs and displays it on the display unit 8 (step ST6). For example, when the height h of the three-dimensional object A is to be displayed, the height h is calculated from the three-dimensional coordinates (x0, y0, z0) as shown in formula (4) and displayed on the display unit 8.
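  • combining the reconstructions above, a sketch of the vertex and height computation in Python; the reading of formula (4) as the vertical offset of the vertex above the road plane is an assumption:

```python
def vertex_from_lamp_shadow(lamp, shadow_end, u0: float, f: float):
    """Triangulate the vertex (x0, y0, z0) of the three-dimensional object
    from the lamp position (xl, yl, zl), the shadow end point (xs, ys, zs)
    on the road, and the image column u0 of the vertex in P2, following
    the reconstructed formulas (1)-(3)."""
    xl, yl, zl = lamp
    xs, ys, zs = shadow_end
    t = (f * xl - u0 * zl) / (u0 * (zs - zl) - f * (xs - xl))
    return (xl + t * (xs - xl), yl + t * (ys - yl), zl + t * (zs - zl))

def height_above_road(vertex, road_y: float = 0.0) -> float:
    """Formula (4), read here as the vertical offset of the vertex above
    the road plane (an assumption; the published formula is an image)."""
    return vertex[1] - road_y
```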
  • FIG. 6 is an explanatory diagram illustrating a display example of the height h of the three-dimensional object A.
  • FIG. 6 shows an example in which the height h of the three-dimensional object A is 25 cm.
  • here, the specific height h of the three-dimensional object A is displayed, but the method of presenting information is not particularly limited; for example, if it suffices to indicate whether the object can contact the vehicle 1, a three-dimensional object A whose height h is too large for the vehicle 1 to clear may simply be highlighted.
  • as described above, according to Embodiment 1, the image holding unit 4 acquires and holds the image P1 photographed by the camera 2 with the lamp 3 turned off and the image P2 photographed with the lamp 3 turned on; the shadow detection unit 5 compares the images P1 and P2 held in the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the lamp 3 and projected on the image P2; and the three-dimensional object detection unit 6 detects the three-dimensional object A using the detected shadow S.
  • as a result, the three-dimensional object A can be detected even when the vehicle 1 is stopped.
  • Embodiment 2. FIG. 7 is a block diagram showing a vehicle periphery monitoring apparatus according to Embodiment 2 of the present invention.
  • FIG. 8 is an explanatory diagram showing how the surroundings of the vehicle are monitored by the vehicle periphery monitoring apparatus according to Embodiment 2 of the present invention. In FIGS. 7 and 8, the same reference numerals as in FIGS. 1 and 2 denote the same or corresponding parts, so their description is omitted.
  • the shadow detection unit 11 compares the image P1 and the image P2 held in the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the sun and projected on the image P1.
  • the shadow detection unit 11 and the control unit 14 constitute a shadow detection unit.
  • the solar information input unit 12 obtains the position of the vehicle 1 using, for example, GPS, calculates the position of the sun from the vehicle position and the current time, and gives a unit vector indicating the direction of sunlight to the three-dimensional object detection unit 13 as the solar information.
  • under the instruction of the control unit 14, the three-dimensional object detection unit 13 detects the three-dimensional object A using the shadow S of the three-dimensional object A by the sun detected by the shadow detection unit 11 and the solar information provided from the solar information input unit 12.
  • the sun information input unit 12, the three-dimensional object detection unit 13, and the control unit 14 constitute a three-dimensional object detection unit.
  • the control unit 14 is a processing unit that controls the operations of the lamp 3, the image holding unit 4, the shadow detection unit 11, the three-dimensional object detection unit 13, and the display unit 8; it converts the three-dimensional coordinates of the three-dimensional object A detected by the three-dimensional object detection unit 13 into information the user needs and displays it on the display unit 8.
  • in FIG. 8, the vector (xv, yv, zv) is a unit vector indicating the direction of sunlight.
  • FIG. 9 is a flowchart showing the processing contents of the vehicle periphery monitoring apparatus according to Embodiment 2 of the present invention.
  • in Embodiment 1, the shadow detection unit 5 detects the shadow S of the three-dimensional object A cast by the lamp 3, and the three-dimensional object detection unit 6 detects the three-dimensional object A using that shadow.
  • in Embodiment 2, by contrast, the shadow detection unit 11 detects the shadow S of the three-dimensional object A cast by the sun, and the three-dimensional object detection unit 13 detects the three-dimensional object A using the sun's shadow S.
  • first, the control unit 14 outputs an image acquisition command to the image holding unit 4 while the lamp 3 is turned off.
  • on receiving the command, the image holding unit 4 acquires and holds the image P1 captured by the camera 2 (step ST11).
  • FIG. 10 is an explanatory view showing an image P1 taken by the camera 2 in a state where the lamp 3 is turned off.
  • in FIG. 10, the coordinates (u0, v0) are the coordinates on the image P1 of the point corresponding to the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 8.
  • the coordinates (ub, vb) are the coordinates on the image P1 of the point corresponding to the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 8.
  • the coordinates (us, vs) are the coordinates on the image P1 of the point corresponding to the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 8.
  • control unit 14 turns on the lamp 3 (step ST12), and outputs an image acquisition command to the image holding unit 4.
  • the image holding unit 4 acquires and holds the image P2 captured by the camera 2 (step ST13).
  • at this time, the light irradiated from the lamp 3 illuminates the three-dimensional object A and its surroundings, so the shadow S of the three-dimensional object A caused by the sun becomes inconspicuous in the image P2.
  • FIG. 11 is an explanatory diagram showing an image P2 taken by the camera 2 in a state where the lamp 3 is lit.
  • in FIG. 11, the coordinates (u0, v0) are the coordinates on the image P2 of the point corresponding to the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 8.
  • the coordinates (ub, vb) are the coordinates on the image P2 of the point corresponding to the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 8.
  • when the image holding unit 4 has acquired the images P1 and P2, the control unit 14 outputs a command to the shadow detection unit 11 to detect the shadow S of the three-dimensional object A by the sun.
  • on receiving the detection command, the shadow detection unit 11 compares the image P1 held in the image holding unit 4 with the image P2 and detects the shadow S of the three-dimensional object A cast by the sun and projected on the image P1 (step ST14).
  • for example, each pixel value of the image P2 is subtracted from each pixel value of the image P1, and a region of pixels whose difference is equal to or larger than a predetermined threshold is detected as the shadow S of the three-dimensional object A.
  • the method for detecting the shadow S is not particularly limited.
  • the user may compare the image P1 and the image P2 and specify the position at which the shadow S is determined to exist.
  • when the shadow detection unit 11 detects the shadow S of the three-dimensional object A by the sun, it obtains the coordinates (us, vs) of the point in FIG. 10 corresponding to the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 8.
  • next, the solar information input unit 12 obtains the position (latitude and longitude) of the vehicle 1 using, for example, GPS, calculates the position of the sun from the vehicle position and the current time, and calculates the unit vector (xv, yv, zv) indicating the direction of sunlight as the solar information (step ST15).
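  • for illustration (not part of the published application), a sketch of how the unit vector of the sunlight direction might be computed, using the third-party pysolar library for the solar position; the library choice and the (east, up, north) frame are assumptions, and rotating the vector into the camera's X/Y/Z axes additionally requires the vehicle heading:

```python
import math
from datetime import datetime, timezone
from pysolar import solar  # third-party solar-position library (one option)

def sunlight_unit_vector(lat: float, lon: float, when=None):
    """Unit vector of the direction in which sunlight travels, expressed
    in an (east, up, north) frame at the vehicle position obtained by GPS.
    The azimuth is assumed to be measured in degrees east of north; check
    the convention of the pysolar version in use."""
    when = when or datetime.now(timezone.utc)
    alt = math.radians(solar.get_altitude(lat, lon, when))  # elevation angle
    az = math.radians(solar.get_azimuth(lat, lon, when))
    # Direction toward the sun, negated: light travels from sun to ground.
    east = -(math.cos(alt) * math.sin(az))
    up = -math.sin(alt)
    north = -(math.cos(alt) * math.cos(az))
    return (east, up, north)
```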
  • the control unit 14 then outputs a detection command for the three-dimensional object A to the three-dimensional object detection unit 13.
  • on receiving the detection command from the control unit 14, the three-dimensional object detection unit 13 detects the three-dimensional object A using the shadow S detected by the shadow detection unit 11 and the unit vector (xv, yv, zv) calculated by the solar information input unit 12.
  • that is, the three-dimensional object detection unit 13 uses the shadow S and the unit vector (xv, yv, zv) to calculate the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 8 (step ST16).
  • as in Embodiment 1, the Z axis of the coordinate system is the line-of-sight direction of the camera 2, the Y axis is orthogonal to the Z axis and points upward, and the X axis is orthogonal to the Y and Z axes.
  • since the coordinates (xb, yb, zb) of the point where the three-dimensional object A intersects the road surface and the coordinates (xs, ys, zs) of the end point of the shadow S corresponding to the vertex of the three-dimensional object A exist on the same plane (the road surface of the vehicle), the three-dimensional object detection unit 13 obtains the camera image/road surface relationship in advance by plane projective transformation or the like.
  • the three-dimensional object detection unit 13 calculates the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 8 from the coordinates (ub, vb) on the image P1 according to the camera image/road surface relationship.
  • similarly, it calculates the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 8 from the coordinates (us, vs) on the image P1. However, the vertex of the three-dimensional object A in FIG. 8 does not lie on the road surface of the vehicle 1, so its coordinates (x0, y0, z0) cannot be calculated from the image coordinates (u0, v0) by the camera image/road surface relationship; the three-dimensional object detection unit 13 therefore calculates them as follows.
  • since the vertex (x0, y0, z0) of the three-dimensional object A lies on the line through the already-calculated shadow end point (xs, ys, zs) in the direction of the sunlight, that is, along the unit vector (xv, yv, zv) given as the solar information, the three-dimensional object detection unit 13 uses the coordinates (xs, ys, zs) and the unit vector (xv, yv, zv) to define the vertex coordinates (x0, y0, z0) as shown in formula (5).
  • the three-dimensional object detection unit 13 then defines formula (2) by the same procedure as the three-dimensional object detection unit 6 of Embodiment 1, and obtains the value of t in formula (5) from formulas (5) and (2), thereby calculating the vertex coordinates (x0, y0, z0) of the three-dimensional object A.
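  • like formulas (1) to (3), the body of formula (5) is an image in the original; assuming it parameterizes the vertex along the sunlight direction from the shadow end point, solving it together with formula (2) for t gives a sketch like the following:

```python
def vertex_from_sun_shadow(shadow_end, sun_vec, u0: float, f: float):
    """Solve the assumed formula (5), (x0, y0, z0) = (xs, ys, zs) + t *
    (xv, yv, zv), together with the projection u0 = f * x0 / z0 of
    formula (2) for the parameter t, and return the vertex."""
    xs, ys, zs = shadow_end  # shadow end point on the road surface
    xv, yv, zv = sun_vec     # unit vector of the sunlight direction
    t = (f * xs - u0 * zs) / (u0 * zv - f * xv)
    return (xs + t * xv, ys + t * yv, zs + t * zv)
```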
  • thereafter, like the control unit 7 of Embodiment 1, the control unit 14 converts the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A into information the user needs and displays it on the display unit 8 (step ST17). For example, when displaying the height h of the three-dimensional object A, the height h is calculated from the three-dimensional coordinates (x0, y0, z0) using formula (4) above and displayed.
  • as described above, according to Embodiment 2, the image holding unit 4 acquires and holds the image P1 photographed by the camera 2 with the lamp 3 turned off and the image P2 photographed with the lamp 3 turned on; the shadow detection unit 11 compares the images P1 and P2 held in the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the sun and projected on the image P1; and the three-dimensional object detection unit 13 detects the three-dimensional object A using that shadow S and the unit vector (xv, yv, zv) calculated by the solar information input unit 12. As a result, the three-dimensional object A can be detected even when the vehicle 1 is stopped.
  • Embodiment 3. FIG. 12 is a block diagram showing a vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention.
  • FIG. 13 is an explanatory diagram showing how the surroundings of the vehicle are monitored by the vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention. In FIGS. 12 and 13, the same reference numerals as in FIGS. 1 and 2 denote the same or corresponding parts, so their description is omitted.
  • the lamp 3a, which is the first lamp, is installed, for example, at the rear of the vehicle 1 and irradiates light toward the periphery of the vehicle 1 under the instruction of the control unit 24.
  • the lamp 3b, which is the second lamp, is installed at a position different from the lamp 3a, for example also at the rear of the vehicle 1, and irradiates light toward the periphery of the vehicle 1 from that different position under the instruction of the control unit 24.
  • under the instruction of the control unit 24, the image holding unit 21 acquires and holds the image P1 (first image) taken by the camera 2 with the lamps 3a and 3b both turned off, the image P2 (second image) taken with the lamp 3a turned on and the lamp 3b turned off, and the image P3 (third image) taken with the lamp 3a turned off and the lamp 3b turned on.
  • the image holding unit 21 and the control unit 24 constitute an image acquisition unit.
  • under the instruction of the control unit 24, the shadow detection unit 22 compares the image P1 and the image P2 held in the image holding unit 21 to detect the shadow Sa of the three-dimensional object A cast by the lamp 3a and projected on the image P2, and compares the image P1 and the image P3 to detect the shadow Sb of the three-dimensional object A cast by the lamp 3b and projected on the image P3.
  • the shadow detection unit 22 and the control unit 24 constitute a shadow detection unit.
  • under the instruction of the control unit 24, the three-dimensional object detection unit 23 detects the three-dimensional object A using the shadow Sa detected by the shadow detection unit 22, and also detects the three-dimensional object A using the shadow Sb detected by the shadow detection unit 22.
  • the three-dimensional object detection unit 23 and the control unit 24 constitute a three-dimensional object detection unit.
  • the control unit 24 is a processing unit that controls the operations of the lamps 3a and 3b, the image holding unit 21, the shadow detection unit 22, the three-dimensional object detection unit 23, and the display unit 25. The control unit 24 collates the three-dimensional object detected using the shadow Sa cast by the lamp 3a with the three-dimensional object detected using the shadow Sb cast by the lamp 3b, and evaluates the validity of the detection result of the three-dimensional object detection unit 23; that is, if the coordinates of the three-dimensional object detected using the shadow Sa differ from the coordinates of the three-dimensional object detected using the shadow Sb, it judges that the road surface R1 on which the vehicle 1 stands and the road surface R2 on which the shadow S of the three-dimensional object A falls do not match.
  • the control unit 24 constitutes a detection result evaluation means.
  • the display unit 25 is composed of a liquid crystal display or the like and, under the instruction of the control unit 24, displays images taken by the camera 2 and information the user needs; when it is judged that the road surface R1 on which the vehicle 1 stands and the road surface R2 on which the shadow S falls do not match, it displays, for example, information calling the user's attention.
  • for the three-dimensional object detection described above to give an accurate result, the road surface R1 on which the vehicle 1 stands and the road surface R2 on which the shadow S of the three-dimensional object A falls must match. For example, as shown in FIG. 13, when the road surface R1 and the road surface R2 are not coplanar, the shadow S of the three-dimensional object A projected on the road surface R2 is shorter than the shadow S in FIG. 2, so the accurate three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A cannot be calculated.
  • FIG. 14 is an explanatory diagram used for detection result evaluation of the vehicle periphery monitoring device according to Embodiment 3 of the present invention.
  • the point D1 is the end point that the shadow Sa cast by the lamp 3a would have if the shadow of the three-dimensional object A fell on the road surface R1.
  • the point D2 is the end point position calculated from the image photographed by the camera 2 based on the shadow Sa when the shadow Sa of the three-dimensional object A is actually projected on the road surface R2 by the lamp 3a.
  • the point D3 is the end point of the shadow Sa of the three-dimensional object A actually projected on the road surface R2 by the lamp 3a.
  • the point D4 is the end point that the shadow Sb cast by the lamp 3b would have if the shadow of the three-dimensional object A fell on the road surface R1.
  • the point D5 is the end point of the shadow Sb of the three-dimensional object A actually projected on the road surface R2 by the lamp 3b.
  • FIG. 15 is a flowchart showing the processing contents of the vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention.
  • first, the control unit 24 outputs an image acquisition command to the image holding unit 21 with the lamps 3a and 3b turned off.
  • image holding unit 21 acquires and holds the image P1 captured by the camera 2 (step ST21).
  • control unit 24 turns on the lamp 3a (step ST22), and outputs an image acquisition command to the image holding unit 21.
  • the lamp 3b is not turned on and is kept off.
  • the image holding unit 21 acquires and holds the image P2 captured by the camera 2 (step ST23).
  • control unit 24 turns off the lamp 3a and turns on the lamp 3b (step ST24), and then outputs an image acquisition command to the image holding unit 21.
  • image holding unit 21 acquires and holds the image P3 captured by the camera 2 (step ST25).
  • when the image holding unit 21 has acquired the images P1, P2, and P3, the control unit 24 outputs commands to the shadow detection unit 22 to detect the shadows Sa and Sb of the three-dimensional object A by the lamps 3a and 3b.
  • on receiving the detection commands, the shadow detection unit 22 compares the image P1 and the image P2 held in the image holding unit 21 to detect the shadow Sa of the three-dimensional object A cast by the lamp 3a and projected on the image P2.
  • it also compares the image P1 and the image P3 held in the image holding unit 21 to detect the shadow Sb of the three-dimensional object A cast by the lamp 3b and projected on the image P3 (step ST26). Since the shadow detection method of the shadow detection unit 22 is the same as that of the shadow detection unit 5 of Embodiment 1, a detailed description is omitted.
  • when the shadow detection unit 22 has detected the shadow Sa of the three-dimensional object A by the lamp 3a and the shadow Sb of the three-dimensional object A by the lamp 3b, the control unit 24 outputs a detection command for the three-dimensional object A to the three-dimensional object detection unit 23.
  • on receiving the detection command from the control unit 24, the three-dimensional object detection unit 23 detects the three-dimensional object A using the shadow Sa detected by the shadow detection unit 22; that is, in the same manner as the three-dimensional object detection unit 6 of Embodiment 1, it calculates the vertex coordinates (x0, y0, z0) of the three-dimensional object A from the shadow Sa.
  • likewise, it calculates the vertex coordinates (x0, y0, z0) of the three-dimensional object A from the shadow Sb detected by the shadow detection unit 22 (step ST27).
  • the control unit 24 then compares the vertex coordinates (x0, y0, z0) of the three-dimensional object A detected using the shadow Sa by the lamp 3a with the vertex coordinates (x0, y0, z0) detected using the shadow Sb by the lamp 3b (step ST28).
  • if the two sets of coordinates (x0, y0, z0) match, the control unit 24 judges that the road surface R1 on which the vehicle 1 stands and the road surface R2 on which the shadow S of the three-dimensional object A falls match; on the other hand, if the two sets of coordinates differ, it judges that the road surfaces R1 and R2 do not match.
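  • a minimal sketch of this collation (steps ST28 and ST29); the numeric tolerance is an assumption, since the patent only says the coordinates "match" or "are different":

```python
import math

def road_surfaces_match(vertex_a, vertex_b, tol: float = 0.05) -> bool:
    """Collate the vertex estimated from lamp 3a's shadow Sa with the
    vertex estimated from lamp 3b's shadow Sb (step ST28). If they
    disagree beyond the tolerance, the road surface R2 under the shadow
    is judged not to match the road surface R1 under the vehicle, and
    attention information is displayed instead of the object's height
    (step ST30)."""
    return math.dist(vertex_a, vertex_b) <= tol
```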
  • when the road surfaces are judged to match, the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A calculated by the three-dimensional object detection unit 23 are converted into information the user needs and displayed on the display unit 25 (step ST29).
  • on the other hand, when it is judged that the road surface R1 on which the vehicle 1 stands and the road surface R2 on which the shadow S of the three-dimensional object A falls do not match, attention information indicating that the accurate three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A cannot be calculated is displayed on the display unit 25 (step ST30).
  • as described above, according to Embodiment 3, the shadow detection unit 22 compares the image P1 held in the image holding unit 21 with the image P2 to detect the shadow Sa of the three-dimensional object A cast by the lamp 3a and projected on the image P2, and compares the image P1 with the image P3 to detect the shadow Sb cast by the lamp 3b and projected on the image P3; the three-dimensional object detection unit 23 detects the three-dimensional object A using the shadow Sa and again using the shadow Sb; and the control unit 24 compares the two results. If the coordinates of the three-dimensional object detected from the shadow Sa differ from those detected from the shadow Sb, the control unit 24 judges that the road surface R1 on which the vehicle 1 stands and the road surface R2 on which the shadow S falls do not match. Therefore, when the accurate three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A cannot be calculated because the road surfaces do not match, the apparatus can alert the user and avoid unexpected contact.
  • as described above, the vehicle periphery monitoring apparatus according to the present invention is suitable for applications that detect a three-dimensional object existing around a vehicle and avoid unexpected contact.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A vehicle periphery monitoring apparatus has an image holding section (4) to acquire and hold an image P1 photographed by a camera (2) with a lamp (3) turned off, while acquiring and holding an image P2 photographed by the camera (2) with the lamp (3) turned on, a shadow detector section (5) to detect a shadow S of a solid object A that is made by the light from the lamp (3) and is projected on the image P2 by comparing the image P1 and the image P2 that are held by the image holding section (4), and a solid object detector section (6) to detect the solid object A using the shadow S of the solid object A that is detected by the shadow detector section (5).

Description

Vehicle Periphery Monitoring Apparatus
The present invention relates to a vehicle periphery monitoring apparatus that detects a three-dimensional object existing around a vehicle.
A conventional vehicle periphery monitoring apparatus is provided with a camera that captures the area that becomes a blind spot from the driver's seat of the vehicle, so that the user can visually check the blind-spot area by viewing the image captured by the camera.
At that time, it is important that the user can determine whether an object appearing in the image will contact the vehicle; however, the height of the object is hard to grasp from a single image alone, so this determination is difficult.
For example, as shown in FIG. 16(b), when any one of objects A, B, and C having different heights exists around the vehicle and is photographed by a camera attached to the vehicle, every one of them produces the same image, shown in FIG. 16(a).
If the actual object is C, the object C lies on the road surface and is unlikely to contact the vehicle; but if the object is A, the vehicle will hit it as it moves backward. In driving, it is important to determine whether the object D on the image will contact the vehicle, but on the image taken by the camera, the objects A, B, and C are all displayed as an object D of the same length, so this determination is difficult.
However, if the height of the object D can be detected and it can be confirmed which of the objects A, B, and C the object D actually is, it is possible to determine whether the object D will contact the vehicle.
For example, Patent Document 1 below discloses a method for detecting that an object on an image is a three-dimensional object by comparing the moving distance of the vehicle with the moving distance of the object on images captured by a camera.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2007-087236 (paragraphs [0030] to [0034], FIG. 1)
Since the conventional vehicle periphery monitoring apparatus is configured as described above, the moving distance of the vehicle is needed to detect that an object on the image is a three-dimensional object; while the vehicle is stopped, it cannot detect that an object on the image is a three-dimensional object.
The present invention has been made to solve the above-described problem, and an object of the present invention is to obtain a vehicle periphery monitoring apparatus that can detect a three-dimensional object even when the vehicle is stopped.
The vehicle periphery monitoring apparatus according to the present invention includes image acquisition means for acquiring a first image taken by a camera while a lamp is turned off and a second image taken by the camera while the lamp is turned on, and shadow detection means for detecting, by comparing the first image and the second image acquired by the image acquisition means, the shadow of a three-dimensional object cast by the lamp and projected on the second image; three-dimensional object detection means detects the three-dimensional object using the shadow detected by the shadow detection means.
This makes it possible to detect a three-dimensional object even when the vehicle is stopped.
FIG. 1 is a block diagram showing a vehicle periphery monitoring apparatus according to Embodiment 1 of the present invention.
FIG. 2 is an explanatory diagram showing how the surroundings of a vehicle are monitored by the vehicle periphery monitoring apparatus according to Embodiment 1 of the present invention.
FIG. 3 is a flowchart showing the processing of the vehicle periphery monitoring apparatus according to Embodiment 1 of the present invention.
FIG. 4 is an explanatory diagram showing an image P1 taken by the camera 2 with the lamp 3 turned off.
FIG. 5 is an explanatory diagram showing an image P2 taken by the camera 2 with the lamp 3 turned on.
FIG. 6 is an explanatory diagram showing a display example of the height h of the three-dimensional object A.
FIG. 7 is a block diagram showing a vehicle periphery monitoring apparatus according to Embodiment 2 of the present invention.
FIG. 8 is an explanatory diagram showing how the surroundings of a vehicle are monitored by the vehicle periphery monitoring apparatus according to Embodiment 2 of the present invention.
FIG. 9 is a flowchart showing the processing of the vehicle periphery monitoring apparatus according to Embodiment 2 of the present invention.
FIG. 10 is an explanatory diagram showing an image P1 taken by the camera 2 with the lamp 3 turned off.
FIG. 11 is an explanatory diagram showing an image P2 taken by the camera 2 with the lamp 3 turned on.
FIG. 12 is a block diagram showing a vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention.
FIG. 13 is an explanatory diagram showing how the surroundings of a vehicle are monitored by the vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention.
FIG. 14 is an explanatory diagram used for the detection result evaluation of the vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention.
FIG. 15 is a flowchart showing the processing of the vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention.
FIG. 16 is an explanatory diagram showing images taken by the camera of a conventional vehicle periphery monitoring apparatus.
Hereinafter, in order to describe the present invention in more detail, the best mode for carrying out the invention will be described with reference to the accompanying drawings.
Embodiment 1.
FIG. 1 is a block diagram showing a vehicle periphery monitoring apparatus according to Embodiment 1 of the present invention, and FIG. 2 is an explanatory diagram showing how the surroundings of the vehicle are monitored by the vehicle periphery monitoring apparatus according to Embodiment 1 of the present invention.
In FIGS. 1 and 2, the camera 2 is installed, for example, at the rear of the vehicle 1 and photographs the surroundings of the vehicle 1.
The lamp 3 is installed, for example, at the rear of the vehicle 1 and irradiates light toward the periphery of the vehicle 1 under the instruction of the control unit 7.
Under the instruction of the control unit 7, the image holding unit 4 acquires and holds the image P1 (first image) taken by the camera 2 with the lamp 3 turned off, and also acquires and holds the image P2 (second image) taken by the camera 2 with the lamp 3 turned on.
The image holding unit 4 and the control unit 7 constitute image acquisition means.
Under the instruction of the control unit 7, the shadow detection unit 5 compares the image P1 and the image P2 held in the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the lamp 3 and projected on the image P2.
The shadow detection unit 5 and the control unit 7 constitute shadow detection means.
Under the instruction of the control unit 7, the three-dimensional object detection unit 6 detects the three-dimensional object A using the shadow S detected by the shadow detection unit 5; that is, it calculates the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A using the shadow S.
The three-dimensional object detection unit 6 and the control unit 7 constitute three-dimensional object detection means.
The control unit 7 is a processing unit that controls the operations of the lamp 3, the image holding unit 4, the shadow detection unit 5, the three-dimensional object detection unit 6, and the display unit 8; it converts the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A detected by the three-dimensional object detection unit 6 into information the user needs and displays it on the display unit 8.
The display unit 8 is composed of a liquid crystal display or the like and, under the instruction of the control unit 7, displays images taken by the camera 2 and information the user needs.
In FIG. 2, the coordinates (xl, yl, zl) indicate the position where the lamp 3 is installed, and the coordinates (x0, y0, z0) are the coordinates of the vertex of the three-dimensional object A.
The coordinates (xb, yb, zb) are the coordinates of the point where the three-dimensional object A intersects the road surface, and the coordinates (xs, ys, zs) are the coordinates of the end point of the shadow S corresponding to the vertex of the three-dimensional object A.
FIG. 3 is a flowchart showing the processing of the vehicle periphery monitoring apparatus according to Embodiment 1 of the present invention.
Next, the operation will be described.
First, the control unit 7 outputs an image acquisition command to the image holding unit 4 while the lamp 3 is turned off.
On receiving the image acquisition command from the control unit 7, the image holding unit 4 acquires and holds the image P1 captured by the camera 2 (step ST1).
Here, FIG. 4 is an explanatory diagram showing the image P1 taken by the camera 2 with the lamp 3 turned off.
In FIG. 4, the coordinates (u0, v0) are the coordinates on the image P1 of the point corresponding to the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 2, and the coordinates (ub, vb) are the coordinates on the image P1 of the point corresponding to the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 2.
Since the image is taken with the lamp 3 turned off, the shadow S of the three-dimensional object A by the lamp 3 is not projected on the image P1 in FIG. 4.
Next, the control unit 7 turns on the lamp 3 (step ST2) so that the lamp 3 casts the shadow S of the three-dimensional object A onto the road surface, and then outputs an image acquisition command to the image holding unit 4.
When the image holding unit 4 receives the image acquisition command from the control unit 7, it acquires and holds the image P2 photographed by the camera 2 (step ST3).
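The two-image acquisition sequence of steps ST1 to ST3 can be summarized in code. The following is a minimal sketch in Python; the Lamp and Camera interfaces (lamp.on()/off(), camera.grab()) and the settle delay are hypothetical stand-ins, since the patent does not specify the drivers.

    import time

    def capture_image_pair(camera, lamp, settle_s=0.1):
        """Capture P1 with the lamp off and P2 with the lamp on (steps ST1-ST3).

        camera.grab() and lamp.on()/lamp.off() are assumed interfaces;
        substitute the real camera and lamp drivers of the system.
        """
        lamp.off()
        time.sleep(settle_s)      # let the scene illumination settle
        p1 = camera.grab()        # image without the lamp shadow

        lamp.on()                 # shadow S is now cast on the road surface
        time.sleep(settle_s)
        p2 = camera.grab()        # image with the lamp shadow
        return p1, p2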
Here, FIG. 5 is an explanatory diagram showing the image P2 photographed by the camera 2 while the lamp 3 is lit.
In FIG. 5, the coordinates (u0, v0) are the coordinates on the image P2 of the point corresponding to the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 2.
The coordinates (ub, vb) are the coordinates on the image P2 of the point corresponding to the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 2.
The coordinates (us, vs) are the coordinates on the image P2 of the point corresponding to the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 2.
When the image holding unit 4 has acquired the images P1 and P2, the control unit 7 outputs a command to the shadow detection unit 5 to detect the shadow S of the three-dimensional object A cast by the lamp 3.
When the shadow detection unit 5 receives the shadow detection command from the control unit 7, it compares the images P1 and P2 held in the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the lamp 3 and projected on the image P2 (step ST4).
For example, each pixel value of the image P2 is subtracted from the corresponding pixel value of the image P1, and the region of pixels whose difference is equal to or greater than a predetermined threshold is detected as the shadow S of the three-dimensional object A.
However, the method for detecting the shadow S is not particularly limited; for example, the user may compare the images P1 and P2 and designate the position where the shadow S is judged to exist.
When the shadow detection unit 5 detects the shadow S of the three-dimensional object A cast by the lamp 3, it acquires the coordinates (us, vs) of the point in FIG. 5 corresponding to the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 2.
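The differencing step described above can be sketched with NumPy as follows. This is a minimal illustration under the assumptions of grayscale images of identical size and a hypothetical threshold value, not the patented implementation itself.

    import numpy as np

    def detect_shadow_mask(p1, p2, threshold=30):
        """Detect the lamp shadow S as the pixels that darken from P1 to P2.

        p1, p2: grayscale uint8 arrays of identical shape, photographed
        with the lamp off and on, respectively.
        Returns a boolean mask that is True where the shadow is assumed.
        """
        diff = p1.astype(np.int16) - p2.astype(np.int16)  # positive where P2 is darker
        return diff >= threshold

    # The shadow end point (us, vs) can then be read off the mask, for
    # example as the shadow pixel farthest from the lamp's image position.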
When the shadow detection unit 5 has detected the shadow S of the three-dimensional object A cast by the lamp 3, the control unit 7 outputs a detection command for the three-dimensional object A to the three-dimensional object detection unit 6.
When the three-dimensional object detection unit 6 receives the detection command from the control unit 7, it detects the three-dimensional object A using the shadow S of the three-dimensional object A detected by the shadow detection unit 5.
That is, the three-dimensional object detection unit 6 uses the shadow S of the three-dimensional object A detected by the shadow detection unit 5 to calculate the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 2 (step ST5).
A calculation example of the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A in the three-dimensional object detection unit 6 is described concretely below.
First, with the optical center of the camera 2 as the coordinate origin, the Z axis is taken along the viewing direction of the camera 2, the Y axis is taken orthogonal to the Z axis and pointing upward, and the X axis is taken orthogonal to both the Y axis and the Z axis.
Since the camera 2 and the lamp 3 are fixed to the vehicle 1 and their positional relationship does not change, the three-dimensional object detection unit 6 obtains in advance the coordinates (xl, yl, zl) of the position where the lamp 3 is installed by measuring that position beforehand.
Since the coordinates (xb, yb, zb) of the point where the three-dimensional object A meets the road surface and the coordinates (xs, ys, zs) of the end point of the shadow S corresponding to the vertex of the three-dimensional object A lie on the same plane (the road surface of the vehicle), the three-dimensional object detection unit 6 obtains in advance, by plane projective transformation or the like, the relationship between points on the image photographed by the camera 2 and points on the road surface of the vehicle 1 (hereinafter referred to as the "camera image/road surface relationship").
When the image holding unit 4 acquires the image P2, the three-dimensional object detection unit 6 calculates the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 2 from the coordinates (ub, vb) on the image P2 according to the camera image/road surface relationship.
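Because the camera image/road surface relationship is a plane-to-plane mapping, it can be represented by a 3x3 homography estimated once from calibration point correspondences. The sketch below, using OpenCV, is one plausible realization; the calibration point values and the camera mounting height are hypothetical, and cv2.findHomography could be used instead when more than four correspondences are available.

    import numpy as np
    import cv2

    # Four image points (pixels) and their measured road-surface positions
    # (x, z) in the camera coordinate frame; the values are illustrative only.
    img_pts  = np.array([[320, 400], [420, 400], [300, 460], [440, 460]], np.float32)
    road_pts = np.array([[-0.5, 3.0], [0.5, 3.0], [-0.7, 2.0], [0.7, 2.0]], np.float32)

    H = cv2.getPerspectiveTransform(img_pts, road_pts)  # camera image -> road plane

    def image_to_road(u, v, H, camera_height=1.0):
        """Map an image point to 3-D road-surface coordinates (x, y, z).

        The road plane is assumed to lie at y = -camera_height below the
        optical center; this mounting height comes from calibration and is
        not stated numerically in the patent.
        """
        p = cv2.perspectiveTransform(np.array([[[u, v]]], np.float32), H)[0, 0]
        return float(p[0]), -camera_height, float(p[1])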
The three-dimensional object detection unit 6 also calculates the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 2 from the coordinates (us, vs) on the image P2 according to the camera image/road surface relationship.
However, the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 2 do not lie on the road surface of the vehicle 1 and therefore cannot be calculated from the coordinates (u0, v0) on the image P2 by the camera image/road surface relationship, so the three-dimensional object detection unit 6 calculates them as follows.
Since the vertex (x0, y0, z0) of the three-dimensional object A lies between the already obtained coordinates (xl, yl, zl) and (xs, ys, zs), the three-dimensional object detection unit 6 uses the coordinates (xl, yl, zl) and (xs, ys, zs) to define the vertex coordinates (x0, y0, z0) as shown in the following equation (1):

(x0, y0, z0) = (xl, yl, zl) + s·((xs, ys, zs) − (xl, yl, zl)), where 0 ≤ s ≤ 1   (1)
As for the coordinates x0 and y0, the three-dimensional object detection unit 6 can express them using the focal length f of the camera 2 and the coordinates (u0, v0) on the image P2 corresponding to (x0, y0, z0), and therefore defines x0 and y0 as in the following equation (2):

x0 = (u0/f)·z0,  y0 = (v0/f)·z0   (2)

However, since the focal length f is a quantity intrinsic to the camera 2, it is measured and obtained in advance.
Having defined equations (1) and (2) as above, the three-dimensional object detection unit 6 derives the following equation (3) from equations (1) and (2):

xl + s·(xs − xl) = (u0/f)·(zl + s·(zs − zl))   (3)
Having derived equation (3), the three-dimensional object detection unit 6 obtains the value of s from equation (3) and substitutes it into equation (1), thereby calculating the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A.
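Solving equation (3) for s and substituting it into equation (1) amounts to intersecting the lamp-to-shadow-end ray with the camera's viewing ray through (u0, v0). The following is a minimal numeric sketch under the coordinate conventions above, using the x-component as in equation (3); all input values are placeholders to be supplied by the calibration and shadow-detection steps.

    def vertex_from_lamp_shadow(lamp, shadow_end, u0, f):
        """Equations (1)-(3): vertex of object A from its lamp shadow.

        lamp       = (xl, yl, zl): lamp position, measured beforehand.
        shadow_end = (xs, ys, zs): shadow end point on the road surface.
        u0         : image x-coordinate of the object's vertex.
        f          : camera focal length, in the same pixel units as u0.
        """
        xl, yl, zl = lamp
        xs, ys, zs = shadow_end
        # Equation (3): xl + s*(xs - xl) = (u0/f) * (zl + s*(zs - zl))
        s = ((u0 / f) * zl - xl) / ((xs - xl) - (u0 / f) * (zs - zl))
        # Equation (1): point at parameter s on the lamp-to-shadow segment.
        x0 = xl + s * (xs - xl)
        y0 = yl + s * (ys - yl)
        z0 = zl + s * (zs - zl)
        return x0, y0, z0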
When the three-dimensional object detection unit 6 has detected the three-dimensional object A, the control unit 7 converts the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A into information the user needs and displays it on the display unit 8 (step ST6).
For example, when the height h of the three-dimensional object A is to be displayed, the height h is calculated from the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A as shown in the following equation (4) and displayed on the display unit 8:

h = y0 − yb   (4)

where (xb, yb, zb) is the point at which the three-dimensional object A meets the road surface.
FIG. 6 is an explanatory diagram showing a display example of the height h of the three-dimensional object A. FIG. 6 shows an example in which the height h of the three-dimensional object A is 25 cm.
In FIG. 6, the specific value of the height h of the three-dimensional object A is displayed, but the method of presenting the information is not particularly limited.
For example, if it is sufficient to display information on whether or not the object will contact the vehicle 1, a three-dimensional object A whose height h is equal to or greater than the height the vehicle 1 can drive over may be highlighted, as in the sketch below.
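As a concrete illustration of this display decision, the short sketch below computes the height by equation (4) and flags objects above a clearance limit. The clearance value is a hypothetical parameter, since the patent leaves the presentation method open.

    def object_height(vertex, base):
        """Equation (4): height of object A above the road surface."""
        (_, y0, _), (_, yb, _) = vertex, base
        return y0 - yb

    def should_highlight(vertex, base, clearance_m=0.15):
        """Highlight objects the vehicle is assumed unable to drive over."""
        return object_height(vertex, base) >= clearance_m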
As is apparent from the above, according to Embodiment 1, the apparatus is provided with the image holding unit 4, which acquires and holds the image P1 photographed by the camera 2 while the lamp 3 is turned off and the image P2 photographed by the camera 2 while the lamp 3 is lit, and the shadow detection unit 5, which compares the images P1 and P2 held in the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the lamp 3 and projected on the image P2; the three-dimensional object detection unit 6 is configured to detect the three-dimensional object A using the shadow S detected by the shadow detection unit 5. Therefore, effects such as being able to detect the three-dimensional object A even while the vehicle 1 is stopped are obtained.
Embodiment 2.
FIG. 7 is a block diagram showing a vehicle periphery monitoring apparatus according to Embodiment 2 of the present invention.
FIG. 8 is an explanatory diagram showing how the periphery of the vehicle is monitored by the vehicle periphery monitoring apparatus according to Embodiment 2 of the present invention.
In FIGS. 7 and 8, the same reference numerals as in FIGS. 1 and 2 denote the same or corresponding parts, and their description is therefore omitted.
Under the instruction of the control unit 14, the shadow detection unit 11 compares the images P1 and P2 held in the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the sun and projected on the image P1.
The shadow detection unit 11 and the control unit 14 constitute shadow detection means.
The solar information input unit 12 obtains the position of the vehicle 1 using, for example, GPS, calculates the position of the sun from the position of the vehicle 1 and the current time, and supplies a unit vector indicating the direction of the sun's rays to the three-dimensional object detection unit 13 as solar information.
Under the instruction of the control unit 14, the three-dimensional object detection unit 13 detects the three-dimensional object A using the shadow S of the three-dimensional object A cast by the sun and detected by the shadow detection unit 11 together with the solar information supplied from the solar information input unit 12.
The solar information input unit 12, the three-dimensional object detection unit 13, and the control unit 14 constitute three-dimensional object detection means.
The control unit 14 is a processing unit that controls the operations of the lamp 3, the image holding unit 4, the shadow detection unit 11, the three-dimensional object detection unit 13, and the display unit 8. The control unit 14 converts the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A detected by the three-dimensional object detection unit 13 into information the user needs and displays it on the display unit 8.
In FIG. 8, the vector (xv, yv, zv) is a unit vector indicating the direction of the sun's rays.
FIG. 9 is a flowchart showing the processing contents of the vehicle periphery monitoring apparatus according to Embodiment 2 of the present invention.
In Embodiment 1 above, the shadow detection unit 5 detects the shadow S of the three-dimensional object A cast by the lamp 3, and the three-dimensional object detection unit 6 detects the three-dimensional object A using that shadow. However, when the shadow S of the three-dimensional object A is cast by the sun, the shadow detection unit 11 may detect the shadow S cast by the sun, and the three-dimensional object detection unit 13 may detect the three-dimensional object A using that shadow.
Next, the operation will be described.
First, the control unit 14 outputs an image acquisition command to the image holding unit 4 while the lamp 3 is turned off.
When the image holding unit 4 receives the image acquisition command from the control unit 14, it acquires and holds the image P1 photographed by the camera 2 (step ST11).
Here, FIG. 10 is an explanatory diagram showing the image P1 photographed by the camera 2 while the lamp 3 is turned off.
In FIG. 10, the coordinates (u0, v0) are the coordinates on the image P1 of the point corresponding to the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 8.
The coordinates (ub, vb) are the coordinates on the image P1 of the point corresponding to the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 8.
While the lamp 3 is turned off, no light is emitted from the lamp 3 toward the three-dimensional object A, so the shadow S of the three-dimensional object A cast by the sun is projected on the image P1 without being affected by the lamp 3.
The coordinates (us, vs) are the coordinates on the image P1 of the point corresponding to the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 8.
Next, the control unit 14 turns on the lamp 3 (step ST12) and outputs an image acquisition command to the image holding unit 4.
When the image holding unit 4 receives the image acquisition command from the control unit 14, it acquires and holds the image P2 photographed by the camera 2 (step ST13).
While the lamp 3 is lit, the light emitted from the lamp 3 illuminates the three-dimensional object A, so the shadow S of the three-dimensional object A cast by the sun becomes inconspicuous.
Here, FIG. 11 is an explanatory diagram showing the image P2 photographed by the camera 2 while the lamp 3 is lit.
In FIG. 11, the coordinates (u0, v0) are the coordinates on the image P2 of the point corresponding to the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 8.
The coordinates (ub, vb) are the coordinates on the image P2 of the point corresponding to the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 8.
When the image holding unit 4 has acquired the images P1 and P2, the control unit 14 outputs a command to the shadow detection unit 11 to detect the shadow S of the three-dimensional object A cast by the sun.
When the shadow detection unit 11 receives the shadow detection command from the control unit 14, it compares the images P1 and P2 held in the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the sun and projected on the image P1 (step ST14).
For example, each pixel value of the image P2 is subtracted from the corresponding pixel value of the image P1, and the region of pixels whose difference is equal to or greater than a predetermined threshold is detected as the shadow S of the three-dimensional object A.
However, the method for detecting the shadow S is not particularly limited; for example, the user may compare the images P1 and P2 and designate the position where the shadow S is judged to exist.
When the shadow detection unit 11 detects the shadow S of the three-dimensional object A cast by the sun, it acquires the coordinates (us, vs) of the point in FIG. 10 corresponding to the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 8.
The solar information input unit 12 obtains the position (latitude and longitude) of the vehicle 1 using, for example, GPS, calculates the position of the sun from the position of the vehicle 1 and the current time, and calculates a unit vector (xv, yv, zv) indicating the direction of the sun's rays as solar information (step ST15).
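One plausible way to form the unit vector (xv, yv, zv) is to obtain the solar azimuth and elevation from a standard solar-position algorithm for the given latitude, longitude, and time, and then rotate that direction into the camera coordinate frame. The sketch below assumes the azimuth and elevation are already available, that the compass heading of the camera's Z axis is known from calibration, and that the X axis points to the right; all of these assumptions go beyond what the patent states.

    import math

    def sun_unit_vector(azimuth_deg, elevation_deg, camera_heading_deg):
        """Unit vector (xv, yv, zv) as used in equation (5) below.

        Points from the shadow end point toward the sun, in the camera
        frame assumed as X right, Y up, Z along the viewing direction.
        azimuth_deg is the solar azimuth clockwise from north and
        camera_heading_deg is the compass heading of the camera's Z axis.
        """
        az = math.radians(azimuth_deg - camera_heading_deg)
        el = math.radians(elevation_deg)
        xv = math.cos(el) * math.sin(az)
        yv = math.sin(el)               # upward component toward the sun
        zv = math.cos(el) * math.cos(az)
        return xv, yv, zv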
When the shadow detection unit 11 has detected the shadow S of the three-dimensional object A cast by the sun and the solar information input unit 12 has calculated the unit vector (xv, yv, zv) indicating the direction of the sun's rays, the control unit 14 outputs a detection command for the three-dimensional object A to the three-dimensional object detection unit 13.
When the three-dimensional object detection unit 13 receives the detection command from the control unit 14, it detects the three-dimensional object A using the shadow S detected by the shadow detection unit 11 and the unit vector (xv, yv, zv) calculated by the solar information input unit 12.
That is, the three-dimensional object detection unit 13 uses the shadow S of the three-dimensional object A detected by the shadow detection unit 11 and the unit vector (xv, yv, zv) to calculate the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 8 (step ST16).
A calculation example of the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A in the three-dimensional object detection unit 13 is described concretely below.
First, with the optical center of the camera 2 as the coordinate origin, the Z axis is taken along the viewing direction of the camera 2, the Y axis is taken orthogonal to the Z axis and pointing upward, and the X axis is taken orthogonal to both the Y axis and the Z axis.
Since the coordinates (xb, yb, zb) of the point where the three-dimensional object A meets the road surface and the coordinates (xs, ys, zs) of the end point of the shadow S corresponding to the vertex of the three-dimensional object A lie on the same plane (the road surface of the vehicle), the three-dimensional object detection unit 13 obtains the camera image/road surface relationship in advance by plane projective transformation or the like.
When the image holding unit 4 acquires the image P1, the three-dimensional object detection unit 13 calculates the coordinates (xb, yb, zb) of the intersection of the three-dimensional object A and the road surface in FIG. 8 from the coordinates (ub, vb) on the image P1 according to the camera image/road surface relationship.
The three-dimensional object detection unit 13 also calculates the coordinates (xs, ys, zs) of the end point of the shadow S in FIG. 8 from the coordinates (us, vs) on the image P1 according to the camera image/road surface relationship.
However, the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A in FIG. 8 do not lie on the road surface of the vehicle 1 and therefore cannot be calculated from the coordinates (u0, v0) on the image P1 by the camera image/road surface relationship, so the three-dimensional object detection unit 13 calculates them as follows.
Since the vertex (x0, y0, z0) of the three-dimensional object A lies, as seen from the already calculated coordinates (xs, ys, zs), in the direction indicated by the unit vector (xv, yv, zv) calculated by the solar information input unit 12, the three-dimensional object detection unit 13 uses the coordinates (xs, ys, zs) and the unit vector (xv, yv, zv) to define the vertex coordinates (x0, y0, z0) as shown in the following equation (5):

(x0, y0, z0) = (xs, ys, zs) + t·(xv, yv, zv)   (5)
Having defined the vertex coordinates (x0, y0, z0) of the three-dimensional object A as in equation (5), the three-dimensional object detection unit 13 defines equation (2) in the same manner as the three-dimensional object detection unit 6 of Embodiment 1 and obtains the value of t in equation (5) using equations (5) and (2), thereby calculating the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A.
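Analogously to the lamp case, combining equation (5) with the projection relation of equation (2) gives a linear equation in t. A minimal sketch under the same assumptions as before, using the x-component; the inputs are placeholders.

    def vertex_from_sun_shadow(shadow_end, sun_vec, u0, f):
        """Equations (5) and (2): vertex of object A from its sun shadow.

        shadow_end = (xs, ys, zs): shadow end point on the road surface.
        sun_vec    = (xv, yv, zv): unit vector toward the sun, as in eq. (5).
        u0         : image x-coordinate of the object's vertex.
        f          : camera focal length.
        """
        xs, ys, zs = shadow_end
        xv, yv, zv = sun_vec
        # x-component of equation (5) combined with x0 = (u0/f) * z0:
        #   xs + t*xv = (u0/f) * (zs + t*zv)
        t = ((u0 / f) * zs - xs) / (xv - (u0 / f) * zv)
        return xs + t * xv, ys + t * yv, zs + t * zv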
When the three-dimensional object detection unit 13 has detected the three-dimensional object A, the control unit 14, in the same manner as the control unit 7 of Embodiment 1, converts the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A into information the user needs and displays it on the display unit 8 (step ST17).
For example, when displaying the height h of the three-dimensional object A, the height h is calculated from the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A using equation (4) above and displayed.
As is apparent from the above, according to Embodiment 2, the apparatus is provided with the image holding unit 4, which acquires and holds the image P1 photographed by the camera 2 while the lamp 3 is turned off and the image P2 photographed by the camera 2 while the lamp 3 is lit, and the shadow detection unit 11, which compares the images P1 and P2 held in the image holding unit 4 and detects the shadow S of the three-dimensional object A cast by the sun and projected on the image P1; the three-dimensional object detection unit 13 is configured to detect the three-dimensional object A using the shadow S detected by the shadow detection unit 11 and the unit vector (xv, yv, zv) calculated by the solar information input unit 12. Therefore, effects such as being able to detect the three-dimensional object A even while the vehicle 1 is stopped are obtained.
Embodiment 3.
FIG. 12 is a block diagram showing a vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention.
FIG. 13 is an explanatory diagram showing how the periphery of the vehicle is monitored by the vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention.
In FIGS. 12 and 13, the same reference numerals as in FIGS. 1 and 2 denote the same or corresponding parts, and their description is therefore omitted.
The lamp 3a, which is the first lamp, is installed, for example, at the rear of the vehicle 1 and irradiates light toward the periphery of the vehicle 1 under the instruction of the control unit 24.
The lamp 3b, which is the second lamp, is installed, for example, at the rear of the vehicle 1 at a position different from the lamp 3a, and irradiates light toward the periphery of the vehicle 1 from that different position under the instruction of the control unit 24.
Under the instruction of the control unit 24, the image holding unit 21 acquires and holds the image P1 (first image) photographed by the camera 2 while the lamps 3a and 3b are both turned off, the image P2 (second image) photographed by the camera 2 while the lamp 3a is lit and the lamp 3b is turned off, and the image P3 (third image) photographed by the camera 2 while the lamp 3a is turned off and the lamp 3b is lit.
The image holding unit 21 and the control unit 24 constitute image acquisition means.
Under the instruction of the control unit 24, the shadow detection unit 22 compares the images P1 and P2 held in the image holding unit 21 to detect the shadow Sa of the three-dimensional object A cast by the lamp 3a and projected on the image P2, and compares the images P1 and P3 held in the image holding unit 21 to detect the shadow Sb of the three-dimensional object A cast by the lamp 3b and projected on the image P3.
The shadow detection unit 22 and the control unit 24 constitute shadow detection means.
Under the instruction of the control unit 24, the three-dimensional object detection unit 23 detects the three-dimensional object A using the shadow Sa detected by the shadow detection unit 22, and also detects the three-dimensional object A using the shadow Sb detected by the shadow detection unit 22.
The three-dimensional object detection unit 23 and the control unit 24 constitute three-dimensional object detection means.
The control unit 24 is a processing unit that controls the operations of the lamps 3a and 3b, the image holding unit 21, the shadow detection unit 22, the three-dimensional object detection unit 23, and the display unit 25. The control unit 24 collates the three-dimensional object detected by the three-dimensional object detection unit 23 using the shadow Sa cast by the lamp 3a with the three-dimensional object detected using the shadow Sb cast by the lamp 3b, and evaluates the validity of the detection result of the three-dimensional object detection unit 23. That is, when the coordinates of the three-dimensional object detected using the shadow Sa cast by the lamp 3a differ from the coordinates of the three-dimensional object detected using the shadow Sb cast by the lamp 3b, the control unit 24 determines that the road surface R1 on which the vehicle 1 exists and the road surface R2 on which the shadow S of the three-dimensional object A exists do not coincide.
The control unit 24 constitutes detection result evaluation means.
The display unit 25 is composed of a liquid crystal display or the like. Under the instruction of the control unit 24, it displays images photographed by the camera 2 and information the user needs, and when the control unit 24 has determined that the road surface R1 on which the vehicle 1 exists and the road surface R2 on which the shadow S of the three-dimensional object A exists do not coincide, it displays, for example, information calling for attention.
In Embodiments 1 and 2 above, detecting the accurate three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A requires that the road surface R1 on which the vehicle 1 exists and the road surface R2 on which the shadow S of the three-dimensional object A exists coincide.
For example, as shown in FIG. 13, when the road surface R1 on which the vehicle 1 exists and the road surface R2 on which the shadow S of the three-dimensional object A exists are not coplanar, the shadow S projected on the road surface R2 is shorter than the shadow S of the three-dimensional object A in FIG. 2, so the accurate three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A cannot be calculated.
If inaccurate three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A are presented to the user, problems occur such as the vehicle 1 contacting the three-dimensional object A even though contact should not occur, so it is important to be able to verify whether the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A have been calculated accurately.
Therefore, Embodiment 3 makes it possible to verify whether the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A have been calculated accurately.
FIG. 14 is an explanatory diagram used for the detection result evaluation of the vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention.
In FIG. 14, the point D1 is the end point of the shadow Sa that would be projected by the lamp 3a if the shadow S of the three-dimensional object A existed on the road surface R1.
The point D2 is the point calculated from the image photographed by the camera 2 based on the shadow Sa when the shadow Sa of the three-dimensional object A is actually projected on the road surface R2 by the lamp 3a.
The point D3 is the end point of the shadow Sa of the three-dimensional object A actually projected on the road surface R2 by the lamp 3a.
The point D4 is the end point of the shadow Sb that would be projected on the road surface by the lamp 3b if the shadow S of the three-dimensional object A existed on the road surface R1.
The point D5 is the end point of the shadow Sb of the three-dimensional object A when it is actually projected on the road surface R2 by the lamp 3b.
When the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A are obtained with the single camera 2, converting the point D3 on the image photographed by the camera 2 into three-dimensional coordinates by the method described in Embodiment 1 yields the coordinates of the point D2.
The reason is that Embodiment 1 relies on the vehicle 1 and the shadow S of the three-dimensional object A lying on the same road surface R1, so the point is converted into the point D2 on the road surface R1 that lies in the same direction as the point D3 as seen from the camera 2.
In this case, equation (1) should correctly be solved on the basis that the coordinates of the three-dimensional object A lie between the lamp 3a and the point D3; in practice, however, they are obtained as lying between the point D2 on the road surface R1 and the lamp 3a, so different coordinates are obtained.
Similarly, when the coordinates of the three-dimensional object A are obtained using the other lamp 3b, they should be obtained as lying between the lamp 3b and the point D5, but they are obtained as lying between the point D4 on the road surface R1 and the lamp 3b, so different coordinates are obtained.
When the shadow S of the three-dimensional object A exists on the road surface R1, the coordinates of the three-dimensional object A obtained from the shadow Sa cast by the lamp 3a coincide with the coordinates of the three-dimensional object A obtained from the shadow Sb cast by the lamp 3b.
However, when the shadow S of the three-dimensional object A does not exist on the road surface R1, the obtained coordinates differ as described above, so it can be detected that the vehicle 1 and the shadow S do not lie on the same road surface.
FIG. 15 is a flowchart showing the processing contents of the vehicle periphery monitoring apparatus according to Embodiment 3 of the present invention.
Next, the operation will be described.
First, the control unit 24 outputs an image acquisition command to the image holding unit 21 while the lamps 3a and 3b are turned off.
When the image holding unit 21 receives the image acquisition command from the control unit 24, it acquires and holds the image P1 photographed by the camera 2 (step ST21).
Next, the control unit 24 turns on the lamp 3a (step ST22) and outputs an image acquisition command to the image holding unit 21. At this time, the lamp 3b is not turned on and remains off.
When the image holding unit 21 receives the image acquisition command from the control unit 24, it acquires and holds the image P2 photographed by the camera 2 (step ST23).
Next, the control unit 24 turns off the lamp 3a and turns on the lamp 3b (step ST24), and then outputs an image acquisition command to the image holding unit 21.
When the image holding unit 21 receives the image acquisition command from the control unit 24, it acquires and holds the image P3 photographed by the camera 2 (step ST25).
When the image holding unit 21 has acquired the images P1, P2, and P3, the control unit 24 outputs a command to the shadow detection unit 22 to detect the shadows Sa and Sb of the three-dimensional object A cast by the lamps 3a and 3b.
When the shadow detection unit 22 receives the detection command for the shadows Sa and Sb from the control unit 24, it compares the images P1 and P2 held in the image holding unit 21 and detects the shadow Sa of the three-dimensional object A cast by the lamp 3a and projected on the image P2.
It also compares the images P1 and P3 held in the image holding unit 21 and detects the shadow Sb of the three-dimensional object A cast by the lamp 3b and projected on the image P3 (step ST26).
Since the shadow detection method of the shadow detection unit 22 is the same as that of the shadow detection unit 5 in Embodiment 1, its detailed description is omitted.
When the shadow detection unit 22 has detected the shadow Sa of the three-dimensional object A cast by the lamp 3a and the shadow Sb of the three-dimensional object A cast by the lamp 3b, the control unit 24 outputs a detection command for the three-dimensional object A to the three-dimensional object detection unit 23.
When the three-dimensional object detection unit 23 receives the detection command from the control unit 24, it detects the three-dimensional object A using the shadow Sa detected by the shadow detection unit 22.
That is, in the same manner as the three-dimensional object detection unit 6 of Embodiment 1, the three-dimensional object detection unit 23 uses the shadow Sa of the three-dimensional object A detected by the shadow detection unit 22 to calculate the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A (step ST27).
Upon receiving the detection command for the three-dimensional object A from the control unit 24, the three-dimensional object detection unit 23 likewise detects the three-dimensional object A using the shadow Sb detected by the shadow detection unit 22.
That is, in the same manner as the three-dimensional object detection unit 6 of Embodiment 1, the three-dimensional object detection unit 23 uses the shadow Sb of the three-dimensional object A detected by the shadow detection unit 22 to calculate the coordinates (x0, y0, z0) of the vertex of the three-dimensional object A (step ST27).
When the three-dimensional object detection unit 23 has calculated the vertex coordinates (x0, y0, z0) of the three-dimensional object A, the control unit 24 compares the vertex coordinates (x0, y0, z0) detected using the shadow Sa cast by the lamp 3a with the vertex coordinates (x0, y0, z0) detected using the shadow Sb cast by the lamp 3b (step ST28).
If the two sets of coordinates (x0, y0, z0) are the same, the control unit 24 determines that the road surface R1 on which the vehicle 1 exists and the road surface R2 on which the shadow S of the three-dimensional object A exists coincide.
On the other hand, if the two sets of coordinates (x0, y0, z0) differ, it determines that the road surface R1 on which the vehicle 1 exists and the road surface R2 on which the shadow S of the three-dimensional object A exists do not coincide.
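In practice the comparison in step ST28 reduces to checking whether the two vertex estimates agree within a tolerance, since exact equality would never hold under measurement noise; the tolerance below is a hypothetical parameter. A minimal sketch, reusing the vertex_from_lamp_shadow function sketched for Embodiment 1:

    import math

    def surfaces_coincide(lamp_a, shadow_end_a, lamp_b, shadow_end_b,
                          u0, f, tol_m=0.05):
        """Step ST28: compare the vertices estimated from lamps 3a and 3b.

        Returns True when the two estimates agree within tol_m, i.e. when
        the road surface under the shadow is taken to coincide with the
        surface the vehicle stands on.
        """
        va = vertex_from_lamp_shadow(lamp_a, shadow_end_a, u0, f)
        vb = vertex_from_lamp_shadow(lamp_b, shadow_end_b, u0, f)
        return math.dist(va, vb) <= tol_m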
When the control unit 24 determines that the road surface R1 on which the vehicle 1 exists and the road surface R2 on which the shadow S of the three-dimensional object A exists coincide, it converts the three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A calculated by the three-dimensional object detection unit 23 into information the user needs and displays it on the display unit 25, in the same manner as the control unit 7 of Embodiment 1 (step ST29).
On the other hand, when it determines that the road surface R1 on which the vehicle 1 exists and the road surface R2 on which the shadow S of the three-dimensional object A exists do not coincide, it displays caution information on the display unit 25 indicating that the accurate three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A cannot be calculated (step ST30).
As is apparent from the above, according to Embodiment 3, the apparatus is provided with the shadow detection unit 22, which compares the images P1 and P2 held in the image holding unit 21 to detect the shadow Sa of the three-dimensional object A cast by the lamp 3a and projected on the image P2 and compares the images P1 and P3 held in the image holding unit 21 to detect the shadow Sb of the three-dimensional object A cast by the lamp 3b and projected on the image P3, and the three-dimensional object detection unit 23, which detects the three-dimensional object A using the shadow Sa detected by the shadow detection unit 22 and also using the shadow Sb detected by the shadow detection unit 22; the control unit 24 is configured to determine, when the coordinates of the three-dimensional object detected using the shadow Sa cast by the lamp 3a differ from the coordinates detected using the shadow Sb cast by the lamp 3b, that the road surface R1 on which the vehicle 1 exists and the road surface R2 on which the shadow S of the three-dimensional object A exists do not coincide. Therefore, when the accurate three-dimensional coordinates (x0, y0, z0) of the three-dimensional object A cannot be calculated because the road surface R1 on which the vehicle 1 exists and the road surface R2 on which the shadow S of the three-dimensional object A exists do not coincide, the user can be alerted and unexpected contact can be avoided.
As described above, the vehicle periphery monitoring apparatus according to the present invention is suitable for applications that detect three-dimensional objects existing in the periphery of a vehicle and avoid unexpected contact.

Claims (6)

1. A vehicle periphery monitoring apparatus comprising: a camera that photographs the periphery of a vehicle; a lamp that irradiates light toward the periphery of the vehicle; image acquisition means for acquiring a first image photographed by the camera while the lamp is turned off and a second image photographed by the camera while the lamp is lit; shadow detection means for comparing the first image and the second image acquired by the image acquisition means and detecting the shadow of a three-dimensional object cast by the lamp and projected on the second image; and three-dimensional object detection means for detecting the three-dimensional object using the shadow of the three-dimensional object detected by the shadow detection means.
2. The vehicle periphery monitoring apparatus according to claim 1, wherein the three-dimensional object detection means calculates the height of the three-dimensional object using the shadow of the three-dimensional object detected by the shadow detection means.
3. The vehicle periphery monitoring apparatus according to claim 1, wherein, when a shadow of the three-dimensional object cast by the sun is projected on the first image, the shadow detection means detects the shadow of the three-dimensional object cast by the sun instead of the shadow of the three-dimensional object cast by the lamp.
4. The vehicle periphery monitoring apparatus according to claim 3, wherein the three-dimensional object detection means detects the direction of the light emitted by the sun and detects the three-dimensional object using that direction of the light and the shadow of the three-dimensional object cast by the sun.
5. A vehicle periphery monitoring apparatus comprising: a camera that photographs the periphery of a vehicle; a first lamp that irradiates light toward the periphery of the vehicle; a second lamp that irradiates light toward the periphery of the vehicle from a position different from the first lamp; image acquisition means for acquiring a first image photographed by the camera while the first and second lamps are turned off, a second image photographed by the camera while the first lamp is lit and the second lamp is turned off, and a third image photographed by the camera while the first lamp is turned off and the second lamp is lit; shadow detection means for comparing the first image and the second image acquired by the image acquisition means to detect the shadow of a three-dimensional object cast by the first lamp and projected on the second image, and comparing the first image and the third image acquired by the image acquisition means to detect the shadow of the three-dimensional object cast by the second lamp and projected on the third image; three-dimensional object detection means for detecting the three-dimensional object using the shadow cast by the first lamp detected by the shadow detection means and detecting the three-dimensional object using the shadow cast by the second lamp detected by the shadow detection means; and detection result evaluation means for collating the three-dimensional object detected by the three-dimensional object detection means using the shadow cast by the first lamp with the three-dimensional object detected using the shadow cast by the second lamp, thereby evaluating the validity of the detection result of the three-dimensional object detection means.
6. The vehicle periphery monitoring apparatus according to claim 5, wherein, when the coordinates of the three-dimensional object detected by the three-dimensional object detection means using the shadow cast by the first lamp differ from the coordinates of the three-dimensional object detected using the shadow cast by the second lamp, the detection result evaluation means determines that the road surface on which the vehicle exists and the road surface on which the shadow of the three-dimensional object exists do not coincide.
PCT/JP2008/002484 2008-09-09 2008-09-09 Vehicle periphery monitoring apparatus WO2010029592A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2008/002484 WO2010029592A1 (en) 2008-09-09 2008-09-09 Vehicle periphery monitoring apparatus
JP2010528535A JP5295254B2 (en) 2008-09-09 2008-09-09 Vehicle perimeter monitoring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2008/002484 WO2010029592A1 (en) 2008-09-09 2008-09-09 Vehicle periphery monitoring apparatus

Publications (1)

Publication Number Publication Date
WO2010029592A1 true WO2010029592A1 (en) 2010-03-18

Family

ID=42004863

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2008/002484 WO2010029592A1 (en) 2008-09-09 2008-09-09 Vehicle periphery monitoring apparatus

Country Status (2)

Country Link
JP (1) JP5295254B2 (en)
WO (1) WO2010029592A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10325163B2 (en) 2016-11-22 2019-06-18 Ford Global Technologies, Llc Vehicle vision

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014105297A1 (en) * 2014-04-14 2015-10-15 Connaught Electronics Ltd. Method for detecting an object in a surrounding area of a motor vehicle by means of a camera system of the motor vehicle, camera system and motor vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6022611A (en) * 1984-06-28 1985-02-05 Matsushita Electric Ind Co Ltd Measuring device for height
JP2000146547A (en) * 1998-11-17 2000-05-26 Toyota Central Res & Dev Lab Inc Detector for shape of obstacle for vehicle
JP2001298036A (en) * 2000-02-08 2001-10-26 Toshiba Corp Methods and devices for measuring height and position of bump, and manufacturing and packaging methods of semiconductor device
JP2002074595A (en) * 2000-08-29 2002-03-15 Hitachi Ltd Safe driving support system for vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4944635B2 (en) * 2007-02-15 2012-06-06 本田技研工業株式会社 Environment recognition device


Also Published As

Publication number Publication date
JPWO2010029592A1 (en) 2012-02-02
JP5295254B2 (en) 2013-09-18

Similar Documents

Publication Publication Date Title
CN109690623B (en) System and method for recognizing pose of camera in scene
JP3951984B2 (en) Image projection method and image projection apparatus
US9835445B2 (en) Method and system for projecting a visible representation of infrared radiation
JP5013184B2 (en) Driving support device and computer program
EP2820841B1 (en) A method and system for performing alignment of a projection image to detected infrared (ir) radiation information
JP2005182306A (en) Vehicle display device
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
JP2002092647A (en) Information presentation system and model error detection system
JP2004144557A (en) Three-dimensional visual sensor
JP2010085186A (en) Calibration device for on-vehicle camera
KR102006291B1 (en) Method for estimating pose of moving object of electronic apparatus
CN111487320B (en) Three-dimensional ultrasonic imaging method and system based on three-dimensional optical imaging sensor
CN111226094A (en) Information processing device, information processing method, program, and moving object
JP2006044596A (en) Display device for vehicle
JP2006234703A (en) Image processing device, three-dimensional measuring device, and program for image processing device
JP5295254B2 (en) Vehicle perimeter monitoring device
JP2007172378A (en) Apparatus for specifying object to be gazed at
JPH06189906A (en) Visual axial direction measuring device
JP2008183933A (en) Noctovision equipment
JP4857159B2 (en) Vehicle driving support device
WO2019119358A1 (en) Method, device and system for displaying augmented reality poi information
JP7013509B2 (en) Drawing projection system, drawing projection method and program in construction machinery
JP5230354B2 (en) POSITIONING DEVICE AND CHANGED BUILDING DETECTION DEVICE
JP2006163569A (en) Object detector and object detection method
JP2004364112A (en) Device for displaying vehicle surroundings

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08808425

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2010528535

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08808425

Country of ref document: EP

Kind code of ref document: A1