WO2010116801A1 - Image processing device - Google Patents

Image processing device

Info

Publication number
WO2010116801A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
bird
obstacle
dimensional object
Prior art date
Application number
PCT/JP2010/052530
Other languages
French (fr)
Japanese (ja)
Inventor
洋平 石井
亮平 山本
Original Assignee
三洋電機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三洋電機株式会社
Publication of WO2010116801A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/105Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/60Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
    • B60R2300/602Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective with an adjustable viewpoint
    • B60R2300/605Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective with an adjustable viewpoint the adjustment being automatic
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/80Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
    • B60R2300/8093Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for obstacle warning

Definitions

  • The present invention relates to an image processing apparatus, and more particularly to an image processing apparatus that creates a bird's-eye view image representing a scene as seen from above a reference plane, based on a scene image output from a camera that captures the scene in a direction obliquely intersecting the reference plane.
  • An example of this type of device is disclosed in Patent Document 1.
  • According to this background art, four cameras are mounted on the vehicle so that the fields of view of two adjacent cameras partially overlap.
  • The surroundings of the vehicle are captured by these cameras, and the scene image output from each camera is converted into a bird's-eye view image.
  • Boundary lines are assigned to the partial images corresponding to the common fields of view in the converted bird's-eye view images.
  • The four bird's-eye view images corresponding to the four cameras are combined with one another through a trimming process that refers to these boundary lines.
  • However, when an obstacle (three-dimensional object) is detected in a common field of view, the position of the boundary line is changed so as to avoid the obstacle image, thereby maintaining the quality of the obstacle image.
  • However, when an obstacle is captured in the field of view common to two adjacent cameras, the nature of bird's-eye view images means that the posture of the obstacle image appearing in the bird's-eye view image corresponding to one camera differs from the posture of the obstacle image appearing in the bird's-eye view image corresponding to the other camera. Consequently, if the camera used to render the obstacle in the combined bird's-eye view image is switched when the boundary line is changed, the posture of the obstacle image changes discontinuously.
  • Therefore, a main object of the present invention is to provide an image processing apparatus capable of improving the visibility of a three-dimensional object in a common field of view.
  • An image processing apparatus according to the present invention includes: a plurality of cameras, each of which partially shares a common field of view and captures a scene in a direction obliquely intersecting a reference plane; creation means for creating a bird's-eye view image with respect to the reference plane based on a plurality of scene images respectively output from the plurality of cameras; determination means for determining the positional relationship between a three-dimensional object existing in the common field of view and the plurality of cameras; and adjustment means for adjusting the posture of the three-dimensional object image on the bird's-eye view image created by the creation means, based on the determination result of the determination means.
  • Preferably, the determination means determines whether or not the three-dimensional object is sandwiched between a plurality of reference lines extending in a predetermined manner from the plurality of cameras toward the scene, and the adjustment means adjusts the posture of the three-dimensional object image so that it follows the plurality of reference lines.
  • In one aspect, the predetermined manner corresponds to a manner in which the plurality of reference lines are parallel to one another.
  • In another aspect, the three-dimensional object image corresponds to a bird's-eye view image of the three-dimensional object captured by a reference camera, which is one of the plurality of cameras, and the adjustment means includes: definition means for defining a line connecting the reference camera and the three-dimensional object when the three-dimensional object belongs to the field of view sandwiched between the plurality of reference lines; and correction means for correcting the posture of the three-dimensional object image with reference to the angular difference between the line defined by the definition means and the reference line.
  • More preferably, the adjustment means includes selection means for selecting, as the reference camera, the camera closer to the three-dimensional object among the plurality of cameras when the three-dimensional object falls outside the field of view sandwiched between the plurality of reference lines.
  • Preferably, restriction means is further provided for restricting the adjustment processing of the adjustment means when the three-dimensional object meets a specific condition.
  • According to the present invention, the posture of the three-dimensional object image on the bird's-eye view image is adjusted based on the positional relationship between the three-dimensional object existing in the common field of view and the plurality of cameras. This suppresses changes in the posture of the three-dimensional object image caused by changes in the positional relationship between the three-dimensional object and the cameras, and improves the visibility of the three-dimensional object in the common field of view.
  • FIG. 1 is a block diagram showing the basic configuration of the present invention.
  • FIG. 2 is a block diagram showing the configuration of one embodiment of the present invention.
  • FIG. 3 is an illustrative view showing the fields of view captured by the plurality of cameras attached to the vehicle.
  • FIG. 4(A) is an illustrative view showing an example of a bird's-eye view image based on the output of the front camera, FIG. 4(B) an example based on the output of the right camera, FIG. 4(C) an example based on the output of the left camera, and FIG. 4(D) an example based on the output of the rear camera.
  • FIG. 5 is an illustrative view showing part of the operation of creating the all-around bird's-eye view image, and FIG. 6 is an illustrative view showing an example of the created all-around bird's-eye view image.
  • FIG. 7 is an illustrative view showing an example of the driving assistance image displayed by the display device.
  • FIG. 8 is an illustrative view showing the angle of a camera attached to the vehicle, and FIG. 9 is an illustrative view showing the relationship between the camera coordinate system, the coordinate system of the imaging surface, and the world coordinate system.
  • FIG. 10 is an illustrative view showing an example of the vehicle and an obstacle existing around it, and FIG. 11 is an illustrative view showing another part of the operation of creating the driving assistance image.
  • FIGS. 12, 13, and 14 are illustrative views showing examples of the operation of reproducing an obstacle image.
  • FIG. 15(A) is an illustrative view showing the reproduced state of an obstacle image before posture adjustment, and FIG. 15(B) is an illustrative view showing the reproduced state after posture adjustment.
  • FIGS. 16 to 21 are flowcharts showing parts of the operation of the CPU applied to the embodiment in FIG. 2.
  • FIGS. 22, 23, and 24 are illustrative views showing other examples of the allocation of the adjustment areas.
  • FIG. 25 is a flowchart showing part of the operation of the CPU applied to another embodiment.
  • Referring to FIG. 1, the image processing apparatus of the present invention is basically configured as follows.
  • Each of the plurality of cameras 1, 1, ... partially shares a common field of view and captures a scene in a direction that obliquely intersects the reference plane.
  • The creation means 2 creates a bird's-eye view image with respect to the reference plane based on the plurality of scene images respectively output from the plurality of cameras 1, 1, ....
  • The determination means 3 determines the positional relationship between a three-dimensional object existing in the common field of view and the plurality of cameras 1, 1, ....
  • The adjustment means 4 adjusts the posture of the three-dimensional object image on the bird's-eye view image created by the creation means 2, based on the determination result of the determination means 3.
  • The camera 1 captures the scene in a direction that obliquely crosses the reference plane. For this reason, when a three-dimensional object exists on the reference plane, its image appears in the bird's-eye view image in a posture of falling down along the line connecting the three-dimensional object and the camera 1. Moreover, the posture of the three-dimensional object image changes as the positional relationship between the three-dimensional object and the camera 1 changes.
  • According to the present invention, the posture of the three-dimensional object image on the bird's-eye view image is adjusted based on the positional relationship between the three-dimensional object existing in the common field of view and the plurality of cameras 1, 1, .... This suppresses changes in the posture of the three-dimensional object image caused by changes in that positional relationship and improves the visibility of the three-dimensional object in the common field of view. A rough code outline of this overall flow is sketched below.
  • The steering support device 10 of this embodiment, shown in FIG. 2, includes four cameras C_1 to C_4.
  • The cameras C_1 to C_4 output scene images P_1 to P_4 every 1/30 second in synchronization with a common timing signal.
  • The output scene images P_1 to P_4 are supplied to the image processing circuit 12.
  • Referring to FIG. 3, the camera C_1 is installed at the center of the front of the vehicle 100 in a posture in which its optical axis extends obliquely downward toward the front of the vehicle 100.
  • The camera C_2 is installed at the center of the right side of the vehicle 100 in a posture in which its optical axis extends obliquely downward toward the right of the vehicle 100.
  • The camera C_3 is installed at the upper rear of the vehicle 100 in a posture in which its optical axis extends obliquely downward toward the rear of the vehicle 100.
  • The camera C_4 is installed at the center of the left side of the vehicle 100 in a posture in which its optical axis extends obliquely downward toward the left of the vehicle 100.
  • The scene around the vehicle 100 is thus captured by the cameras C_1 to C_4 from directions that obliquely intersect the road surface.
  • The camera C_1 has a field of view VW_1 covering the front of the vehicle 100, the camera C_2 has a field of view VW_2 covering the right of the vehicle 100, the camera C_3 has a field of view VW_3 covering the rear of the vehicle 100, and the camera C_4 has a field of view VW_4 covering the left of the vehicle 100.
  • The fields of view VW_1 and VW_2 share a common field of view VW_12, the fields of view VW_2 and VW_3 share a common field of view VW_23, the fields of view VW_3 and VW_4 share a common field of view VW_34, and the fields of view VW_4 and VW_1 share a common field of view VW_41.
  • Returning to FIG. 2, the CPU 12p provided in the image processing circuit 12 generates the bird's-eye view image BEV_1 shown in FIG. 4(A) based on the scene image P_1 output from the camera C_1, and generates the bird's-eye view image BEV_2 shown in FIG. 4(B) based on the scene image P_2 output from the camera C_2.
  • The CPU 12p also generates the bird's-eye view image BEV_3 shown in FIG. 4(C) based on the scene image P_3 output from the camera C_3, and generates the bird's-eye view image BEV_4 shown in FIG. 4(D) based on the scene image P_4 output from the camera C_4.
  • The bird's-eye view image BEV_1 corresponds to an image captured by a virtual camera looking down on the field of view VW_1 in the vertical direction, and likewise the bird's-eye view images BEV_2, BEV_3, and BEV_4 correspond to images captured by virtual cameras looking down on the fields of view VW_2, VW_3, and VW_4, respectively, in the vertical direction.
  • According to FIGS. 4(A) to 4(D), the bird's-eye view image BEV_1 has a bird's-eye coordinate system X1·Y1, the bird's-eye view image BEV_2 has a bird's-eye coordinate system X2·Y2, the bird's-eye view image BEV_3 has a bird's-eye coordinate system X3·Y3, and the bird's-eye view image BEV_4 has a bird's-eye coordinate system X4·Y4.
  • These bird's-eye view images BEV_1 to BEV_4 are held in the work area W1 of the memory 12m.
  • The CPU 12p then deletes, from each of the bird's-eye view images BEV_1 to BEV_4, the portion of the image lying outside the boundary line BL, and joins the bird's-eye view images BEV_1 to BEV_4 remaining after the deletion (see FIG. 5) to one another by rotation and translation. As a result, the all-around bird's-eye view image shown in FIG. 6 is obtained in the work area W2 of the memory 12m.
  • In FIG. 6, the hatched overlapping area OL_12 corresponds to the area that reproduces the common field of view VW_12, the overlapping area OL_23 to the area that reproduces VW_23, the overlapping area OL_34 to the area that reproduces VW_34, and the overlapping area OL_41 to the area that reproduces VW_41.
  • The display device 16 installed at the driver's seat extracts, from the all-around bird's-eye view image in the work area W2, the partial image D1 in which the overlapping areas OL_12 to OL_41 are located at the four corners, and pastes a vehicle image D2 imitating the top of the vehicle 100 at the center of the extracted image D1.
  • As a result, the driving assistance image shown in FIG. 7 is displayed on the monitor screen. A simplified sketch of this composition step is given below.
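As an illustration of the trimming and joining just described, the following Python sketch combines four bird's-eye view images that are assumed to have already been rotated and translated into a common panorama frame, and pastes a vehicle image at the center. The masks, sizes, and names are assumptions for the example, not details taken from the patent.

```python
import numpy as np

def compose_panorama(bevs, masks, vehicle_img):
    """Combine four bird's-eye view images already warped into one panorama frame.

    bevs        : four HxWx3 images (BEV_1 to BEV_4 after rotation/translation)
    masks       : four HxW boolean masks keeping the region inside each boundary line BL
    vehicle_img : small top-view image D2 of the vehicle to paste at the center
    """
    h, w = bevs[0].shape[:2]
    panorama = np.zeros((h, w, 3), dtype=np.uint8)
    for bev, mask in zip(bevs, masks):
        panorama[mask] = bev[mask]            # trimming at BL: keep only the inner region

    vh, vw = vehicle_img.shape[:2]            # paste the vehicle image D2 at the center
    top, left = (h - vh) // 2, (w - vw) // 2
    panorama[top:top + vh, left:left + vw] = vehicle_img
    return panorama
```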
  • Next, the manner of creating the bird's-eye view images BEV_1 to BEV_4 will be described. Since the bird's-eye view images BEV_1 to BEV_4 are all created in the same manner, the creation of the bird's-eye view image BEV_3 will be described as representative of them.
  • Referring to FIG. 8, the camera C_3 is disposed at the rear of the vehicle 100 so as to face obliquely rearward and downward. If the depression angle of the camera C_3 is "θd", the angle θ shown in FIG. 7 corresponds to "180° - θd". The angle θ is defined in the range 90° < θ < 180°.
  • FIG. 9 shows the relationship between the camera coordinate system X·Y·Z, the coordinate system Xp·Yp of the imaging surface S of the camera C_3, and the world coordinate system Xw·Yw·Zw.
  • The camera coordinate system X·Y·Z is a three-dimensional coordinate system with the X, Y, and Z axes as coordinate axes.
  • The coordinate system Xp·Yp is a two-dimensional coordinate system with the Xp and Yp axes as coordinate axes.
  • The world coordinate system Xw·Yw·Zw is a three-dimensional coordinate system with the Xw, Yw, and Zw axes as coordinate axes.
  • In the camera coordinate system X·Y·Z, the optical center of the camera C_3 is taken as the origin O, the Z axis is defined in the optical-axis direction, the X axis is defined in the direction perpendicular to the Z axis and parallel to the road surface, and the Y axis is defined in the direction orthogonal to both the Z axis and the X axis.
  • In the coordinate system Xp·Yp of the imaging surface S, the center of the imaging surface S is taken as the origin, the Xp axis is defined in the horizontal direction of the imaging surface S, and the Yp axis is defined in its vertical direction.
  • In the world coordinate system Xw·Yw·Zw, the intersection of the road surface with the vertical line passing through the origin O of the camera coordinate system X·Y·Z is taken as the origin Ow, the Yw axis is defined in the direction perpendicular to the road surface, the Xw axis is defined in the direction parallel to the X axis of the camera coordinate system X·Y·Z, and the Zw axis is defined in the direction orthogonal to the Xw and Yw axes.
  • The distance from the Xw axis to the X axis is "h", and the obtuse angle formed by the Zw axis and the Z axis corresponds to the angle θ described above.
  • A conversion formula between the coordinates (x, y, z) of the camera coordinate system X·Y·Z and the coordinates (xw, yw, zw) of the world coordinate system Xw·Yw·Zw is expressed by Equation 1.
  • Equation 3 is obtained based on Equation 1 and Equation 2.
  • Equation 3 is a conversion formula between the coordinates (xp, yp) of the coordinate system Xp·Yp of the imaging surface S and the coordinates (xw, zw) of the two-dimensional road-surface coordinate system Xw·Zw.
  • A bird's-eye coordinate system X3·Y3 is defined as the coordinate system of the bird's-eye view image BEV_3 shown in FIG. 4(C).
  • The bird's-eye coordinate system X3·Y3 is a two-dimensional coordinate system with the X3 and Y3 axes as coordinate axes.
  • Coordinates in the bird's-eye coordinate system X3·Y3 are expressed as (x3, y3), and the position of each pixel forming the bird's-eye view image BEV_3 is represented by coordinates (x3, y3); "x3" and "y3" denote the X3-axis component and the Y3-axis component, respectively.
  • The projection from the two-dimensional coordinate system Xw·Zw representing the road surface onto the bird's-eye coordinate system X3·Y3 corresponds to a so-called parallel projection.
  • Letting the height of the virtual camera be H, the conversion formula between the coordinates (xw, zw) of the two-dimensional coordinate system Xw·Zw and the coordinates (x3, y3) of the bird's-eye coordinate system X3·Y3 is expressed by Equation 4. The height H of the virtual camera is determined in advance.
  • Equation 7 corresponds to a conversion formula for converting the coordinates (xp, yp) of the coordinate system Xp·Yp of the imaging surface S into the coordinates (x3, y3) of the bird's-eye coordinate system X3·Y3.
  • The coordinates (xp, yp) of the coordinate system Xp·Yp of the imaging surface S represent the coordinates of the scene image P_3 captured by the camera C_3. Accordingly, the scene image P_3 from the camera C_3 is converted into the bird's-eye view image BEV_3 by using Equation 7. In practice, the scene image P_3 is first subjected to image processing such as lens-distortion correction and is then converted into the bird's-eye view image BEV_3 by Equation 7. An illustrative sketch of this kind of conversion is given below.
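Equations 1 to 7 themselves are not reproduced in this text, so the following Python sketch only illustrates a generic inverse perspective mapping that is consistent with the geometry described above (camera height h, depression angle θd = 180° - θ, focal length f in pixels, virtual-camera height H). It is not the patent's exact formula, and all numeric defaults are assumptions.

```python
import numpy as np

def image_to_birds_eye(xp, yp, f=800.0, h=1.0, theta_d_deg=30.0, H=8.0):
    """Map an imaging-surface point (xp, yp) to bird's-eye coordinates (x3, y3).

    Generic pinhole geometry: the camera sits at height h above the road and looks
    down with depression angle theta_d; the bird's-eye view is a parallel projection
    of the road plane scaled by f/H, as if seen by a virtual camera at height H.
    """
    td = np.deg2rad(theta_d_deg)
    # Road-plane coordinates (xw lateral, zw forward) of the pixel's ground point.
    zw = h * (f * np.cos(td) - yp * np.sin(td)) / (yp * np.cos(td) + f * np.sin(td))
    xw = xp * (h * np.sin(td) + zw * np.cos(td)) / f
    # Parallel projection onto the bird's-eye image plane.
    scale = f / H
    return xw * scale, zw * scale

# Example: a point 100 px to the right of and 50 px below the image center.
x3, y3 = image_to_birds_eye(100.0, 50.0)
```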
  • The obstacle captured by the camera C_3 appears in the bird's-eye view image BEV_3 as the obstacle image OBJ_3.
  • The same obstacle, captured by the camera C_4, appears in the bird's-eye view image BEV_4 as the obstacle image OBJ_4.
  • The postures of the obstacle images OBJ_3 and OBJ_4 differ from each other because of the difference between the viewpoint of the camera C_3 and that of the camera C_4. That is, the obstacle image OBJ_3 is reproduced so as to fall down along the connection line CL_3 connecting the camera C_3 and the bottom of the obstacle 200, whereas the obstacle image OBJ_4 is reproduced so as to fall down along the connection line CL_4 connecting the camera C_4 and the bottom of the obstacle 200.
  • To address this, the CPU 12p executes the following processing.
  • The reference line RF1a extends from the camera C_1 toward the common field of view VW_41 in parallel with the boundary line BL on the overlapping area OL_41, and the reference line RF1b extends from the camera C_1 toward the common field of view VW_12 in parallel with the boundary line BL on the overlapping area OL_12.
  • Likewise, the reference lines RF2a and RF2b extend from the camera C_2 toward the common fields of view VW_12 and VW_23, the reference lines RF3a and RF3b extend from the camera C_3 toward the common fields of view VW_23 and VW_34, and the reference lines RF4a and RF4b extend from the camera C_4 toward the common fields of view VW_34 and VW_41, in each case in parallel with the boundary line BL on the corresponding overlapping area OL_12, OL_23, OL_34, or OL_41.
  • The adjustment area BRG_12 is assigned to the overlapping area OL_12 so as to extend in a band toward the right front of the vehicle 100, the adjustment area BRG_23 is assigned to the overlapping area OL_23 so as to extend toward the right rear of the vehicle 100, the adjustment area BRG_34 is assigned to the overlapping area OL_34 so as to extend in a band toward the left rear of the vehicle 100, and the adjustment area BRG_41 is assigned to the overlapping area OL_41 so as to extend toward the left front of the vehicle 100.
  • The widthwise edges of the adjustment area BRG_12 extend in parallel with the reference lines RF1b and RF2a, and the widthwise edges of the adjustment area BRG_23 extend in parallel with the reference lines RF2b and RF3a.
  • The widthwise edges of the adjustment area BRG_34 extend in parallel with the reference lines RF3b and RF4a, and the widthwise edges of the adjustment area BRG_41 extend in parallel with the reference lines RF4b and RF1a. A small geometric sketch of how a point can be tested against such a band is given below.
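The positional test against such a band can be pictured with the following geometric sketch. It is a generic illustration, not the patent's procedure; the coordinates, the direction vector, and the return labels are assumptions made for the example.

```python
import numpy as np

def side_of_band(p, cam_m, cam_n, direction):
    """Classify point p relative to the band between two parallel reference lines.

    cam_m, cam_n : positions of cameras C_M and C_N in bird's-eye coordinates; the
                   reference lines pass through these points, parallel to `direction`
    direction    : vector along the reference lines (parallel to the boundary line BL)
    """
    p, cam_m, cam_n, direction = map(np.asarray, (p, cam_m, cam_n, direction))
    normal = np.array([-direction[1], direction[0]])   # normal to the reference lines

    d_m = np.dot(p - cam_m, normal)   # signed offset of p from the line through C_M
    d_n = np.dot(p - cam_n, normal)   # signed offset of p from the line through C_N

    # Orient the normal so that positive offsets point from C_M's line toward C_N's line.
    if np.dot(cam_n - cam_m, normal) < 0:
        d_m, d_n = -d_m, -d_n

    if d_m < 0:
        return "camera_M_side"        # beyond the reference line through C_M
    if d_n > 0:
        return "camera_N_side"        # beyond the reference line through C_N
    return "inside_band"              # sandwiched between the two reference lines

# Assumed example: obstacle bottom at (2.0, -3.5) with hypothetical camera positions.
print(side_of_band([2.0, -3.5], cam_m=[0.0, -4.0], cam_n=[-1.0, 0.0], direction=[0.7, -0.7]))
```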
  • In reproducing the obstacle 200 on the all-around bird's-eye view image, it is first determined, for each of the common fields of view VW_12 to VW_41, whether or not any obstacle exists.
  • In this example, the obstacle 200 exists in the common field of view VW_34, and the corresponding obstacle images OBJ_3 and OBJ_4 are reproduced in the bird's-eye view images BEV_3 and BEV_4, respectively. Therefore, the obstacle image OBJ_3 is cut out from the bird's-eye view image BEV_3, and the obstacle image OBJ_4 is cut out from the bird's-eye view image BEV_4. Subsequently, it is determined what positional relationship the coordinates corresponding to the bottom of the obstacle 200 have with respect to the adjustment area BRG_34.
  • If the coordinates corresponding to the bottom of the obstacle 200 are on the camera C_3 side of the adjustment area BRG_34, the obstacle image OBJ_3 is determined as the selected image S_34 for the common field of view VW_34.
  • If the coordinates corresponding to the bottom of the obstacle 200 are on the camera C_4 side of the adjustment area BRG_34, the obstacle image OBJ_4 is determined as the selected image S_34 for the common field of view VW_34.
  • If the coordinates corresponding to the bottom of the obstacle 200 belong to the adjustment area BRG_34, a connection line CL_4 connecting the camera C_4 and the bottom of the obstacle 200 is defined on the bird's-eye view image BEV_4. Further, the difference between the angle of the defined connection line CL_4 and the angle of the boundary line BL assigned to the common field of view VW_34 (that is, the angle of the reference lines RF3b and RF4a) is calculated as "α".
  • The posture of the obstacle image OBJ_4 is then corrected, with reference to the calculated difference α, so as to lie along the boundary line BL (that is, the reference lines RF3b and RF4a).
  • As a result, the direction in which the obstacle image OBJ_4 falls coincides with the length direction of the boundary line BL (that is, of the reference lines RF3b and RF4a), as shown in the lower part of FIG. 14.
  • The obstacle image OBJ_4 with the corrected posture is then determined as the selected image S_34 for the common field of view VW_34. A sketch of such a posture correction is given below.
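The posture correction can be pictured as rotating the cut-out obstacle image about the point corresponding to the obstacle's bottom by the angle difference α. The OpenCV sketch below illustrates this idea under assumed names and values; it is not the patent's implementation.

```python
import cv2
import numpy as np

def correct_posture(bev, obj_box, bottom_xy, alpha_deg):
    """Cut the obstacle image out of a bird's-eye view image and rotate it by alpha.

    bev       : bird's-eye view image (e.g. BEV_4)
    obj_box   : (x, y, w, h) bounding box of the obstacle image (e.g. OBJ_4)
    bottom_xy : pixel coordinates of the obstacle bottom, used as the rotation pivot
    alpha_deg : angle difference between connection line CL_4 and boundary line BL
    """
    x, y, w, h = obj_box
    patch = bev[y:y + h, x:x + w].copy()
    pivot = (bottom_xy[0] - x, bottom_xy[1] - y)       # pivot in patch-local coordinates
    rot = cv2.getRotationMatrix2D(pivot, alpha_deg, 1.0)
    return cv2.warpAffine(patch, rot, (w, h))

# Assumed example: a 40 x 120 px obstacle patch rotated by 25 degrees about its bottom.
bev_4 = np.zeros((480, 640, 3), dtype=np.uint8)
selected = correct_posture(bev_4, obj_box=(300, 200, 40, 120), bottom_xy=(320, 318), alpha_deg=25.0)
```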
  • Next, the images outside the boundary line BL are deleted from each of the bird's-eye view images BEV_1 to BEV_4, and the bird's-eye view images BEV_1 to BEV_4 remaining after the deletion are combined with one another (see FIGS. 5 and 6).
  • The selected image S_34 is then pasted onto the combined image, thereby completing the all-around bird's-eye view image.
  • As a result, the obstacle image OBJ_3 or OBJ_4 is reproduced in the overlapping area OL_34 of the completed all-around bird's-eye view image, as shown in the lower part of FIG. 12, the lower part of FIG. 13, or the lower part of FIG. 14.
  • The CPU 12p executes the processing described above in accordance with the flowcharts shown in FIGS. 16 to 21.
  • The control program corresponding to these flowcharts is stored in the flash memory 14 (see FIG. 2).
  • In step S1 shown in FIG. 16, the scene images P_1 to P_4 are captured from the cameras C_1 to C_4.
  • In step S3, the bird's-eye view images BEV_1 to BEV_4 are created based on the captured scene images P_1 to P_4.
  • The created bird's-eye view images BEV_1 to BEV_4 are stored in the work area W1.
  • In step S5, obstacle detection processing is executed.
  • In step S7, the all-around bird's-eye view image is created based on the bird's-eye view images BEV_1 to BEV_4 created in step S3.
  • The created all-around bird's-eye view image is stored in the work area W2.
  • A driving support image based on the all-around bird's-eye view image stored in the work area W2 is then displayed on the display device 16.
  • The obstacle detection processing in step S5 is executed in accordance with a subroutine shown in FIG. 17.
  • First, the variables M and N are set to "1" and "2", respectively.
  • The variable M is updated in the order "1" → "2" → "3" → "4", and the variable N is updated in the order "2" → "3" → "4" → "1".
  • In step S13, a difference image for the common field of view VW_MN is created based on the bird's-eye view image BEV_M, which is based on the output of the camera C_M, and the bird's-eye view image BEV_N, which is based on the output of the camera C_N.
  • In step S15, it is determined, based on the difference image created in step S13, whether an obstacle exists in the common field of view VW_MN.
  • If no obstacle is detected in the common field of view VW_MN, the flag FLG_MN is set to "0" in step S17 and the process proceeds to step S23. If an obstacle is detected in the common field of view VW_MN, the flag FLG_MN is set to "1" in step S19, obstacle image processing is executed in step S21, and the process then proceeds to step S23.
  • In step S23, it is determined whether or not the variable M indicates "4". If the variable M is less than "4", the variables M and N are updated in step S25 and the process returns to step S13; if the variable M is "4", the process returns to the routine of the upper hierarchy. A sketch of the difference-image test is given below.
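A minimal sketch of the difference-image test of steps S13 and S15 might look as follows. The threshold, the minimum area, and the overlap mask are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def obstacle_in_common_view(bev_m, bev_n, overlap_mask, diff_thresh=40, min_area=200):
    """Steps S13/S15 sketch: difference image over the common field of view VW_MN.

    bev_m, bev_n : bird's-eye view images BEV_M and BEV_N warped into the same frame
    overlap_mask : boolean mask of the overlapping area OL_MN in that frame
    """
    gray_m = cv2.cvtColor(bev_m, cv2.COLOR_BGR2GRAY)
    gray_n = cv2.cvtColor(bev_n, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_m, gray_n)     # step S13: difference image of the two views
    diff[~overlap_mask] = 0                # only the common field of view matters
    changed = diff > diff_thresh           # a raised object projects differently in the
    return int(changed.sum()) >= min_area  # two views, so large disparities remain (S15)

# Assumed example frames and mask; FLG_MN would be set according to the result.
bev_m = np.zeros((400, 400, 3), np.uint8)
bev_n = np.zeros((400, 400, 3), np.uint8)
mask = np.zeros((400, 400), dtype=bool)
mask[250:, 250:] = True
flag_mn = 1 if obstacle_in_common_view(bev_m, bev_n, mask) else 0
```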
  • The obstacle image processing in step S21 shown in FIG. 17 is executed in accordance with a subroutine shown in FIG. 18.
  • First, in step S31, the obstacle image OBJ_M is cut out from the bird's-eye view image BEV_M, and the obstacle image OBJ_N is cut out from the bird's-eye view image BEV_N.
  • In step S33, it is determined whether or not the coordinates corresponding to the bottom of the obstacle are on the camera C_M side of the adjustment area BRG_MN.
  • In step S35, it is determined whether or not the coordinates corresponding to the bottom of the obstacle are on the camera C_N side of the adjustment area BRG_MN.
  • If "YES" in step S33, the process proceeds to step S37, where the obstacle image OBJ_M is determined as the selected image S_MN. If "NO" in step S33 but "YES" in step S35, the process proceeds to step S39, where the obstacle image OBJ_N is determined as the selected image S_MN. When the processing of step S37 or S39 is completed, the process returns to the routine of the upper hierarchy.
  • If "NO" in both steps S33 and S35, a connection line CL_N connecting the camera C_N and the bottom of the obstacle is defined on the bird's-eye view image BEV_N in step S41.
  • In step S43, the difference in angle between the connection line CL_N defined in step S41 and the boundary line BL_MN is calculated as "α".
  • In step S45, the posture of the obstacle image OBJ_N is corrected with reference to the calculated difference α.
  • In step S47, the obstacle image OBJ_N with the corrected posture is determined as the selected image S_MN.
  • In step S51, the variable M is set to "1".
  • The variable M is updated in the order "1" → "2" → "3" → "4".
  • In step S53, the image outside the boundary line is deleted from the bird's-eye view image BEV_M, and in step S55 it is determined whether or not the variable M has reached "4". If the variable M is less than "4", the variable M is updated in step S57 and the process returns to step S53; if the variable M indicates "4", the process proceeds to step S59, where the bird's-eye view images BEV_1 to BEV_4 processed in step S53 are joined to one another by coordinate transformation.
  • In step S61, the variables M and N are set to "1" and "2", respectively.
  • The variable M is updated in the order "1" → "2" → "3" → "4", and the variable N is updated in the order "2" → "3" → "4" → "1".
  • In step S63, it is determined whether or not the flag FLG_MN is "1". If the determination result is NO, the process proceeds directly to step S67; if it is YES, the selected image S_MN is pasted in step S65 onto the image created in step S59. In step S67, it is determined whether or not the variable M has reached "4". If NO, the variables M and N are updated in step S69 and the process returns to step S63; otherwise, the process returns to the routine of the upper hierarchy.
  • When attention is paid to the cameras C_3 and C_4 among the cameras C_1 to C_4, the cameras C_3 and C_4 partially share the common field of view VW_34 and capture the scene in directions obliquely intersecting the road surface.
  • The CPU 12p takes in the scene images P_3 and P_4 output from the cameras C_3 and C_4, respectively (S1), and creates a bird's-eye view image of the road surface based on the captured scene images P_3 and P_4 (S3, S7).
  • When an obstacle exists in the common field of view VW_34, the CPU 12p also adjusts the posture of the obstacle image appearing in the bird's-eye view image so that it follows the reference lines RF3b and RF4a, which extend parallel to each other from the cameras C_3 and C_4 toward the scene (S15, S21).
  • Each of the cameras C_3 and C_4 captures the scene in a direction obliquely intersecting the road surface. For this reason, when an obstacle exists on the road surface, the obstacle image appears in the bird's-eye view image in a posture of falling down along the line connecting the camera C_3 or C_4 and the obstacle. Moreover, the posture of the obstacle image changes as the positional relationship between the obstacle and the camera C_3 or C_4 changes.
  • According to this embodiment, however, the posture of the obstacle image appearing in the bird's-eye view image is adjusted so as to follow the reference lines RF3b and RF4a extending parallel to each other from the cameras C_3 and C_4 toward the scene. This suppresses changes in the posture of the obstacle image caused by changes in the positional relationship between the obstacle and the camera C_3 or C_4, and improves the visibility of the obstacle in the common field of view VW_34.
  • In this embodiment, the reference lines RFMb and RFNa are extended in parallel with the boundary line BL on the overlapping area OL_MN (M: 1 → 2 → 3 → 4, N: 2 → 3 → 4 → 1).
  • However, the reference lines RFMb and RFNa need only be parallel to each other and are not necessarily parallel to the boundary line BL on the overlapping area OL_MN.
  • For example, the reference lines RF3b and RF4a may be defined as shown in FIG. 22.
  • In such a case, it is preferable to calculate the shortest distance between the obstacle and the boundary line BL and to perform the posture adjustment as shown in FIG. 15.
  • In another embodiment, the size of the obstacle is detected in step S71, and in step S73 it is determined whether or not the detected size is below a threshold value THsz. If the determination result is NO, the process proceeds to step S19; if the determination result is YES, the process proceeds to step S21.
  • In this embodiment, an obstacle is detected in the common field of view VW_MN based on the difference image created for the common field of view VW_MN (see steps S13 to S15 in FIG. 17).
  • However, a stereo vision method or an optical flow method may be used instead, and an ultrasonic sensor, a millimeter-wave sensor, or a microwave sensor may also be used.
  • Also, in this embodiment, when the bottom of the obstacle belongs to the adjustment area BRG_MN, the posture of the obstacle image OBJ_N corresponding to the camera C_N is corrected, and the obstacle image OBJ_N with the corrected posture is determined as the selected image S_MN.
  • In other words, in this embodiment, the obstacle image cut out from the bird's-eye view image is pasted onto the all-around bird's-eye view image (see steps S31, S37, and S39 to S47 in FIG. 18, and step S65 in FIG. 21).
  • However, the obstacle image appearing in the scene image before conversion into the bird's-eye view image may instead be pasted onto the all-around bird's-eye view image.
  • The coordinate transformation described in this embodiment for generating a bird's-eye view image from a captured image is generally called perspective projection transformation.
  • Instead of this, a bird's-eye view image may be generated from the captured image by known planar projective transformation.
  • In planar projective transformation, a homography matrix (coordinate transformation matrix) for converting the coordinate values of the pixels of the captured image into the coordinate values of the corresponding pixels of the bird's-eye view image is obtained in advance at the stage of camera calibration processing.
  • The method of obtaining a homography matrix is known. When performing the image conversion, the captured image need only be converted into the bird's-eye view image based on the homography matrix. In either case, the captured image is converted into the bird's-eye view image by projecting the captured image onto the bird's-eye view image. A brief sketch of the homography-based approach is given below.
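For illustration, a homography-based conversion could be set up with OpenCV as sketched below. The point correspondences, image sizes, and file names are assumptions made for the example, not calibration data from the patent.

```python
import cv2
import numpy as np

# Calibration stage: point correspondences between the captured image and the
# bird's-eye view image, e.g. the corners of a rectangle marked on the road.
src_pts = np.float32([[420, 520], [860, 520], [980, 700], [300, 700]])   # captured image (px)
dst_pts = np.float32([[250, 100], [390, 100], [390, 300], [250, 300]])   # bird's-eye view (px)
H, _ = cv2.findHomography(src_pts, dst_pts)   # homography (coordinate transformation) matrix

# Conversion stage: warp each captured frame onto the road plane with the fixed matrix.
frame = cv2.imread("scene_p3.png")            # assumed file name for a scene image P_3
if frame is not None:
    bev = cv2.warpPerspective(frame, H, (640, 480))
    cv2.imwrite("bev_3.png", bev)
```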

Abstract

When focusing on a camera (C_3) and another camera (C_4) among cameras (C_1 to C_4), the camera (C_3) and the other camera (C_4) partially share a common field of view (VW_34) and capture the field in a direction that obliquely intersects the road surface. A CPU takes in a field image (P_3) and another field image (P_4) output from the camera (C_3) and the other camera (C_4), respectively, and creates a bird's-eye view image representing the field as captured from above the road surface, based on the field images (P_3, P_4) that were taken in. Further, when there is an obstacle in the common field of view (VW_34), the CPU adjusts the posture of the obstacle image on the bird's-eye view image based on the positional relationship of the camera (C_3) and the other camera (C_4) with the obstacle.

Description

Image processing device
The present invention relates to an image processing apparatus, and more particularly to an image processing apparatus that creates a bird's-eye view image representing a scene as seen from above a reference plane, based on a scene image output from a camera that captures the scene in a direction obliquely intersecting the reference plane.
An example of this type of device is disclosed in Patent Document 1. According to this background art, four cameras are mounted on a vehicle so that the fields of view of two adjacent cameras partially overlap. The surroundings of the vehicle are captured by these cameras, and the scene image output from each camera is converted into a bird's-eye view image.
Boundary lines are assigned to the partial images corresponding to the common fields of view in the converted bird's-eye view images. The four bird's-eye view images corresponding to the four cameras are combined with one another through a trimming process that refers to these boundary lines. However, when an obstacle (three-dimensional object) is detected in a common field of view, the position of the boundary line is changed so as to avoid the obstacle image, thereby maintaining the quality of the obstacle image.
Patent Document 1: JP 2007-104373 A
However, when an obstacle is captured in the field of view common to two adjacent cameras, the nature of bird's-eye view images means that the posture of the obstacle image appearing in the bird's-eye view image corresponding to one camera differs from the posture of the obstacle image appearing in the bird's-eye view image corresponding to the other camera. Consequently, if the camera used to render the obstacle in the combined bird's-eye view image is switched when the boundary line is changed, the posture of the obstacle image changes discontinuously.
Therefore, a main object of the present invention is to provide an image processing apparatus capable of improving the visibility of a three-dimensional object in a common field of view.
An image processing apparatus according to the present invention includes: a plurality of cameras, each of which partially shares a common field of view and captures a scene in a direction obliquely intersecting a reference plane; creation means for creating a bird's-eye view image with respect to the reference plane based on a plurality of scene images respectively output from the plurality of cameras; determination means for determining the positional relationship between a three-dimensional object existing in the common field of view and the plurality of cameras; and adjustment means for adjusting the posture of the three-dimensional object image on the bird's-eye view image created by the creation means, based on the determination result of the determination means.
Preferably, the determination means determines whether or not the three-dimensional object is sandwiched between a plurality of reference lines extending in a predetermined manner from the plurality of cameras toward the scene, and the adjustment means adjusts the posture of the three-dimensional object image so that it follows the plurality of reference lines.
In one aspect, the predetermined manner corresponds to a manner in which the plurality of reference lines are parallel to one another.
In another aspect, the three-dimensional object image corresponds to a bird's-eye view image of the three-dimensional object captured by a reference camera, which is one of the plurality of cameras, and the adjustment means includes definition means for defining a line connecting the reference camera and the three-dimensional object when the three-dimensional object belongs to the field of view sandwiched between the plurality of reference lines, and correction means for correcting the posture of the three-dimensional object image with reference to the angular difference between the line defined by the definition means and the reference line.
More preferably, the adjustment means includes selection means for selecting, as the reference camera, the camera closer to the three-dimensional object among the plurality of cameras when the three-dimensional object falls outside the field of view sandwiched between the plurality of reference lines.
Preferably, restriction means is further provided for restricting the adjustment processing of the adjustment means when the three-dimensional object meets a specific condition.
According to the present invention, the posture of the three-dimensional object image on the bird's-eye view image is adjusted based on the positional relationship between the three-dimensional object existing in the common field of view and the plurality of cameras. This suppresses changes in the posture of the three-dimensional object image caused by changes in the positional relationship between the three-dimensional object and the cameras, and improves the visibility of the three-dimensional object in the common field of view.
The above object, other objects, features, and advantages of the present invention will become more apparent from the following detailed description of embodiments with reference to the drawings.
FIG. 1 is a block diagram showing the basic configuration of the present invention.
FIG. 2 is a block diagram showing the configuration of one embodiment of the present invention.
FIG. 3 is an illustrative view showing the fields of view captured by the plurality of cameras attached to the vehicle.
FIG. 4(A) is an illustrative view showing an example of a bird's-eye view image based on the output of the front camera, FIG. 4(B) is an illustrative view showing an example of a bird's-eye view image based on the output of the right camera, FIG. 4(C) is an illustrative view showing an example of a bird's-eye view image based on the output of the left camera, and FIG. 4(D) is an illustrative view showing an example of a bird's-eye view image based on the output of the rear camera.
FIG. 5 is an illustrative view showing part of the operation of creating the all-around bird's-eye view image.
FIG. 6 is an illustrative view showing an example of the created all-around bird's-eye view image.
FIG. 7 is an illustrative view showing an example of the driving assistance image displayed by the display device.
FIG. 8 is an illustrative view showing the angle of a camera attached to the vehicle.
FIG. 9 is an illustrative view showing the relationship between the camera coordinate system, the coordinate system of the imaging surface, and the world coordinate system.
FIG. 10 is an illustrative view showing an example of the vehicle and an obstacle existing around it.
FIG. 11 is an illustrative view showing another part of the operation of creating the driving assistance image.
FIG. 12 is an illustrative view showing an example of the operation of reproducing an obstacle image.
FIG. 13 is an illustrative view showing another example of the operation of reproducing an obstacle image.
FIG. 14 is an illustrative view showing still another example of the operation of reproducing an obstacle image.
FIG. 15(A) is an illustrative view showing the reproduced state of an obstacle image before posture adjustment, and FIG. 15(B) is an illustrative view showing the reproduced state of the obstacle image after posture adjustment.
FIG. 16 is a flowchart showing part of the operation of the CPU applied to the embodiment in FIG. 2.
FIG. 17 is a flowchart showing another part of the operation of the CPU applied to the embodiment in FIG. 2.
FIG. 18 is a flowchart showing still another part of the operation of the CPU applied to the embodiment in FIG. 2.
FIG. 19 is a flowchart showing yet another part of the operation of the CPU applied to the embodiment in FIG. 2.
FIG. 20 is a flowchart showing a further part of the operation of the CPU applied to the embodiment in FIG. 2.
FIG. 21 is a flowchart showing a still further part of the operation of the CPU applied to the embodiment in FIG. 2.
FIG. 22 is an illustrative view showing another example of the allocation of the adjustment areas.
FIG. 23 is an illustrative view showing still another example of the allocation of the adjustment areas.
FIG. 24 is an illustrative view showing yet another example of the allocation of the adjustment areas.
FIG. 25 is a flowchart showing part of the operation of the CPU applied to another embodiment.
Referring to FIG. 1, the image processing apparatus of the present invention is basically configured as follows. Each of the plurality of cameras 1, 1, ... partially shares a common field of view and captures a scene in a direction that obliquely intersects the reference plane. The creation means 2 creates a bird's-eye view image with respect to the reference plane based on the plurality of scene images respectively output from the plurality of cameras 1, 1, .... The determination means 3 determines the positional relationship between a three-dimensional object existing in the common field of view and the plurality of cameras 1, 1, .... The adjustment means 4 adjusts the posture of the three-dimensional object image on the bird's-eye view image created by the creation means 2, based on the determination result of the determination means 3.
The camera 1 captures the scene in a direction that obliquely crosses the reference plane. For this reason, when a three-dimensional object exists on the reference plane, its image appears in the bird's-eye view image in a posture of falling down along the line connecting the three-dimensional object and the camera 1. Moreover, the posture of the three-dimensional object image changes as the positional relationship between the three-dimensional object and the camera 1 changes.
According to the present invention, the posture of the three-dimensional object image on the bird's-eye view image is adjusted based on the positional relationship between the three-dimensional object existing in the common field of view and the plurality of cameras 1, 1, .... This suppresses changes in the posture of the three-dimensional object image caused by changes in that positional relationship and improves the visibility of the three-dimensional object in the common field of view.
The steering support device 10 of this embodiment, shown in FIG. 2, includes four cameras C_1 to C_4. The cameras C_1 to C_4 output scene images P_1 to P_4 every 1/30 second in synchronization with a common timing signal. The output scene images P_1 to P_4 are supplied to the image processing circuit 12.
Referring to FIG. 3, the camera C_1 is installed at the center of the front of the vehicle 100 in a posture in which its optical axis extends obliquely downward toward the front of the vehicle 100. The camera C_2 is installed at the center of the right side of the vehicle 100 in a posture in which its optical axis extends obliquely downward toward the right of the vehicle 100. The camera C_3 is installed at the upper rear of the vehicle 100 in a posture in which its optical axis extends obliquely downward toward the rear of the vehicle 100. The camera C_4 is installed at the center of the left side of the vehicle 100 in a posture in which its optical axis extends obliquely downward toward the left of the vehicle 100. The scene around the vehicle 100 is thus captured by the cameras C_1 to C_4 from directions obliquely intersecting the road surface.
The camera C_1 has a field of view VW_1 covering the front of the vehicle 100, the camera C_2 has a field of view VW_2 covering the right of the vehicle 100, the camera C_3 has a field of view VW_3 covering the rear of the vehicle 100, and the camera C_4 has a field of view VW_4 covering the left of the vehicle 100. The fields of view VW_1 and VW_2 share a common field of view VW_12, the fields of view VW_2 and VW_3 share a common field of view VW_23, the fields of view VW_3 and VW_4 share a common field of view VW_34, and the fields of view VW_4 and VW_1 share a common field of view VW_41.
Returning to FIG. 2, the CPU 12p provided in the image processing circuit 12 generates the bird's-eye view image BEV_1 shown in FIG. 4(A) based on the scene image P_1 output from the camera C_1, and generates the bird's-eye view image BEV_2 shown in FIG. 4(B) based on the scene image P_2 output from the camera C_2. The CPU 12p also generates the bird's-eye view image BEV_3 shown in FIG. 4(C) based on the scene image P_3 output from the camera C_3, and generates the bird's-eye view image BEV_4 shown in FIG. 4(D) based on the scene image P_4 output from the camera C_4.
The bird's-eye view image BEV_1 corresponds to an image captured by a virtual camera looking down on the field of view VW_1 in the vertical direction, and the bird's-eye view image BEV_2 corresponds to an image captured by a virtual camera looking down on the field of view VW_2 in the vertical direction. Likewise, the bird's-eye view images BEV_3 and BEV_4 correspond to images captured by virtual cameras looking down on the fields of view VW_3 and VW_4, respectively, in the vertical direction.
 図4(A)~図4(D)によれば、鳥瞰画像BEV_1は鳥瞰座標系X1・Y1を有し、鳥瞰画像BEV_2は鳥瞰座標系X2・Y2を有し、鳥瞰画像BEV_3は鳥瞰座標系X3・Y3を有し、そして鳥瞰画像BEV_4は鳥瞰座標系X4・Y4を有する。このような鳥瞰画像BEV_1~BEV_4は、メモリ12mのワークエリアW1に保持される。 According to FIGS. 4A to 4D, the bird's-eye image BEV_1 has a bird's-eye coordinate system X1 and Y1, the bird's-eye image BEV_2 has a bird's-eye coordinate system X2 and Y2, and the bird's-eye image BEV_3 has a bird's-eye coordinate system. The bird's-eye view image BEV_4 has a bird's-eye coordinate system X4 / Y4. Such bird's-eye images BEV_1 to BEV_4 are held in the work area W1 of the memory 12m.
 CPU12pは続いて、鳥瞰画像BEV_1~BEV_4の各々から境界線BLよりも外方の一部の画像を削除し、削除の後に残った鳥瞰画像BEV_1~BEV_4(図5参照)を回転/移動処理によって互いに結合する。この結果、図6に示す全周鳥瞰画像がメモリ12mのワークエリアW2内に得られる。 Subsequently, the CPU 12p deletes a part of the images outside the boundary line BL from each of the bird's-eye images BEV_1 to BEV_4, and rotates / moves the bird's-eye images BEV_1 to BEV_4 (see FIG. 5) remaining after the deletion. Join each other. As a result, the all-around bird's-eye view image shown in FIG. 6 is obtained in the work area W2 of the memory 12m.
 図6において、斜線で示す重複エリアOL_12が共通視野VW_12を再現するエリアに相当し、斜線で示す重複エリアOL_23が共通視野VW_23を再現するエリアに相当する。また、斜線で示す重複エリアOL_34が共通視野VW_34を再現するエリアに相当し、斜線で示す重複エリアOL_41が共通視野VW_41を再現するエリアに相当する。 In FIG. 6, the overlapping area OL_12 indicated by diagonal lines corresponds to an area that reproduces the common visual field VW_12, and the overlapping area OL_23 indicated by diagonal lines corresponds to an area that reproduces the common visual field VW_23. Further, the overlapping area OL_34 indicated by hatching corresponds to an area for reproducing the common visual field VW_34, and the overlapping area OL_41 indicated by hatching corresponds to an area for reproducing the common visual field VW_41.
The display device 16 installed at the driver's seat extracts, from the all-around bird's-eye view image in the work area W2, a partial image D1 in which the overlap areas OL_12 to OL_41 are located at the four corners, and pastes a vehicle image D2 imitating the top of the vehicle 100 at the center of the extracted image D1. As a result, the driving support image shown in FIG. 7 is displayed on the monitor screen.
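The compositing flow described above (clip each bird's-eye view at the boundary line BL, rotate/translate it into a common canvas, then paste the vehicle image D2 at the center) can be pictured with a short sketch. The following Python/OpenCV fragment is only an illustration under assumed inputs; the masks, the per-camera rigid transforms, the canvas size, and all function and parameter names are placeholders of this sketch, not values or APIs taken from the patent.

import cv2
import numpy as np

def compose_all_around(bev_images, boundary_masks, rigid_transforms, canvas_size, car_icon):
    """Combine four bird's-eye views into one all-around image (illustrative only).

    bev_images       : list of 4 bird's-eye views BEV_1..BEV_4 (H x W x 3 arrays)
    boundary_masks   : list of 4 binary masks, 255 inside the boundary line BL
    rigid_transforms : list of 4 2x3 rotation/translation matrices into the canvas
    canvas_size      : (width, height) of the all-around bird's-eye view image
    car_icon         : small image D2 pasted at the canvas center
    """
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), np.uint8)
    for bev, mask, rt in zip(bev_images, boundary_masks, rigid_transforms):
        clipped = cv2.bitwise_and(bev, bev, mask=mask)         # drop pixels outside BL
        warped = cv2.warpAffine(clipped, rt, canvas_size)      # rotate/translate into place
        warped_mask = cv2.warpAffine(mask, rt, canvas_size)
        canvas[warped_mask > 0] = warped[warped_mask > 0]
    # paste the vehicle image D2 at the center of the composite
    h, w = car_icon.shape[:2]
    cy, cx = canvas_size[1] // 2, canvas_size[0] // 2
    canvas[cy - h // 2:cy - h // 2 + h, cx - w // 2:cx - w // 2 + w] = car_icon
    return canvas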
Next, the manner in which the bird's-eye view images BEV_1 to BEV_4 are created will be described. Since the bird's-eye view images BEV_1 to BEV_4 are all created in the same manner, the creation of the bird's-eye view image BEV_3 is described as representative.
Referring to FIG. 8, camera C_3 is disposed at the rear of the vehicle 100 so as to point obliquely rearward and downward. If the depression angle of camera C_3 is denoted θd, the angle θ shown in FIG. 7 corresponds to 180° − θd. The angle θ is defined in the range 90° < θ < 180°.
FIG. 9 shows the relationship among the camera coordinate system X·Y·Z, the coordinate system Xp·Yp of the imaging surface S of camera C_3, and the world coordinate system Xw·Yw·Zw. The camera coordinate system X·Y·Z is a three-dimensional coordinate system whose coordinate axes are the X, Y, and Z axes. The coordinate system Xp·Yp is a two-dimensional coordinate system whose coordinate axes are the Xp and Yp axes. The world coordinate system Xw·Yw·Zw is a three-dimensional coordinate system whose coordinate axes are the Xw, Yw, and Zw axes.
In the camera coordinate system X·Y·Z, the optical center of camera C_3 is taken as the origin O, the Z axis is defined in the optical-axis direction, the X axis is defined in the direction orthogonal to the Z axis and parallel to the road surface, and the Y axis is defined in the direction orthogonal to both the Z axis and the X axis. In the coordinate system Xp·Yp of the imaging surface S, the center of the imaging surface S is taken as the origin, the Xp axis is defined in the horizontal direction of the imaging surface S, and the Yp axis is defined in the vertical direction of the imaging surface S.
In the world coordinate system Xw·Yw·Zw, the intersection of the road surface with a vertical line passing through the origin O of the camera coordinate system X·Y·Z is taken as the origin Ow, the Yw axis is defined in the direction perpendicular to the road surface, the Xw axis is defined in the direction parallel to the X axis of the camera coordinate system X·Y·Z, and the Zw axis is defined in the direction orthogonal to the Xw and Yw axes. The distance from the Xw axis to the X axis is h, and the obtuse angle formed by the Zw axis and the Z axis corresponds to the angle θ described above.
When coordinates in the camera coordinate system X·Y·Z are written (x, y, z), x, y, and z denote the X-axis, Y-axis, and Z-axis components in the camera coordinate system X·Y·Z, respectively. When coordinates in the coordinate system Xp·Yp of the imaging surface S are written (xp, yp), xp and yp denote the Xp-axis and Yp-axis components in the coordinate system Xp·Yp of the imaging surface S, respectively. When coordinates in the world coordinate system Xw·Yw·Zw are written (xw, yw, zw), xw, yw, and zw denote the Xw-axis, Yw-axis, and Zw-axis components in the world coordinate system Xw·Yw·Zw, respectively.
A conversion formula between the coordinates (x, y, z) of the camera coordinate system X·Y·Z and the coordinates (xw, yw, zw) of the world coordinate system Xw·Yw·Zw is expressed by Equation 1.
[Equation 1: reproduced only as an image (JPOXMLDOC01-appb-M000001) in the original publication]
Here, letting f denote the focal length of camera C_3, a conversion formula between the coordinates (xp, yp) of the coordinate system Xp·Yp of the imaging surface S and the coordinates (x, y, z) of the camera coordinate system X·Y·Z is expressed by Equation 2.
[Equation 2: reproduced only as an image (JPOXMLDOC01-appb-M000002) in the original publication]
Equation 3 is obtained from Equations 1 and 2. Equation 3 is a conversion formula between the coordinates (xp, yp) of the coordinate system Xp·Yp of the imaging surface S and the coordinates (xw, zw) of the two-dimensional road-surface coordinate system Xw·Zw.
[Equation 3: reproduced only as an image (JPOXMLDOC01-appb-M000003) in the original publication]
A bird's-eye coordinate system X3·Y3 is also defined as the coordinate system of the bird's-eye view image BEV_3 shown in FIG. 4(C). The bird's-eye coordinate system X3·Y3 is a two-dimensional coordinate system whose coordinate axes are the X3 and Y3 axes. When coordinates in the bird's-eye coordinate system X3·Y3 are written (x3, y3), the position of each pixel forming the bird's-eye view image BEV_3 is represented by the coordinates (x3, y3), where x3 and y3 denote the X3-axis and Y3-axis components in the bird's-eye coordinate system X3·Y3, respectively.
The projection from the two-dimensional coordinate system Xw·Zw representing the road surface onto the bird's-eye coordinate system X3·Y3 corresponds to a so-called parallel projection. Letting H denote the height of the virtual camera, that is, of the virtual viewpoint, a conversion formula between the coordinates (xw, zw) of the two-dimensional coordinate system Xw·Zw and the coordinates (x3, y3) of the bird's-eye coordinate system X3·Y3 is expressed by Equation 4. The height H of the virtual camera is determined in advance.
[Equation 4: reproduced only as an image (JPOXMLDOC01-appb-M000004) in the original publication]
Further, Equation 5 is obtained from Equation 4, Equation 6 is obtained from Equations 5 and 3, and Equation 7 is obtained from Equation 6. Equation 7 is the conversion formula for converting the coordinates (xp, yp) of the coordinate system Xp·Yp of the imaging surface S into the coordinates (x3, y3) of the bird's-eye coordinate system X3·Y3.
[Equations 5 to 7: reproduced only as images (JPOXMLDOC01-appb-M000005 to M000007) in the original publication]
The coordinates (xp, yp) of the coordinate system Xp·Yp of the imaging surface S represent the coordinates of the object scene image P_3 captured by camera C_3. The object scene image P_3 from camera C_3 is therefore converted into the bird's-eye view image BEV_3 by using Equation 7. In practice, the object scene image P_3 is first subjected to image processing such as lens-distortion correction and is then converted into the bird's-eye view image BEV_3 by Equation 7.
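Equations 1 to 7 are reproduced only as images in the publication, so their exact matrix forms are not restated here. As a rough illustration of the kind of mapping they describe, the sketch below resamples a camera image onto the road plane with a generic pinhole model: each bird's-eye pixel is mapped to a road point (xw, zw), projected through a camera of focal length f mounted at height h with depression angle θd, and sampled from the source image. The sign conventions, the f/H output scale echoing the parallel projection of Equation 4, and the function and parameter names are assumptions of this illustration, not the patent's own formulas.

import numpy as np
import cv2

def birds_eye_from_camera(img, f, h, theta_d, H_virtual, out_size, scale=None):
    """Inverse-perspective sketch: resample a camera image onto the road plane.

    img       : input object scene image (e.g. P_3), row 0 at the top
    f         : focal length of the camera in pixels
    h         : camera height above the road surface
    theta_d   : depression angle of the optical axis below the horizontal (radians)
    H_virtual : height of the virtual bird's-eye camera (sets the output scale f/H)
    out_size  : (width, height) of the bird's-eye output in pixels
    """
    W3, H3 = out_size
    cy, cx = (img.shape[0] - 1) / 2.0, (img.shape[1] - 1) / 2.0
    s = scale if scale is not None else f / H_virtual      # pixels per unit of road length

    # Road-plane coordinates (xw lateral, zw forward) for every output pixel.
    cols, rows = np.meshgrid(np.arange(W3), np.arange(H3))
    xw = (cols - W3 / 2.0) / s
    zw = (H3 - rows) / s                                    # bottom row is closest to the camera

    # Project each road point through a pinhole camera tilted down by theta_d
    # and placed at height h above the road (standard inverse perspective mapping geometry).
    denom = h * np.sin(theta_d) + zw * np.cos(theta_d)      # depth along the optical axis
    u = cx + f * xw / denom
    v = cy - f * (zw * np.sin(theta_d) - h * np.cos(theta_d)) / denom

    map_x = u.astype(np.float32)
    map_y = v.astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)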
Referring to FIGS. 10 and 11, when an obstacle (three-dimensional object) 200 exists in the common visual field VW_34 to the left rear of the vehicle 100, the obstacle captured by camera C_3 appears in the bird's-eye view image BEV_3 as the obstacle image OBJ_3, and the obstacle captured by camera C_4 appears in the bird's-eye view image BEV_4 as the obstacle image OBJ_4.
The postures of the obstacle images OBJ_3 and OBJ_4 differ from each other because the viewpoint of camera C_3 differs from that of camera C_4. That is, the obstacle image OBJ_3 is reproduced so as to fall down along the connecting line CL_3 joining camera C_3 to the bottom of the obstacle 200, whereas the obstacle image OBJ_4 is reproduced so as to fall down along the connecting line CL_4 joining camera C_4 to the bottom of the obstacle 200.
When such an obstacle 200 is to be displayed on the all-around bird's-eye view image, the CPU 12p executes the following processing.
As a premise for this processing, the reference line RF1a extends from camera C_1 toward the common visual field VW_41 in parallel with the boundary line BL on the overlap area OL_41. The reference line RF1b extends from camera C_1 toward the common visual field VW_12 in parallel with the boundary line BL on the overlap area OL_12. The reference line RF2a extends from camera C_2 toward the common visual field VW_12 in parallel with the boundary line BL on the overlap area OL_12. The reference line RF2b extends from camera C_2 toward the common visual field VW_23 in parallel with the boundary line BL on the overlap area OL_23.
The reference line RF3a extends from camera C_3 toward the common visual field VW_23 in parallel with the boundary line BL on the overlap area OL_23. The reference line RF3b extends from camera C_3 toward the common visual field VW_34 in parallel with the boundary line BL on the overlap area OL_34. The reference line RF4a extends from camera C_4 toward the common visual field VW_34 in parallel with the boundary line BL on the overlap area OL_34. The reference line RF4b extends from camera C_4 toward the common visual field VW_41 in parallel with the boundary line BL on the overlap area OL_41.
The adjustment area BRG_12 is assigned to the overlap area OL_12 as a band extending toward the right front of the vehicle 100, and the adjustment area BRG_23 is assigned to the overlap area OL_23 as a band extending toward the right rear of the vehicle 100. The adjustment area BRG_34 is assigned to the overlap area OL_34 as a band extending toward the left rear of the vehicle 100, and the adjustment area BRG_41 is assigned to the overlap area OL_41 as a band extending toward the left front of the vehicle 100.
The widthwise edges of the adjustment area BRG_12 extend in parallel with the reference lines RF1b and RF2a, and the widthwise edges of the adjustment area BRG_23 extend in parallel with the reference lines RF2b and RF3a. Likewise, the widthwise edges of the adjustment area BRG_34 extend in parallel with the reference lines RF3b and RF4a, and the widthwise edges of the adjustment area BRG_41 extend in parallel with the reference lines RF4b and RF1a.
To reproduce the obstacle 200 on the all-around bird's-eye view image, it is first determined, for each of the common visual fields VW_12 to VW_41, whether any obstacle exists there. The obstacle 200 exists in the common visual field VW_34, and the corresponding obstacle images OBJ_3 and OBJ_4 are reproduced in the bird's-eye view images BEV_3 and BEV_4, respectively. The obstacle image OBJ_3 is therefore cut out from the bird's-eye view image BEV_3, and the obstacle image OBJ_4 is cut out from the bird's-eye view image BEV_4. It is then determined where the coordinates corresponding to the bottom of the obstacle 200 lie relative to the adjustment area BRG_34.
When the coordinates corresponding to the bottom of the obstacle 200 lie on the camera C_3 side of the adjustment area BRG_34, as shown in the upper part of FIG. 12, the obstacle image OBJ_3 is determined as the selected image S_34 for the common visual field VW_34. When the coordinates corresponding to the bottom of the obstacle 200 lie on the camera C_4 side of the adjustment area BRG_34, as shown in the upper part of FIG. 13, the obstacle image OBJ_4 is determined as the selected image S_34 for the common visual field VW_34.
Further, as shown in FIG. 14, when the coordinates corresponding to the bottom of the obstacle 200 lie on the adjustment area BRG_34, the connecting line CL_4 joining camera C_4 to the bottom of the obstacle 200 is defined on the bird's-eye view image BEV_4. The difference between the angle of the defined connecting line CL_4 and the angle of the boundary line BL assigned to the common visual field VW_34 (that is, the angle of the reference lines RF3b and RF4a) is then calculated as Δθ.
The posture of the obstacle image OBJ_4 is corrected, with reference to the calculated difference Δθ, so as to lie along the boundary line BL (that is, along the reference lines RF3b and RF4a). As a result of this correction, the direction in which the obstacle image OBJ_4 falls coincides with the length direction of the boundary line BL (the reference lines RF3b and RF4a), as shown in the lower part of FIG. 14. The obstacle image OBJ_4 having the corrected posture is then determined as the selected image S_34 for the common visual field VW_34.
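One way to picture this posture correction is as a rotation of the obstacle image about the obstacle's bottom point by the angle difference Δθ between the connecting line and the boundary line. The sketch below is only an illustration in Python/OpenCV; the angle and sign conventions and the function names are assumptions, and in an actual implementation only the cut-out obstacle region, not the whole view, would be rotated and pasted back.

import numpy as np
import cv2

def correct_obstacle_posture(bev, bottom_xy, camera_xy, boundary_angle_deg):
    """Rotate the obstacle image about its bottom point so that it falls
    along the boundary line direction (illustrative sketch only).

    bev                : bird's-eye view image containing the obstacle image (e.g. BEV_4)
    bottom_xy          : (x, y) coordinates of the obstacle bottom in that image
    camera_xy          : (x, y) position of the corresponding camera in the same coordinates
    boundary_angle_deg : direction of the boundary line BL (= reference lines RF3b/RF4a)
    """
    # Angle of the connecting line CL joining the camera to the obstacle bottom.
    dx = bottom_xy[0] - camera_xy[0]
    dy = bottom_xy[1] - camera_xy[1]
    cl_angle_deg = np.degrees(np.arctan2(dy, dx))

    # Difference between the connecting line and the boundary line (delta theta).
    delta_theta = boundary_angle_deg - cl_angle_deg

    # Rotate about the obstacle bottom; the sign of the angle depends on whether the
    # image y axis points downward, so it is a convention of this sketch.
    rot = cv2.getRotationMatrix2D(tuple(map(float, bottom_xy)), -delta_theta, 1.0)
    return cv2.warpAffine(bev, rot, (bev.shape[1], bev.shape[0]))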
When the coordinates corresponding to the bottom of the obstacle 200 move within the adjustment area BRG_34 as shown in FIG. 15(A), the posture of the obstacle image OBJ_4 is corrected as shown in FIG. 15(B).
In creating the all-around bird's-eye view image, the image portions outside the boundary line BL are deleted from each of the bird's-eye view images BEV_1 to BEV_4, and the bird's-eye view images BEV_1 to BEV_4 remaining after the deletion are joined to one another (see FIGS. 5 and 6). The selected image S_34 is pasted on the joined image, whereby the all-around bird's-eye view image is completed. In the overlap area OL_34 of the completed all-around bird's-eye view image, the obstacle image OBJ_3 or OBJ_4 is reproduced as shown in the lower part of FIG. 12, FIG. 13, or FIG. 14.
Specifically, the CPU 12p executes processing according to the flowcharts shown in FIGS. 16 to 21. The control programs corresponding to these flowcharts are stored in the flash memory 14 (see FIG. 1).
In step S1 shown in FIG. 16, the object scene images P_1 to P_4 are captured from the cameras C_1 to C_4. In step S3, the bird's-eye view images BEV_1 to BEV_4 are created based on the captured object scene images P_1 to P_4 and are held in the work area W1. In step S5, obstacle detection processing is executed. In step S7, the all-around bird's-eye view image is created based on the bird's-eye view images BEV_1 to BEV_4 created in step S3 and is held in the work area W2. A driving support image based on the all-around bird's-eye view image in the work area W2 is displayed on the monitor screen of the display device 16. When the processing of step S7 is completed, the flow returns to step S1.
The obstacle detection processing in step S5 follows the subroutine shown in FIGS. 17 and 18. First, in step S11, the variables M and N are set to 1 and 2, respectively. The variable M is updated in the order 1 → 2 → 3 → 4, and the variable N is updated in the order 2 → 3 → 4 → 1.
In step S13, a difference image for the common visual field VW_MN is created based on the bird's-eye view image BEV_M derived from the output of camera C_M and the bird's-eye view image BEV_N derived from the output of camera C_N. In step S15, whether an obstacle exists in the common visual field VW_MN is determined based on the difference image created in step S13.
If no obstacle is detected in the common visual field VW_MN, the flag FLG_MN is set to 0 in step S17, and the flow proceeds to step S23. If an obstacle is detected in the common visual field VW_MN, the flag FLG_MN is set to 1 in step S19, obstacle image processing is executed in step S21, and the flow then proceeds to step S23.
In step S23, it is determined whether the variable M indicates 4. If M is less than 4, the variables M and N are updated in step S25 and the flow returns to step S13. If M equals 4, the flow returns to the upper-level routine.
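A rough sketch of the S11 to S25 loop is given below. The patent only states that an obstacle is detected from a difference image of the two bird's-eye views over the common visual field; the absolute-difference test, the pixel-count threshold, and the helper names used here are assumptions added for illustration.

import cv2
import numpy as np

CAMERA_PAIRS = [(1, 2), (2, 3), (3, 4), (4, 1)]   # (M, N) in the order the loop visits them

def detect_obstacles(bevs, overlap_masks, diff_thresh=30, count_thresh=500):
    """Per-overlap obstacle detection in the spirit of steps S11 to S25 (sketch only).

    bevs          : dict mapping camera index -> grayscale bird's-eye view image BEV_k
    overlap_masks : dict mapping (M, N) -> binary mask of the overlap area OL_MN
    """
    flags = {}
    for m, n in CAMERA_PAIRS:
        mask = overlap_masks[(m, n)]
        # S13: difference image of the two bird's-eye views over the common visual field.
        diff = cv2.absdiff(bevs[m], bevs[n])
        diff = cv2.bitwise_and(diff, diff, mask=mask)
        # S15: a flat road projects identically in both views, so large residuals
        # suggest a three-dimensional object standing in the common visual field.
        flags[(m, n)] = int(np.count_nonzero(diff > diff_thresh) > count_thresh)
    return flags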
The obstacle image processing in step S21 shown in FIG. 17 is executed according to the subroutine shown in FIGS. 18 and 19. First, in step S31, the obstacle image OBJ_M is cut out from the bird's-eye view image BEV_M, and the obstacle image OBJ_N is cut out from the bird's-eye view image BEV_N.
In step S33, it is determined whether the coordinates corresponding to the bottom of the obstacle lie on the camera C_M side of the adjustment area BRG_MN. In step S35, it is determined whether the coordinates corresponding to the bottom of the obstacle lie on the camera C_N side of the adjustment area BRG_MN.
If YES in step S33, the flow proceeds to step S37, and the obstacle image OBJ_M is determined as the selected image S_MN. If NO in step S33 but YES in step S35, the flow proceeds to step S39, and the obstacle image OBJ_N is determined as the selected image S_MN. When the processing of step S37 or S39 is completed, the flow returns to the upper-level routine.
If NO in both steps S33 and S35, the flow proceeds to step S41 and the subsequent steps. In step S41, the connecting line CL_N joining camera C_N to the bottom of the obstacle is defined on the bird's-eye view image BEV_N. In step S43, the angular difference between the connecting line CL_N defined in step S41 and the boundary line BL_MN is calculated as Δθ. In step S45, the posture of the obstacle image OBJ_N is corrected with reference to the calculated difference Δθ. In step S47, the obstacle image OBJ_N having the corrected posture is determined as the selected image S_MN. When the processing of step S47 is completed, the flow returns to the upper-level routine.
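The branch structure of steps S33 to S47 amounts to a three-way decision on where the obstacle bottom lies relative to the band-shaped adjustment area BRG_MN. The following sketch expresses that decision; the signed-distance test used to classify the bottom point against the two widthwise edges of the band, and all function names, are assumptions of this illustration.

def select_obstacle_image(bottom_xy, band_edge_m, band_edge_n, obj_m, obj_n, correct_posture):
    """Decide the selected image S_MN for one overlap area (sketch of steps S33 to S47).

    bottom_xy       : bird's-eye coordinates of the obstacle bottom
    band_edge_m     : signed-distance function of the BRG_MN edge nearer camera C_M
    band_edge_n     : signed-distance function of the BRG_MN edge nearer camera C_N
    obj_m, obj_n    : obstacle images OBJ_M and OBJ_N cut out in step S31
    correct_posture : callable applying the delta-theta correction of steps S41 to S45
    """
    if band_edge_m(bottom_xy) < 0:            # S33: bottom lies on the camera C_M side of the band
        return obj_m                          # S37
    if band_edge_n(bottom_xy) > 0:            # S35: bottom lies on the camera C_N side of the band
        return obj_n                          # S39
    return correct_posture(obj_n, bottom_xy)  # S41 to S47: bottom lies inside BRG_MN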
The all-around bird's-eye view creation processing in step S7 shown in FIG. 16 follows the subroutine shown in FIGS. 20 and 21. First, in step S51, the variable M is set to 1. As above, the variable M is updated in the order 1 → 2 → 3 → 4. In step S53, the image portion outside the boundary line is deleted from the bird's-eye view image BEV_M, and in step S55 it is determined whether the variable M has reached 4. If M is less than 4, the variable M is updated in step S57 and the flow returns to step S53. If M indicates 4, the flow proceeds to step S59, and the bird's-eye view images BEV_1 to BEV_4 processed in step S53 are joined to one another by coordinate transformation.
When the joined image is completed, the variables M and N are set to 1 and 2, respectively, in step S61. As above, the variable M is updated in the order 1 → 2 → 3 → 4, and the variable N is updated in the order 2 → 3 → 4 → 1.
In step S63, it is determined whether the flag FLG_MN is 1. If the determination result is NO, the flow proceeds directly to step S67; if YES, the selected image S_MN is pasted onto the joined image created in step S59. In step S67, it is determined whether the variable M has reached 4. If NO, the variables M and N are updated in step S69 and the flow returns to step S63; if YES, the flow returns to the upper-level routine.
As can be seen from the above description, focusing on cameras C_3 and C_4 among the cameras C_1 to C_4, cameras C_3 and C_4 partially share the common visual field VW_34 and each capture an object scene in a direction that obliquely intersects the road surface. The CPU 12p captures the object scene images P_3 and P_4 output from cameras C_3 and C_4 (S1), and creates bird's-eye view images with respect to the road surface based on the captured object scene images P_3 and P_4 (S3, S7). When an obstacle exists in the common visual field VW_34, the CPU 12p also adjusts the posture of the obstacle image appearing in the bird's-eye view image so that it lies along the reference lines RF3b and RF4a, which extend from cameras C_3 and C_4 toward the object scene in parallel with each other (S15, S21).
Each of cameras C_3 and C_4 captures an object scene in a direction that obliquely intersects the road surface. Therefore, when an obstacle stands on the road surface, the obstacle image appears in the bird's-eye view image in a posture in which it falls along the line joining camera C_3 or C_4 to the obstacle. Moreover, the posture of the obstacle image changes as the positional relationship between the three-dimensional object and camera C_3 or C_4 changes.
According to this embodiment, when an obstacle is detected in the common visual field VW_34, the posture of the obstacle image appearing in the bird's-eye view image is adjusted so as to lie along the reference lines RF3b and RF4a, which extend from cameras C_3 and C_4 toward the object scene in parallel with each other. This suppresses the change in posture of the obstacle image caused by changes in the positional relationship between the obstacle and camera C_3 or C_4, and improves the visibility of the obstacle in the common visual field VW_34.
In this embodiment, when the bird's-eye view images BEV_1 to BEV_4 are joined, the image portions outside the boundary line BL are deleted (see FIG. 5). However, the two partial images representing a common visual field may instead be blended with equal weighting.
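For the equal-weighting alternative, the overlap area could simply be averaged instead of being clipped at the boundary line. A minimal sketch, assuming both partial images are already warped into the same all-around coordinates and that a binary mask of the overlap area is available; the function name and fallback rule outside the overlap are assumptions of this illustration.

import numpy as np

def blend_overlap(part_a, part_b, overlap_mask):
    """Blend two partial bird's-eye views with equal (0.5/0.5) weights in the overlap."""
    out = np.where(overlap_mask[..., None] > 0,
                   (part_a.astype(np.float32) + part_b.astype(np.float32)) / 2.0,
                   np.maximum(part_a, part_b).astype(np.float32))  # outside the overlap, keep whichever view has content
    return out.astype(np.uint8)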
In this embodiment, the reference lines RFMb and RFNa (M: 1 → 2 → 3 → 4, N: 2 → 3 → 4 → 1) extend in parallel with the boundary line BL on the overlap area OL_MN. However, the reference lines RFMb and RFNa need only be parallel to each other; they need not be parallel to the boundary line BL on the overlap area OL_MN. For example, the reference lines RF3b and RF4a may be defined as shown in FIG. 22.
Further, when the reference lines RFMb and RFNa are extended so as not to be parallel to each other as shown in FIG. 23 or FIG. 24, it is preferable to calculate the shortest distance between the obstacle and the boundary line BL and to perform posture adjustment as in FIG. 15.
In this embodiment, when an obstacle is found in the common visual field VW_MN, the obstacle image processing is executed regardless of the size of the obstacle (see FIG. 17). However, when the obstacle is huge, such as a house or a building, correcting the posture of the obstacle image may instead degrade the reproducibility of the obstacle.
This concern can be addressed by adding the processing of steps S71 to S73 shown in FIG. 25. Referring to FIG. 25, the size of the obstacle is detected in step S71, and it is determined in step S73 whether the detected size is below a threshold value THsz. If the determination result is NO, the flow proceeds to step S19; if YES, the flow proceeds to step S21.
In this embodiment, an obstacle is detected in the common visual field VW_MN based on the difference image created for the common visual field VW_MN (see steps S13 to S15 in FIG. 17). However, the obstacle may instead be detected by a stereo vision technique or an optical flow technique, or by using an ultrasonic sensor, a millimeter-wave sensor, or a microwave sensor.
Furthermore, in this embodiment, when the coordinates corresponding to the bottom of the obstacle lie within the adjustment area BRG_MN, the posture of the obstacle image OBJ_N corresponding to camera C_N is corrected, and the obstacle image OBJ_N having the corrected posture is determined as the selected image S_MN.
However, which of the obstacle images OBJ_M and OBJ_N is subjected to this processing may be decided based on the difference in height between cameras C_M and C_N, the difference in distance from each of cameras C_M and C_N to the obstacle, or how the obstacle appears from each of cameras C_M and C_N. Alternatively, the postures of both obstacle images OBJ_M and OBJ_N may be corrected so as to lie along the reference lines RFMb and RFNa, the obstacle images OBJ_M and OBJ_N having the corrected postures may be combined, and the combined obstacle image may be determined as the selected image S_MN.
Note that utility poles, signs, bicycles, motorcycles, vending machines, and the like look markedly different depending on the viewpoint. Therefore, if such specific obstacles are registered in advance and the selected image for a registered obstacle is determined on the basis of visibility, degradation of the reproducibility of the obstacle can be avoided.
Also, in this embodiment, the obstacle image cut out from the bird's-eye view image is pasted on the all-around bird's-eye view image (see steps S31, S37, and S39 to S47 in FIG. 18 and step S65 in FIG. 21). However, the obstacle image appearing in the object scene image before conversion into the bird's-eye view image may instead be pasted on the all-around bird's-eye view image.
Notes on the embodiment described above are given below. These notes can be combined with the above embodiment in any manner as long as no contradiction arises.
The coordinate transformation for generating a bird's-eye view image from a captured image as described in the embodiment is generally called perspective projection transformation. Instead of using this perspective projection transformation, a bird's-eye view image may be generated from the captured image by known planar projective transformation. When planar projective transformation is used, a homography matrix (coordinate transformation matrix) for converting the coordinate values of each pixel of the captured image into the coordinate values of the corresponding pixel of the bird's-eye view image is obtained in advance at the camera calibration stage. The method of obtaining a homography matrix is known. At the time of image conversion, the captured image is then converted into the bird's-eye view image based on the homography matrix. In either case, the captured image is converted into the bird's-eye view image by projecting the captured image onto the bird's-eye view image.
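With the planar projective (homography) approach, calibration typically reduces to estimating a 3x3 matrix from at least four ground-plane correspondences and then warping each frame with it. The fragment below is a minimal OpenCV sketch of that idea; the correspondence points and the output size are placeholders, not calibration data from the patent.

import cv2
import numpy as np

# Four image points (pixels) and the matching road-plane points (bird's-eye pixels),
# obtained once at the camera calibration stage; the values here are placeholders.
src_pts = np.float32([[300, 700], [980, 700], [860, 420], [420, 420]])
dst_pts = np.float32([[200, 900], [600, 900], [600, 300], [200, 300]])

H = cv2.getPerspectiveTransform(src_pts, dst_pts)   # homography (coordinate transformation matrix)

def to_birds_eye(frame, out_size=(800, 1000)):
    """Warp one captured frame onto the road plane using the precomputed homography."""
    return cv2.warpPerspective(frame, H, out_size)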
Although the present invention has been described and illustrated in detail, it should be clearly understood that the description is by way of illustration and example only and is not to be taken by way of limitation; the spirit and scope of the present invention are limited only by the terms of the appended claims.
DESCRIPTION OF SYMBOLS
10 … maneuvering assisting apparatus
C_1 to C_4 … cameras
12 … image processing circuit
12p … CPU
12m … memory
14 … flash memory
16 … display device
100 … vehicle
200 … obstacle

Claims (6)

1. An image processing device, comprising:
a plurality of cameras each of which partially shares a common visual field and captures an object scene in a direction obliquely intersecting a reference plane;
creating means for creating a bird's-eye view image with respect to the reference plane based on a plurality of object scene images respectively output from the plurality of cameras;
determining means for determining a positional relationship between a three-dimensional object existing in the common visual field and the plurality of cameras; and
adjusting means for adjusting a posture of a three-dimensional object image on the bird's-eye view image created by the creating means, based on a determination result of the determining means.
2. The image processing device according to claim 1, wherein the determining means determines whether the three-dimensional object is sandwiched between a plurality of reference lines extending in a predetermined manner from the plurality of cameras toward the object scene, and
the adjusting means adjusts the posture of the three-dimensional object image so as to lie along the plurality of reference lines.
3. The image processing device according to claim 2, wherein the predetermined manner corresponds to a manner in which the plurality of reference lines are parallel to each other.
4. The image processing device according to claim 2, wherein the three-dimensional object image corresponds to a bird's-eye view image of the three-dimensional object captured by a reference camera that is one of the plurality of cameras, and
the adjusting means includes defining means for defining a line connecting the reference camera and the three-dimensional object when the three-dimensional object belongs to a field of view sandwiched between the plurality of reference lines, and correcting means for correcting the posture of the three-dimensional object image with reference to an angular difference between the line defined by the defining means and the reference line.
5. The image processing device according to claim 4, wherein the adjusting means includes selecting means for selecting, as the reference camera, the camera closer to the three-dimensional object among the plurality of cameras when the three-dimensional object is outside the field of view sandwiched between the plurality of reference lines.
6. The image processing device according to claim 1, further comprising restricting means for restricting the adjustment processing of the adjusting means when the three-dimensional object meets a specific condition.
PCT/JP2010/052530 2009-04-06 2010-02-19 Image processing device WO2010116801A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009091779A JP2010245803A (en) 2009-04-06 2009-04-06 Image processing device
JP2009-091779 2009-04-06

Publications (1)

Publication Number Publication Date
WO2010116801A1 true WO2010116801A1 (en) 2010-10-14

Family

ID=42936089

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/052530 WO2010116801A1 (en) 2009-04-06 2010-02-19 Image processing device

Country Status (2)

Country Link
JP (1) JP2010245803A (en)
WO (1) WO2010116801A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015133367A1 (en) * 2014-03-07 2015-09-11 日立建機株式会社 Periphery monitoring device for work machine
CN106536281A (en) * 2014-06-17 2017-03-22 罗伯特博世汽车转向有限公司 Method for supporting a driver by monitoring driving of a motor vehicle or motor vehicle with trailer
CN110177723A (en) * 2017-01-13 2019-08-27 Lg伊诺特有限公司 For providing the device of circle-of-sight visibility

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6277652B2 (en) * 2013-09-30 2018-02-14 株式会社デンソー Vehicle peripheral image display device and camera adjustment method
JP7366982B2 (en) * 2021-11-24 2023-10-23 本田技研工業株式会社 Control device, control method, and control program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006253872A (en) * 2005-03-09 2006-09-21 Toshiba Corp Apparatus and method for displaying vehicle perimeter image
JP2007104373A (en) * 2005-10-05 2007-04-19 Alpine Electronics Inc On-vehicle image displaying device
JP2008048317A (en) * 2006-08-21 2008-02-28 Sanyo Electric Co Ltd Image processing unit, and sight support device and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006253872A (en) * 2005-03-09 2006-09-21 Toshiba Corp Apparatus and method for displaying vehicle perimeter image
JP2007104373A (en) * 2005-10-05 2007-04-19 Alpine Electronics Inc On-vehicle image displaying device
JP2008048317A (en) * 2006-08-21 2008-02-28 Sanyo Electric Co Ltd Image processing unit, and sight support device and method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015133367A1 (en) * 2014-03-07 2015-09-11 日立建機株式会社 Periphery monitoring device for work machine
JP2015171013A (en) * 2014-03-07 2015-09-28 日立建機株式会社 Periphery monitoring device for work machine
KR101752613B1 (en) 2014-03-07 2017-06-29 히다찌 겐끼 가부시키가이샤 Periphery monitoring device for work machine
US10044933B2 (en) 2014-03-07 2018-08-07 Hitachi Construction Machinery Co., Ltd. Periphery monitoring device for work machine
CN106536281A (en) * 2014-06-17 2017-03-22 罗伯特博世汽车转向有限公司 Method for supporting a driver by monitoring driving of a motor vehicle or motor vehicle with trailer
CN110177723A (en) * 2017-01-13 2019-08-27 Lg伊诺特有限公司 For providing the device of circle-of-sight visibility
CN110177723B (en) * 2017-01-13 2022-11-04 Lg伊诺特有限公司 Device for providing a peripheral field of view

Also Published As

Publication number Publication date
JP2010245803A (en) 2010-10-28

Similar Documents

Publication Publication Date Title
WO2010119734A1 (en) Image processing device
JP2022095776A (en) Rear-stitched view panorama for rear-view visualization
US9858639B2 (en) Imaging surface modeling for camera modeling and virtual view synthesis
JP2010141836A (en) Obstacle detecting apparatus
JP5223811B2 (en) Image correction apparatus, image correction method, and conversion map creation method used therefor
JP5483120B2 (en) Vehicle perimeter monitoring system
US20120154592A1 (en) Image-Processing System and Image-Processing Method
US20120069153A1 (en) Device for monitoring area around vehicle
WO2010116801A1 (en) Image processing device
US20100194902A1 (en) Method for high dynamic range imaging
JP2008187564A (en) Camera calibration apparatus and method, and vehicle
CN103770706A (en) Dynamic rearview mirror display features
JP2009129001A (en) Operation support system, vehicle, and method for estimating three-dimensional object area
JP2010093605A (en) Maneuvering assisting apparatus
JP6812862B2 (en) Image processing system, imaging device, image processing method and program
JP2012147149A (en) Image generating apparatus
JP2010041530A (en) Steering supporting device
KR101239740B1 (en) An apparatus for generating around view image of vehicle using polygon mapping and multi look-up table
JP2009239754A (en) Image processor, image processing program, image processing system, and image processing method
US10897600B1 (en) Sensor fusion based perceptually enhanced surround view
JP2010258691A (en) Maneuver assisting apparatus
JP5183152B2 (en) Image processing device
JP2003091720A (en) View point converting device, view point converting program and image processor for vehicle
JP5853457B2 (en) Vehicle perimeter monitoring system
KR100948872B1 (en) Camera image correction method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10761503

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10761503

Country of ref document: EP

Kind code of ref document: A1