WO2010119734A1 - Image processing device - Google Patents

Image processing device

Info

Publication number
WO2010119734A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
bird
obj
images
camera
Prior art date
Application number
PCT/JP2010/053389
Other languages
English (en)
Japanese (ja)
Inventor
亮平 山本
長輝 楊
Original Assignee
三洋電機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三洋電機株式会社
Publication of WO2010119734A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Definitions

  • the present invention relates to an image processing apparatus, and more particularly to an image processing apparatus that converts the object scene images output from each of a plurality of cameras, which capture a reference plane from an oblique direction so as to partially share a common visual field, into bird's-eye view images.
  • an example of this type of device is disclosed in Patent Document 1.
  • the four cameras are mounted on the vehicle so that the fields of view of two adjacent cameras partially overlap.
  • the surroundings of the vehicle are captured by these cameras, and the object scene image output from each camera is converted into a bird's-eye view image.
  • boundary lines are assigned on the partial images corresponding to the overlapping fields of view in the converted bird's-eye view images.
  • the four bird's-eye view images respectively corresponding to the four cameras are combined with each other through a trimming process that refers to these boundary lines.
  • if a three-dimensional object image appears near a boundary line, the position of the boundary line is changed so as to avoid the three-dimensional object image. The quality of the three-dimensional object image is thereby maintained.
  • however, the reproducibility of a three-dimensional object in the bird's-eye view image varies with factors such as the height difference between the camera and the three-dimensional object and the distance from the camera to the three-dimensional object.
  • the position of the boundary line is not controlled in consideration of such characteristics, so the visibility of the three-dimensional object is limited.
  • a main object of the present invention is to provide an image processing apparatus capable of improving the visibility of a three-dimensional object.
  • an image processing apparatus according to the present invention comprises: capturing means for capturing a plurality of object scene images respectively output from a plurality of cameras that capture a reference plane obliquely from above so as to partially share a common visual field; conversion means for converting the plurality of object scene images captured by the capturing means into a plurality of bird's-eye view images representing a state in which the reference plane is viewed from directly above; combining means for combining the plurality of bird's-eye view images converted by the conversion means with reference to a weight assigned to the common visual field; first detection means for detecting a difference in size between the plurality of three-dimensional object images respectively appearing in the plurality of bird's-eye view images converted by the conversion means when a three-dimensional object exists in the common visual field; and first adjustment means for adjusting the weight with reference to the difference detected by the first detection means.
  • the first adjustment means adjusts the weight so that a larger three-dimensional object image is reproduced.
  • second detection means for detecting the movement amount of the three-dimensional object in relation to the detection processing of the first detection means
  • second adjustment means for adjusting the weight with reference to the movement amount detected by the second detection means
  • control means for activating the second adjustment means instead of the first adjustment means when the difference detected by the first detection means falls below the reference.
  • the second adjustment means adjusts the weight so that the three-dimensional object image of the camera existing in the moving direction of the three-dimensional object is reproduced.
  • the weight includes, as a parameter, a boundary line defined in the common visual field, and the combining means includes deletion means for deleting the partial image outside the boundary line from each of the plurality of bird's-eye view images, and coupling means for coupling to each other the plurality of partial bird's-eye view images remaining after the deletion processing of the deletion means.
  • the plurality of cameras are provided on a moving body, and the apparatus further includes display means for displaying the combined bird's-eye view image created by the combining means to the operator of the moving body.
  • the weight referred to when combining a plurality of bird's-eye images is adjusted by paying attention to the difference between the cameras in the size of the three-dimensional object image. Thereby, the visibility of a three-dimensional object can be improved.
  • FIG. 1 is a block diagram showing the basic configuration of the present invention.
  • FIG. 2 is a block diagram showing the configuration of one embodiment of the present invention.
  • FIG. 3 is an illustrative view showing the fields of view captured by the cameras attached to the vehicle.
  • FIG. 4A is an illustrative view showing an example of a bird's-eye view image based on the output of the front camera, FIG. 4B is an illustrative view showing an example of a bird's-eye view image based on the output of the right camera, FIG. 4C is an illustrative view showing an example of a bird's-eye view image based on the output of the left camera, and FIG. 4D is an illustrative view showing an example of a bird's-eye view image based on the output of the rear camera.
  • FIG. 10 is an illustrative view showing another portion of the operation of creating the driving assistance image, and a further illustrative view shows an example of that operation.
  • the remaining figures are flowcharts showing portions of the behavior of the CPU applied to the embodiment in FIG. 2.
  • the image processing apparatus of the present invention is basically configured as follows.
  • the capturing means 2 captures a plurality of object scene images respectively output from a plurality of cameras 1, 1, ... that capture the reference plane obliquely from above so as to partially share a common visual field.
  • the converting unit 3 converts the plurality of object scene images captured by the capturing unit 2 into a plurality of bird's-eye view images representing a state where the reference plane is viewed from directly above.
  • the synthesizing unit 4 synthesizes the plurality of bird's-eye images converted by the converting unit 3 with reference to the weight assigned to the common visual field.
  • the first detection unit 5 detects a difference in size between the plurality of three-dimensional object images respectively appearing in the plurality of bird's-eye images converted by the conversion unit 3 when a three-dimensional object exists in the common visual field.
  • the first adjusting unit 6 adjusts the weight with reference to the difference detected by the first detecting unit.
  • the size of the three-dimensional object image that appears in the bird's-eye view image differs depending on the height of the three-dimensional object from the reference plane and the height difference between the camera 1 and the three-dimensional object.
  • the weight referred to when combining a plurality of bird's-eye images is adjusted by paying attention to the difference between the cameras 1 in the size of such a three-dimensional object image. Thereby, the visibility of a three-dimensional object can be improved.
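Purely as an illustration of how the capturing/conversion, first detection, first adjustment, and combining means interact (this is not code from the patent), the flow for two cameras sharing one common field of view can be sketched as below. The boundary column stands in for the "weight", and every name is hypothetical.

```python
# Illustrative sketch only (not the patent's implementation); names are hypothetical.
import numpy as np

def process_overlap(bev_a: np.ndarray, bev_b: np.ndarray,
                    size_a: float, size_b: float,
                    boundary_col: int, ref: float = 0.0):
    """bev_a, bev_b: aligned bird's-eye views of the common field of view.
    size_a, size_b: sizes of the 3-D object image in each view (detection means 5).
    Returns the combined view and the (possibly adjusted) boundary column."""
    w = bev_a.shape[1]
    diff = size_a - size_b                    # difference found by the detection means
    if diff > ref:                            # object image larger in camera A's view:
        boundary_col = w                      # keep camera A's image (adjustment means 6)
    elif diff < -ref:                         # object image larger in camera B's view
        boundary_col = 0
    combined = bev_b.copy()                   # combining means 4: trim at the boundary
    combined[:, :boundary_col] = bev_a[:, :boundary_col]
    return combined, boundary_col
```

In the embodiment described below, the weight is realized as the boundary line BL along which the bird's-eye view images are trimmed before being joined.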
  • the steering support device 10 of this embodiment shown in FIG. 2 includes four cameras C_1 to C_4.
  • the cameras C_1 to C_4 output scene images P_1 to P_4 every 1/30 seconds in synchronization with the common timing signal.
  • the output scene images P_1 to P_4 are given to the image processing circuit 12.
  • camera C_1 is installed at the center of the front portion of vehicle 100 in a posture in which the optical axis of camera C_1 extends obliquely downward in front of vehicle 100.
  • the camera C_2 is installed on the upper right side of the vehicle 100 in a posture in which the optical axis of the camera C_2 extends obliquely downward to the right of the vehicle 100.
  • the camera C_3 is installed at the center of the rear part of the vehicle 100 in a posture in which the optical axis of the camera C_3 extends obliquely downward rearward of the vehicle 100.
  • the camera C_4 is installed on the upper left side of the vehicle 100 such that the optical axis of the camera C_4 extends obliquely downward to the left of the vehicle 100.
  • the object scene around the vehicle 100 is thus captured by the cameras C_1 to C_4 from directions that obliquely intersect the road surface.
  • Camera C_1 has a field of view VW_1 that captures the front of the vehicle 100
  • camera C_2 has a field of view VW_2 that captures the right direction of the vehicle 100
  • camera C_3 has a field of view VW_3 that captures the rear of the vehicle 100
  • camera C_4 has a field of view VW_4 that captures the left direction of the vehicle 100.
  • the visual fields VW_1 and VW_2 have a common visual field VW_12
  • the visual fields VW_2 and VW_3 have a common visual field VW_23
  • the visual fields VW_3 and VW_4 have a common visual field VW_34
  • the visual fields VW_4 and VW_1 have a common visual field VW_41.
  • the CPU 12p provided in the image processing circuit 12 generates the bird's-eye view image BEV_1 shown in FIG. 4A based on the object scene image P_1 output from the camera C_1, and generates the bird's-eye view image BEV_2 shown in FIG. 4B based on the object scene image P_2 output from the camera C_2.
  • the CPU 12p also generates the bird's-eye view image BEV_3 shown in FIG. 4C based on the object scene image P_3 output from the camera C_3, and generates the bird's-eye view image BEV_4 shown in FIG. 4D based on the object scene image P_4 output from the camera C_4.
  • the bird's-eye view image BEV_1 corresponds to an image captured by a virtual camera looking down the visual field VW_1 in the vertical direction
  • the bird's-eye view image BEV_2 corresponds to an image captured by a virtual camera looking down the visual field VW_2 in the vertical direction
  • the bird's-eye view image BEV_3 corresponds to an image captured by a virtual camera looking down the visual field VW_3 in the vertical direction
  • the bird's-eye view image BEV_4 corresponds to an image captured by a virtual camera looking down the visual field VW_4 in the vertical direction.
  • the bird's-eye view image BEV_1 has a bird's-eye coordinate system X1/Y1,
  • the bird's-eye view image BEV_2 has a bird's-eye coordinate system X2/Y2,
  • the bird's-eye view image BEV_3 has a bird's-eye coordinate system X3/Y3, and
  • the bird's-eye view image BEV_4 has a bird's-eye coordinate system X4/Y4.
  • Such bird's-eye images BEV_1 to BEV_4 are held in the work area W1 of the memory 12m.
  • the CPU 12p deletes the partial image outside the boundary line BL from each of the bird's-eye view images BEV_1 to BEV_4, and combines the partial bird's-eye view images remaining after the deletion with each other by rotation/translation processing (see FIG. 5). As a result, the all-around bird's-eye view image shown in FIG. 6 is obtained in the work area W2 of the memory 12m.
  • the overlapping area OL_12 indicated by diagonal lines corresponds to the common visual field VW_12
  • the overlapping area OL_23 indicated by diagonal lines corresponds to the common visual field VW_23
  • the overlapping area OL_34 indicated by hatching corresponds to the common visual field VW_34
  • the overlapping area OL_41 indicated by hatching corresponds to the common visual field VW_41.
  • an image D1 in which the overlapping areas OL_12 to OL_41 are located at the four corners is extracted from the all-around bird's-eye view image in the work area W2, and a vehicle image D2 imitating the vehicle 100 viewed from above is pasted at the center of the extracted image D1.
  • as a result, the driving assistance image shown in FIG. 7 is displayed on the monitor screen of the display device 16 installed at the driver's seat.
  • a part of the overlapping area OL_12 that forms the image D1 is defined as “reproduction overlapping area OLD_12”, and a part of the overlapping area OL_23 that forms the image D1 is defined as “reproduction overlapping area OLD_23”.
  • a part of the overlapping area OL_34 that forms the image D1 is defined as “reproduction overlapping area OLD_34”
  • a part of the overlapping area OL_41 that forms the image D1 is defined as “reproduction overlapping area OLD_41”.
  • camera C_3 is disposed at the rear of the vehicle 100 so as to face obliquely rearward and downward. If the depression angle of the camera C_3 is "θd", the angle θ shown in FIG. 7 corresponds to "180° − θd". Further, the angle θ is defined in the range 90° < θ < 180°.
  • FIG. 9 shows the relationship between the camera coordinate system XYZ, the coordinate system Xp/Yp of the imaging surface S of the camera C_3, and the world coordinate system Xw/Yw/Zw.
  • the camera coordinate system XYZ is a three-dimensional coordinate system with the X, Y, and Z axes as coordinate axes.
  • the coordinate system Xp/Yp is a two-dimensional coordinate system having the Xp axis and the Yp axis as coordinate axes.
  • the world coordinate system Xw/Yw/Zw is a three-dimensional coordinate system having the Xw axis, the Yw axis, and the Zw axis as coordinate axes.
  • the optical center of the camera C_3 is defined as the origin O,
  • the Z axis is defined in the optical axis direction
  • the X axis is defined in the direction perpendicular to the Z axis and parallel to the road surface
  • a Y axis is defined in a direction orthogonal to the Z axis and the X axis.
  • in the coordinate system Xp/Yp of the imaging surface S, the center of the imaging surface S is taken as the origin, the Xp axis is defined in the horizontal direction of the imaging surface S, and the Yp axis is defined in the vertical direction of the imaging surface S.
  • the intersection of the vertical line passing through the origin O of the camera coordinate system XYZ and the road surface is defined as the origin Ow
  • the Yw axis is defined in the direction perpendicular to the road surface.
  • the Xw axis is defined in a direction parallel to the X axis of the camera coordinate system XYZ,
  • a Zw axis is defined in a direction orthogonal to the Xw axis and the Yw axis.
  • the distance from the Xw axis to the X axis is "h", and the obtuse angle formed by the Zw axis and the Z axis corresponds to the angle θ described above.
  • a conversion formula between the coordinates (x, y, z) of the camera coordinate system XYZ and the coordinates (xw, yw, zw) of the world coordinate system Xw/Yw/Zw is expressed by Equation 1.
  • Equation 3 is obtained based on Equation 1 and Equation 2.
  • Equation 3 shows a conversion formula between the coordinates (xp, yp) of the coordinate system Xp / Yp of the imaging surface S and the coordinates (xw, zw) of the two-dimensional road surface coordinate system Xw / Zw.
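The bodies of Equations 1 to 3 appear only as drawings in the original publication and are not reproduced in this text. Assuming a standard pinhole camera with focal length f (a symbol not defined in this excerpt) and the coordinate definitions above, one consistent reconstruction of these relations is:

```latex
% Assumed reconstruction of Equation 1: camera coordinates (x, y, z) of a point
% with world coordinates (xw, yw, zw), for a camera at height h whose optical
% axis forms the obtuse angle theta with the Zw axis.
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix}
\left( \begin{pmatrix} x_w \\ y_w \\ z_w \end{pmatrix} - \begin{pmatrix} 0 \\ h \\ 0 \end{pmatrix} \right)

% Assumed reconstruction of Equation 2: perspective projection onto the imaging surface S.
x_p = f\,\frac{x}{z}, \qquad y_p = f\,\frac{y}{z}

% Assumed reconstruction of Equation 3: the two combined for a point on the road surface (yw = 0).
x_p = \frac{f\,x_w}{h\sin\theta + z_w\cos\theta}, \qquad
y_p = \frac{f\,(z_w\sin\theta - h\cos\theta)}{h\sin\theta + z_w\cos\theta}
```

The sign conventions of the published equations may differ, but any pinhole model consistent with the definitions above leads to relations of this form.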
  • a bird's eye coordinate system X3 / Y3 that is a coordinate system of the bird's eye image BEV_3 shown in FIG. 4C is defined.
  • the bird's-eye coordinate system X3 / Y3 is a two-dimensional coordinate system having the X3 axis and the Y3 axis as coordinate axes.
  • the coordinates in the bird's-eye view coordinate system X3 / Y3 are expressed as (x3, y3)
  • the position of each pixel forming the bird's-eye view image BEV_3 is represented by the coordinates (x3, y3).
  • "x3" and "y3" respectively indicate the X3-axis component and the Y3-axis component in the bird's-eye coordinate system X3/Y3.
  • the projection from the two-dimensional coordinate system Xw / Zw representing the road surface to the bird's eye coordinate system X3 / Y3 corresponds to a so-called parallel projection.
  • the conversion formula between the coordinates (xw, zw) of the two-dimensional coordinate system Xw/Zw and the coordinates (x3, y3) of the bird's-eye coordinate system X3/Y3 is represented by Equation 4.
  • the height H of the virtual camera is determined in advance.
  • Equation 7 corresponds to a conversion formula for converting the coordinates (xp, yp) of the coordinate system Xp / Yp of the imaging surface S into the coordinates (x3, y3) of the bird's eye coordinate system X3 / Y3.
  • the coordinates (xp, yp) of the coordinate system Xp/Yp of the imaging surface S represent the coordinates of the object scene image P_3 captured by the camera C_3. Accordingly, the object scene image P_3 from the camera C_3 is converted into the bird's-eye view image BEV_3 by using Equation 7. Actually, the object scene image P_3 is first subjected to image processing such as lens distortion correction, and then converted into the bird's-eye view image BEV_3 by Equation 7.
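Equations 4 to 7 are likewise drawings in the original. Continuing the reconstruction above (same pinhole assumptions, virtual camera height H), Equation 4 and the image-to-bird's-eye relation that Equation 7 plays the role of would take the following form:

```latex
% Assumed reconstruction of Equation 4: parallel projection of the road plane
% onto the bird's-eye coordinate system, scaled by the virtual camera height H.
x_3 = \frac{f\,x_w}{H}, \qquad y_3 = \frac{f\,z_w}{H}

% Substituting xw = H x3 / f and zw = H y3 / f into the reconstructed Equation 3
% gives, for each bird's-eye pixel (x3, y3), the imaging-surface coordinates
% (xp, yp) at which the captured image is sampled (the role of Equation 7):
x_p = \frac{f H\,x_3}{f h \sin\theta + H y_3 \cos\theta}, \qquad
y_p = \frac{f\,(H y_3 \sin\theta - f h \cos\theta)}{f h \sin\theta + H y_3 \cos\theta}
```

Filling every pixel (x3, y3) of BEV_3 by sampling the distortion-corrected object scene image P_3 at the (xp, yp) obtained this way is equivalent to applying Equation 7.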
  • the obstacles 201 and 202 captured by the camera C_3 appear in the bird's-eye view image BEV_3 as the obstacle images OBJ_3_1 and OBJ_3_2. Similarly, the obstacles 201 and 202 captured by the camera C_4 appear in the bird's-eye view image BEV_4 as the obstacle images OBJ_4_1 and OBJ_4_2.
  • the postures of the obstacle images OBJ_3_1 and OBJ_4_1 are different from each other due to the difference between the viewpoint of the camera C_3 and the viewpoint of the camera C_4.
  • the postures of the obstacle images OBJ_3_2 and OBJ_4_2 are also different from each other due to the difference between the viewpoint of the camera C_3 and the viewpoint of the camera C_4.
  • the obstacle image OBJ_3_1 is reproduced so as to fall down along a straight line connecting the camera C_3 and the bottom of the obstacle 201, and the obstacle image OBJ_4_1 is reproduced so as to fall down along a straight line connecting the camera C_4 and the bottom of the obstacle 201.
  • likewise, the obstacle image OBJ_3_2 is reproduced so as to fall down along a straight line connecting the camera C_3 and the bottom of the obstacle 202, and the obstacle image OBJ_4_2 is reproduced so as to fall down along a straight line connecting the camera C_4 and the bottom of the obstacle 202.
  • the sizes of the obstacle images OBJ_3_1 and OBJ_4_1 differ from each other because the height of the camera C_3 and the height of the camera C_4 differ, and because the distance from the camera C_3 to the obstacle 201 and the distance from the camera C_4 to the obstacle 201 differ.
  • the sizes of the obstacle images OBJ_3_2 and OBJ_4_2 likewise differ from each other because of the difference between the heights of the cameras C_3 and C_4, the difference between the distance from the camera C_3 to the obstacle 202 and the distance from the camera C_4 to the obstacle 202, and so on.
  • the CPU 12p executes the following processing.
  • the boundary line BL is set in the initial state in each of the common visual fields VW_12, VW_23, and VW_41. That is, the boundary line BL is set so as to connect the diagonals of the reproduction overlap areas OLD_12, OLD_23, and OLD_41 (see FIG. 11).
  • the obstacles 201 and 202 exist in the common visual field VW_34, and the obstacle images OBJ_3_1 and OBJ_3_2 are reproduced as the bird's-eye image BEV_3, while the obstacle images OBJ_4_1 and OBJ_4_2 are reproduced as the bird's-eye image BEV_4.
  • the boundary line BL is corrected in the following manner.
  • the size difference between the obstacle images OBJ_3_1 and OBJ_4_1 is calculated as "ΔSZ_1",
  • and the size difference between the obstacle images OBJ_3_2 and OBJ_4_2 is calculated as "ΔSZ_2".
  • the size difference ΔSZ_1 is obtained by subtracting the size of the obstacle image OBJ_4_1 from the size of the obstacle image OBJ_3_1.
  • the size difference ΔSZ_2 is obtained by subtracting the size of the obstacle image OBJ_4_2 from the size of the obstacle image OBJ_3_2.
  • if the size difference ΔSZ_1 exceeds the reference value REF, the obstacle image OBJ_3_1 is set as the selected image OBJ_1_SEL.
  • if the size difference ΔSZ_1 falls below "−REF", the obstacle image OBJ_4_1 is set as the selected image OBJ_1_SEL. That is, if the size difference between the obstacle image OBJ_3_1 and the obstacle image OBJ_4_1 is sufficiently large, the larger obstacle image is selected.
  • if the size difference ΔSZ_1 is equal to or less than REF and equal to or greater than "−REF", a motion vector of the obstacle 201 is detected based on the obstacle images OBJ_3_1 and OBJ_4_1.
  • if the amount of the detected motion vector exceeds the threshold value THmv and the direction of the motion vector is the camera C_3 side, the obstacle image OBJ_3_1 is set as the selected image OBJ_1_SEL. If the amount of the detected motion vector exceeds the threshold value THmv and the direction of the motion vector is the camera C_4 side, the obstacle image OBJ_4_1 is set as the selected image OBJ_1_SEL. That is, if the movement of the obstacle 201 is large, the obstacle image corresponding to the camera in the direction in which the obstacle 201 moves is selected.
  • if the amount of the detected motion vector is equal to or less than the threshold value THmv, the setting of the previous selected image OBJ_1_SEL is maintained.
  • Such processing is also executed for the obstacle 202, whereby one of the obstacle images OBJ_3_2 and OBJ_4_2 is set as the selected image OBJ_2_SEL.
  • the boundary line BL is set in the common visual field VW_34 so that the selected images OBJ_1_SEL to OBJ_2_SEL thus set are reproduced.
  • if the size of the obstacle image OBJ_3_1 is sufficiently larger than the size of the obstacle image OBJ_4_1 and the size of the obstacle image OBJ_3_2 is substantially the same as the size of the obstacle image OBJ_4_2, the boundary line BL is set as shown in the lower part of the corresponding figure. As a result, the obstacle images OBJ_3_1 and OBJ_4_2 are reproduced on the display device 16.
  • if the size of the obstacle image OBJ_3_1 is substantially the same as the size of the obstacle image OBJ_4_1 and the size of the obstacle image OBJ_3_2 is substantially the same as the size of the obstacle image OBJ_4_2, the boundary line BL is set as shown in the lower part of the corresponding figure.
  • as a result, the obstacle images OBJ_3_1 and OBJ_4_2 are reproduced on the display device 16.
  • if the size of the obstacle image OBJ_3_1 is ... the size of the obstacle image OBJ_4_1, and the size of the obstacle image OBJ_3_2 is substantially the same as the size of the obstacle image OBJ_4_2, the boundary line BL is set as shown in the lower part of the corresponding figure. As a result, the obstacle images OBJ_3_1 and OBJ_3_2 are reproduced on the display device 16.
  • the CPU 12p executes processing according to the flowcharts shown in FIGS.
  • the control program corresponding to these flowcharts is stored in the flash memory 14 (see FIG. 1).
  • in step S1 shown in FIG. 15, the scene images P_1 to P_4 are captured from the cameras C_1 to C_4.
  • in step S3, bird's-eye view images BEV_1 to BEV_4 are created based on the captured object scene images P_1 to P_4.
  • the created bird's-eye view images BEV_1 to BEV_4 are secured in the work area W1.
  • in step S5, an obstacle detection process is executed.
  • in step S7, the all-around bird's-eye view image is created based on the bird's-eye view images BEV_1 to BEV_4 created in step S3.
  • the created all-around bird's-eye view image is secured in the work area W2.
  • a driving support image based on the all-around bird's-eye view image secured in the work area W2 is then displayed.
  • step S5 follows the subroutine shown in FIGS.
  • variables M and N are set to “1” and “2”, respectively.
  • the variable M is a variable that is updated in the order of “1” ⁇ “2” ⁇ “3” ⁇ “4”
  • the variable N is a variable that is updated in the order of "2" → "3" → "4" → "1".
  • in step S13, a difference image in the common visual field VW_MN is created based on the bird's-eye view image BEV_M based on the output of the camera C_M and the bird's-eye view image BEV_N based on the output of the camera C_N.
  • in step S15, it is determined based on the difference image created in step S13 whether an obstacle exists in the common visual field VW_MN.
  • if no obstacle is detected from the common visual field VW_MN, the boundary line is initialized in step S17. On the other hand, if an obstacle is detected from the common visual field VW_MN, boundary correction processing is executed in step S19. When the process of step S17 or S19 is completed, it is determined in step S21 whether or not the variable M indicates "4". If the variable M is less than "4", the variables M and N are updated in step S23, and then the process returns to step S13. On the other hand, if the variable M is "4", the process returns to the upper-layer routine.
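The difference-image test of steps S13 and S15 exploits the fact that points on the road plane project to the same position in both bird's-eye view images, while a three-dimensional object does not. A minimal sketch of such a test, assuming the two bird's-eye views have already been aligned on the common field of view and using illustrative threshold names, is:

```python
# Sketch of the difference-image test in steps S13-S15; thresholds are illustrative.
import numpy as np

def obstacle_in_common_view(bev_m: np.ndarray, bev_n: np.ndarray,
                            pixel_thresh: int = 30, min_area: int = 200) -> bool:
    """Return True if the two aligned bird's-eye views disagree enough to
    indicate a three-dimensional object in the common field of view."""
    diff = np.abs(bev_m.astype(np.int16) - bev_n.astype(np.int16))
    if diff.ndim == 3:                  # collapse color channels if present
        diff = diff.max(axis=2)
    mask = diff > pixel_thresh          # per-pixel "differs" map (step S13)
    return int(mask.sum()) >= min_area  # enough differing pixels -> obstacle (step S15)
```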
  • the boundary correction processing in step S19 shown in FIG. 16 is executed according to the subroutines shown in FIGS.
  • in step S31, the number of obstacles existing in the common visual field VW_MN is specified as "Kmax".
  • in step S33, the variable K is set to "1",
  • and in step S35, the size difference between the obstacle images OBJ_M_K and OBJ_N_K is calculated as "ΔSZ_K".
  • the size difference ΔSZ_K is obtained by subtracting the size of the obstacle image OBJ_N_K from the size of the obstacle image OBJ_M_K.
  • in step S37, it is determined whether or not the size difference ΔSZ_K exceeds the reference value REF, and in step S39, it is determined whether or not the size difference ΔSZ_K is lower than the reference value "−REF". If "YES" in step S37, the process proceeds to step S41 to set the obstacle image OBJ_M_K as the selected image OBJ_K_SEL. If "YES" in step S39, the process proceeds to step S43 to set the obstacle image OBJ_N_K as the selected image OBJ_K_SEL.
  • if "NO" in both steps S37 and S39, the process proceeds to step S45, and the motion vector of the obstacle 20K is detected based on the obstacle images OBJ_M_K and OBJ_N_K.
  • in step S47, it is determined whether or not the amount of the detected motion vector exceeds the threshold value THmv.
  • in step S51, it is determined whether or not the direction of the detected motion vector is on the camera C_M side.
  • if "NO" in step S47, the selected image OBJ_K_SEL is set to the same setting as the previous time in step S49. If "YES" in both steps S47 and S51, the process proceeds to step S53 to set the obstacle image OBJ_M_K as the selected image OBJ_K_SEL. If "YES" in step S47 but "NO" in step S51, the process proceeds to step S55 to set the obstacle image OBJ_N_K as the selected image OBJ_K_SEL.
  • when the setting of the selected image OBJ_K_SEL is completed, the variable K is incremented in step S57, and whether or not the incremented variable K exceeds the variable Kmax is determined in step S59. If "NO" in step S59, the process returns to step S35; if "YES" in step S59, the process proceeds to step S61.
  • in step S61, the boundary line BL is set in the common visual field VW_MN so that the selected images OBJ_1_SEL to OBJ_Kmax_SEL are reproduced.
  • the process then returns to the upper-layer routine.
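The per-obstacle decision of steps S35 to S55 can be condensed into the following sketch; the measured sizes, motion vector, and direction test are assumed to be supplied by other processing, and the parameter names mirror REF and THmv from the description:

```python
# Sketch of the selection rule in steps S35-S55; inputs are assumed to be measured elsewhere.
from typing import Optional, Tuple

def select_obstacle_image(size_m: float, size_n: float,
                          motion: Optional[Tuple[float, float]],
                          toward_camera_m: bool,
                          previous: str, REF: float, THmv: float) -> str:
    """Return 'M' or 'N': which camera's obstacle image should be reproduced."""
    delta = size_m - size_n                             # step S35: size difference
    if delta > REF:                                     # step S37: image from C_M clearly larger
        return 'M'                                      # step S41
    if delta < -REF:                                    # step S39: image from C_N clearly larger
        return 'N'                                      # step S43
    if motion is None:
        return previous
    amount = (motion[0] ** 2 + motion[1] ** 2) ** 0.5   # steps S45/S47: motion vector amount
    if amount <= THmv:                                  # small movement: keep previous selection
        return previous
    return 'M' if toward_camera_m else 'N'              # steps S51-S55: follow the moving direction
```

The returned label would then drive the boundary-line setting of step S61 so that the selected image's side of the common field of view is kept.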
  • in step S71, the variable M is set to "1".
  • the variable M is a variable that is updated in the order of "1" → "2" → "3" → "4".
  • in step S73, the image outside the boundary line is deleted from the bird's-eye view image BEV_M, and in step S75, it is determined whether or not the variable M has reached "4". If the variable M is less than "4", the variable M is updated in step S77, and the process returns to step S73.
  • in step S79, the partial bird's-eye view images BEV_1 to BEV_4 remaining after the deletion process in step S73 are coupled to each other by coordinate transformation.
  • the process then returns to the upper-layer routine.
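Steps S71 to S79 amount to masking each bird's-eye view image with its boundary line and pasting the remaining parts together. A simplified sketch, assuming the four images have already been rotated and translated into the all-around frame and that a boolean mask marks the region inside the boundary line BL for each camera, is:

```python
# Sketch of the combination in steps S71-S79 under the assumptions stated above.
import numpy as np

def combine_bird_eye(bev_images: list, inside_masks: list) -> np.ndarray:
    """Delete the part of each image outside its boundary line (S73) and paste
    the remaining partial images into one all-around bird's-eye view (S79)."""
    assert len(bev_images) == len(inside_masks) == 4
    out = np.zeros_like(bev_images[0])
    for bev, mask in zip(bev_images, inside_masks):
        out[mask] = bev[mask]        # keep only the partial image inside BL
    return out
```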
  • the cameras C_3 and C_4 capture the road surface obliquely from above so as to partially share the common visual field VW_34.
  • the CPU 12p detects a difference in size between the three-dimensional object images that appear in the bird's-eye view images BEV_3 and BEV_4 (S35), and adjusts the setting of the boundary line BL with reference to the detected difference (S41, S43, S61).
  • the boundary line BL is set so that a larger three-dimensional object image is reproduced.
  • the size of the three-dimensional object image that appears in each of the bird's-eye images BEV_3 and BEV_4 varies depending on the height of the three-dimensional object from the road surface and the height difference between each of the cameras C_3 and C_4 and the three-dimensional object.
  • the boundary line BL that is referred to when the bird's-eye view images BEV_3 and BEV_4 are combined is thus adjusted with attention to the difference between the cameras C_3 and C_4 in the size of the three-dimensional object image. The visibility of the three-dimensional object can thereby be improved.
  • an obstacle is detected from the common visual field VW_MN based on the difference image created for the common visual field VW_MN (see steps S13 to S15 in FIG. 17).
  • instead of the difference image, a stereo vision method or an optical flow method may be used, or an ultrasonic sensor, a millimeter-wave sensor, or a microwave sensor may be used.
  • the coordinate transformation for generating a bird's-eye view image from a photographed image as described in the embodiment is generally called perspective projection transformation.
  • instead of this, a bird's-eye view image may be generated from the captured image by a known planar projective transformation.
  • in planar projective transformation, a homography matrix (coordinate transformation matrix) for converting the coordinate value of each pixel on the captured image into the coordinate value of the corresponding pixel on the bird's-eye view image is obtained in advance at the stage of camera calibration processing.
  • a method for obtaining a homography matrix is known. When performing image conversion, it is sufficient to convert the captured image into a bird's-eye view image based on the homography matrix. In either case, the captured image is converted into the bird's-eye view image by projecting the captured image onto the bird's-eye view image.
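For reference, the homography-based alternative can be sketched with standard OpenCV calls; the four point correspondences below are placeholders standing in for values that would come from the camera calibration processing mentioned above, and the file name and output size are likewise illustrative:

```python
# Sketch of the planar projective (homography) conversion; point values are placeholders.
import cv2
import numpy as np

# pixel coordinates of four road-plane points in the captured image (placeholders)
src = np.float32([[120, 480], [520, 480], [400, 300], [240, 300]])
# the same four points in bird's-eye view coordinates (placeholders)
dst = np.float32([[200, 600], [440, 600], [440, 200], [200, 200]])

H = cv2.getPerspectiveTransform(src, dst)       # homography obtained at calibration time

captured = cv2.imread("camera_3.png")           # placeholder file name
if captured is not None:
    bird_eye = cv2.warpPerspective(captured, H, (640, 640))  # project onto the bird's-eye plane
```

cv2.getPerspectiveTransform computes the 3x3 homography from the four correspondences, and cv2.warpPerspective then performs the projection of the captured image onto the bird's-eye view plane.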

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Considering cameras C_3 and C_4 among the four cameras, the cameras C_3 and C_4 capture the road surface obliquely from above so that their fields of view partially overlap. A CPU captures the object scene images output from the cameras C_3 and C_4, respectively, converts these images into bird's-eye view images, and combines the two bird's-eye view images with reference to a boundary line (BL) assigned to the overlapping field of view. When a three-dimensional object exists in the overlapping area, the CPU detects the differences in size between the three-dimensional object images (OBJ_3_1) and (OBJ_4_1) on the one hand, and the three-dimensional object images (OBJ_3_2) and (OBJ_4_2) on the other hand, each appearing in the corresponding bird's-eye view images, and adjusts the setting of the boundary line with reference to the detected differences.
PCT/JP2010/053389 2009-04-17 2010-03-03 Dispositif de traitement d'image WO2010119734A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009-100552 2009-04-17
JP2009100552A JP2010250640A (ja) 2009-04-17 2009-04-17 画像処理装置

Publications (1)

Publication Number Publication Date
WO2010119734A1 true WO2010119734A1 (fr) 2010-10-21

Family

ID=42982402

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/053389 WO2010119734A1 (fr) 2009-04-17 2010-03-03 Dispositif de traitement d'image

Country Status (2)

Country Link
JP (1) JP2010250640A (fr)
WO (1) WO2010119734A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8655019B2 (en) 2009-09-24 2014-02-18 Panasonic Corporation Driving support display device
WO2014069471A1 (fr) 2012-10-31 2014-05-08 クラリオン株式会社 Système de traitement d'image et procédé de traitement d'image
JP2014183499A (ja) * 2013-03-19 2014-09-29 Sumitomo Heavy Ind Ltd 作業機械用周辺監視装置
EP2902261A1 (fr) * 2014-01-29 2015-08-05 MAN Truck & Bus AG Procédé de représentation imagée d'une zone environnante d'un véhicule automobile équipé d'un système d'affichage de vue d'avion
WO2016142079A1 (fr) * 2015-03-10 2016-09-15 Robert Bosch Gmbh Procédé pour assembler deux images de l'environnement d'un véhicule et dispositif correspondant
CN105960800A (zh) * 2014-03-27 2016-09-21 歌乐株式会社 影像显示装置及影像显示系统
CN109664879A (zh) * 2017-10-12 2019-04-23 丰田自动车株式会社 车辆用显示装置

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5704902B2 (ja) * 2010-11-26 2015-04-22 東芝アルパイン・オートモティブテクノロジー株式会社 運転支援装置及び運転支援方法
JP5699679B2 (ja) * 2011-02-24 2015-04-15 富士通セミコンダクター株式会社 画像処理装置、画像処理システム、及び画像処理方法
CN103782591B (zh) * 2011-08-26 2017-02-15 松下知识产权经营株式会社 驾驶辅助装置
JP6969738B2 (ja) * 2017-07-10 2021-11-24 株式会社Zmp 物体検出装置及び方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002114098A (ja) * 2000-06-30 2002-04-16 Matsushita Electric Ind Co Ltd 描画装置
JP2006253872A (ja) * 2005-03-09 2006-09-21 Toshiba Corp 車両周辺画像表示装置および車両周辺画像表示方法
JP2007027948A (ja) * 2005-07-13 2007-02-01 Nissan Motor Co Ltd 車両周辺監視装置及び車両周辺監視方法
JP2007104373A (ja) * 2005-10-05 2007-04-19 Alpine Electronics Inc 車両用画像表示装置
WO2007087975A2 (fr) * 2006-01-24 2007-08-09 Daimler Ag Procédé permettant d'assembler plusieurs prises de vue en une seule image dans un plan de plongée
JP2008048317A (ja) * 2006-08-21 2008-02-28 Sanyo Electric Co Ltd 画像処理装置並びに視界支援装置及び方法

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8655019B2 (en) 2009-09-24 2014-02-18 Panasonic Corporation Driving support display device
WO2014069471A1 (fr) 2012-10-31 2014-05-08 クラリオン株式会社 Système de traitement d'image et procédé de traitement d'image
JP2014090349A (ja) * 2012-10-31 2014-05-15 Clarion Co Ltd 画像処理システム及び画像処理方法
US9485438B2 (en) 2012-10-31 2016-11-01 Clarion Co., Ltd. Image processing system with image conversion unit that composites overhead view images and image processing method
JP2014183499A (ja) * 2013-03-19 2014-09-29 Sumitomo Heavy Ind Ltd 作業機械用周辺監視装置
EP2902261A1 (fr) * 2014-01-29 2015-08-05 MAN Truck & Bus AG Procédé de représentation imagée d'une zone environnante d'un véhicule automobile équipé d'un système d'affichage de vue d'avion
CN105960800A (zh) * 2014-03-27 2016-09-21 歌乐株式会社 影像显示装置及影像显示系统
EP3125544A4 (fr) * 2014-03-27 2017-09-20 Clarion Co., Ltd. Dispositif et système d'affichage d'image
WO2016142079A1 (fr) * 2015-03-10 2016-09-15 Robert Bosch Gmbh Procédé pour assembler deux images de l'environnement d'un véhicule et dispositif correspondant
CN107408295A (zh) * 2015-03-10 2017-11-28 罗伯特·博世有限公司 用于组合车辆的车辆周围环境的两个图像的方法和相应的设备
CN109664879A (zh) * 2017-10-12 2019-04-23 丰田自动车株式会社 车辆用显示装置
CN109664879B (zh) * 2017-10-12 2022-03-15 丰田自动车株式会社 车辆用显示装置

Also Published As

Publication number Publication date
JP2010250640A (ja) 2010-11-04

Similar Documents

Publication Publication Date Title
WO2010119734A1 (fr) Dispositif de traitement d'image
US9098928B2 (en) Image-processing system and image-processing method
WO2010016340A1 (fr) Appareil d'aide à la manœuvre
JP5444338B2 (ja) 車両周囲監視装置
JP5835383B2 (ja) 情報処理方法、情報処理装置、およびプログラム
JP5835384B2 (ja) 情報処理方法、情報処理装置、およびプログラム
JP2010093605A (ja) 操縦支援装置
JP2006287892A (ja) 運転支援システム
US20100149333A1 (en) Obstacle sensing apparatus
JP6812862B2 (ja) 画像処理システム、撮像装置、画像処理方法及びプログラム
WO2010070920A1 (fr) Dispositif de génération d'image des environs d'un véhicule
JP2010258691A (ja) 操縦支援装置
JP5178454B2 (ja) 車両周囲監視装置及び車両周囲監視方法
WO2010116801A1 (fr) Dispositif de traitement d'image
JP6350695B2 (ja) 装置、方法、およびプログラム
WO2010035628A1 (fr) Dispositif d’assistance à la conduite
JP4972036B2 (ja) 画像処理装置
TWI617195B (zh) 影像擷取裝置及其影像拼接方法
JP5271186B2 (ja) 車両用画像表示装置
KR102177878B1 (ko) 영상 처리 장치 및 방법
JP2016110312A (ja) 画像処理方法、画像処理装置及びプログラム
JP6293089B2 (ja) 後方モニタ
JP6583486B2 (ja) 情報処理方法、情報処理プログラムおよび情報処理装置
JP6128185B2 (ja) 装置、方法、およびプログラム
JP2009077022A (ja) 運転支援システム及び車両

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10764320

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10764320

Country of ref document: EP

Kind code of ref document: A1