JP2007048071A - Image processor and image processing method - Google Patents

Image processor and image processing method

Info

Publication number
JP2007048071A
JP2007048071A (application JP2005232292A)
Authority
JP
Japan
Prior art keywords
dimensional object
image
images
viewpoint
processing apparatus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2005232292A
Other languages
Japanese (ja)
Other versions
JP4736611B2 (en)
Inventor
Teruhisa Takano
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nissan Motor Co Ltd
Original Assignee
Nissan Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nissan Motor Co Ltd filed Critical Nissan Motor Co Ltd
Priority to JP2005232292A priority Critical patent/JP4736611B2/en
Publication of JP2007048071A publication Critical patent/JP2007048071A/en
Application granted granted Critical
Publication of JP4736611B2 publication Critical patent/JP4736611B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

PROBLEM TO BE SOLVED: To prevent a single three-dimensional object from being displayed as a plurality of three-dimensional objects when one image is created by applying viewpoint conversion processing to images captured by a plurality of cameras and compositing them.

SOLUTION: The image processor converts the images captured by the cameras 1 and 2 into viewpoint-converted images seen from a viewpoint different from the viewpoints of the cameras 1 and 2, and composites the converted viewpoint-converted images to create a single composite image. On the composite image, a three-dimensional object that appears to extend from a single start point in different directions is detected and redrawn as a single three-dimensional object.

COPYRIGHT: (C)2007,JPO&INPIT

Description

The present invention relates to an apparatus and a method for creating a single composite image by applying viewpoint conversion processing to images captured by a plurality of imaging means.

Conventionally, an apparatus is known that creates a single image by converting two images captured at different positions to a viewpoint different from the capturing viewpoints and compositing them (see Patent Document 1).

Patent Document 1: JP 2001-114047 A

However, the conventional technique has a problem in that a narrow three-dimensional object, such as a pole, is not displayed properly on the composite image, for example appearing doubled.

(1) An image processing apparatus according to the present invention converts images captured by a plurality of imaging means into viewpoint-converted images seen from a viewpoint different from that of the imaging means, composites the converted viewpoint-converted images to create a single composite image, and, on the composite image, detects a three-dimensional object that appears to extend from a single start point in a plurality of directions and redraws it as a single three-dimensional object.
(2) An image processing method according to the present invention converts the viewpoint of images captured by a plurality of imaging means into images seen from a viewpoint different from the viewpoints of the plurality of imaging means (hereinafter called viewpoint-converted images), creates a single composite image from them, detects on the composite image a three-dimensional object that appears to extend from a single start point in a plurality of directions, and redraws it as a single three-dimensional object.

According to the image processing apparatus and the image processing method of the present invention, a plurality of viewpoint-converted images are composited into one composite image, and a three-dimensional object extending from a single start point in a plurality of directions is detected on the composite image and redrawn as a single three-dimensional object. This prevents one three-dimensional object from being displayed as a plurality of three-dimensional objects on the composite image.

FIG. 1 shows the configuration of an image processing apparatus according to an embodiment. The apparatus is mounted on a vehicle and comprises a camera 1, a camera 2, a processing device 3, and a display 4. The cameras 1 and 2 are, for example, CCD cameras mounted at different positions on the vehicle to capture images of the vehicle's surroundings.

In terms of its internal processing functions, the processing device 3 comprises a viewpoint conversion unit 31, an image composition unit 32, a three-dimensional object detection unit 33, and a three-dimensional object drawing unit 34. The viewpoint conversion unit 31 performs coordinate conversion processing that transforms the images captured by the cameras 1 and 2 into viewpoint-converted images as captured from an arbitrary position. Specifically, it assumes that a flat road surface appears throughout the vehicle-surroundings images and generates a viewpoint-converted image that observes this road surface from an arbitrary position (for example, directly above the vehicle). Since viewpoint conversion is a known technique, a detailed description is omitted here.
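The patent leaves this known step undetailed; the following is a minimal sketch of one common way to realize it, assuming each camera has been calibrated offline so that a 3x3 homography maps its image onto the road plane as seen from directly above the vehicle. The homography matrices and the output size are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def to_birds_eye(image: np.ndarray, H: np.ndarray,
                 out_size: tuple[int, int] = (400, 400)) -> np.ndarray:
    """Warp one camera image onto the assumed flat road plane (top-down view)."""
    return cv2.warpPerspective(image, H, out_size)

# Usage (H1, H2 come from offline calibration of cameras 1 and 2):
# top1 = to_birds_eye(frame1, H1)
# top2 = to_birds_eye(frame2, H2)
```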

The image composition unit 32 composites the two viewpoint-converted images to create a composite image; the two images are combined so that the same object appearing in each image overlaps. The three-dimensional object detection unit 33 detects three-dimensional objects appearing in the composite image created by the image composition unit 32, in particular an object that appears to extend from a single start point in different directions. The three-dimensional object drawing unit 34 redraws the object detected by the three-dimensional object detection unit 33 as a single three-dimensional object on the composite image. The display 4 shows the composite image with the redrawn object.
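A minimal sketch of the composition step, assuming the two top-down views share a common coordinate frame after the warp above so that same-object pixels already coincide; simple averaging in the overlap is an illustrative assumption, since the patent only requires that identical objects be made to overlap.

```python
import numpy as np

def composite(top1: np.ndarray, top2: np.ndarray) -> np.ndarray:
    """Combine two aligned top-down views into a single composite image,
    averaging where both views cover a pixel (assumed blending rule)."""
    summed = top1.astype(np.uint16) + top2.astype(np.uint16)
    return (summed // 2).astype(np.uint8)
```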

FIG. 2 is a flowchart of the processing performed by the image processing apparatus of the embodiment. When the vehicle starts, the processing device 3 begins with step S10, in which the images captured by the cameras 1 and 2 are acquired; the flow then proceeds to step S20. In step S20, the viewpoint conversion unit 31 applies viewpoint conversion processing to the two images acquired in step S10, so that the images captured by the cameras 1 and 2 are converted into images seen from the same viewpoint.

In step S30, following step S20, the image composition unit 32 composites the two images converted by the viewpoint conversion unit 31 to create a composite image; the flow then proceeds to step S40. In step S40, the three-dimensional object detection unit 33 detects a three-dimensional object that appears on the composite image to extend from a single start point in different directions. The three-dimensional object detection unit 33 first compares the intensity (luminance) at corresponding pixels of the viewpoint-converted image from the camera 1 and the viewpoint-converted image from the camera 2. If the intensity difference at a pixel is less than a predetermined value, the pixel is judged to belong to a region showing the road surface; if the difference is equal to or greater than the predetermined value, the pixel is judged to belong to a region showing a three-dimensional object.
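A minimal sketch of this step-S40 classification, assuming the two top-down views are aligned pixel-for-pixel; the threshold value is an illustrative assumption, since the patent only specifies "a predetermined value".

```python
import cv2
import numpy as np

def object_mask(top1: np.ndarray, top2: np.ndarray,
                threshold: int = 30) -> np.ndarray:
    """Return a boolean mask: True where the luminance difference suggests
    a three-dimensional object, False where it suggests flat road surface."""
    g1 = cv2.cvtColor(top1, cv2.COLOR_BGR2GRAY).astype(np.int16)
    g2 = cv2.cvtColor(top2, cv2.COLOR_BGR2GRAY).astype(np.int16)
    # Points on the road plane project to the same spot in both warped
    # views, so their luminance agrees; points above the plane do not.
    return np.abs(g1 - g2) >= threshold
```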

Next, edge detection is performed on the regions judged to contain a three-dimensional object. FIG. 3 shows, on the composite image, the edges of a three-dimensional object 50 as captured by the camera 1 and as captured by the camera 2. Among the edges of the object 50 captured by the camera 1, those extending continuously in one direction are denoted by the vectors a1' and a2' (the primes indicate that a1' and a2' denote vectors). The point at which the vectors a1' and a2' come closest together is taken as their common start point 51; when the two vectors diverge from this start point at an opening angle, the object bounded by the two edges is judged to be a pole-shaped three-dimensional object extending vertically.

Similarly, as shown in FIG. 3, among the edges of the object 50 captured by the camera 2, those extending continuously in one direction are denoted by the vectors b1' and b2' (which likewise denote vectors). The point at which b1' and b2' come closest together is taken as their start point 51; when the two vectors diverge from this start point at an opening angle, the object bounded by the two edges is judged to be a pole-shaped three-dimensional object.
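A hedged sketch of this pole test, assuming the edge pair has already been extracted and expressed as direction vectors measured from the candidate start point 51; the minimum opening angle is an illustrative threshold that the patent does not specify.

```python
import numpy as np

def is_pole(a1: np.ndarray, a2: np.ndarray,
            min_spread_deg: float = 2.0) -> bool:
    """Judge whether two edge vectors sharing a start point fan out,
    i.e. whether they bound a pole-shaped vertical object."""
    cos = np.dot(a1, a2) / (np.linalg.norm(a1) * np.linalg.norm(a2))
    spread = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return spread >= min_spread_deg
```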

Here, if the start point of the vectors a1' and a2' coincides with the start point of the vectors b1' and b2', the object formed by a1' and a2' and the object formed by b1' and b2' are judged to be the same three-dimensional object. In other words, the single object captured by the cameras 1 and 2 appears on the composite image as two separate three-dimensional objects.

When the detection of a three-dimensional object extending from a single start point in different directions is complete in step S40 of the flowchart of FIG. 2, the flow proceeds to step S50. In step S50, the position at which the object detected in step S40 is to be redrawn on the composite image is determined. Here, the region bounded by the vector c1' of equation (1) and the vector c2' of equation (2) is taken as the position of the three-dimensional object.
c1' = (a1' + b1') / 2   (1)
c2' = (a2' + b2') / 2   (2)

In step S60, following step S50, the object detected in step S40 is redrawn on the composite image created in step S30. With the vectors a3', b3', and c3' defined by equations (3) to (5), the region bounded by the vectors c1' and c3' displays the image of the region bounded by a1' and a3' as captured by the camera 1, reduced by the factor (|c3'|/|a3'|). Likewise, the region bounded by the vectors c2' and c3' displays the image of the region bounded by b2' and b3' as captured by the camera 2, reduced by the factor (|c3'|/|b3'|).
a3' = (a1' + a2') / 2   (3)
b3' = (b1' + b2') / 2   (4)
c3' = (c1' + c2') / 2   (5)
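The geometry of steps S50 and S60 reduces to the vector arithmetic of equations (1) to (5); the sketch below encodes it directly, taking the four edge vectors (expressed from the shared start point 51) as inputs. The actual resampling of the pole texture into the target regions is omitted.

```python
import numpy as np

def redraw_geometry(a1, a2, b1, b2):
    """Compute the redraw region and scale factors per equations (1)-(5).
    Inputs are numpy vectors measured from the common start point 51."""
    c1 = (a1 + b1) / 2.0                              # equation (1)
    c2 = (a2 + b2) / 2.0                              # equation (2)
    a3 = (a1 + a2) / 2.0                              # equation (3)
    b3 = (b1 + b2) / 2.0                              # equation (4)
    c3 = (c1 + c2) / 2.0                              # equation (5)
    scale1 = np.linalg.norm(c3) / np.linalg.norm(a3)  # camera-1 reduction
    scale2 = np.linalg.norm(c3) / np.linalg.norm(b3)  # camera-2 reduction
    return c1, c2, c3, scale1, scale2
```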

In this way, when the image of the object 50 is redrawn on the composite image by compositing part of its image captured by the camera 1 with part of its image captured by the camera 2, the sizes of the composited images are determined from the sizes of the object's images captured by the two cameras 1 and 2. Therefore, even when the object 50 appears at different sizes in the cameras 1 and 2, the shape of the three-dimensional object drawn on the composite image is prevented from becoming discontinuous.

Also, on the composite image, the region bounded by the vectors a1' and a2' displays the road-surface image captured by the camera 2, and the region bounded by the vectors b1' and b2' displays the road-surface image captured by the camera 1. Thus, when an object that appeared to extend in different directions is redrawn as a single three-dimensional object, the image regions whose content would otherwise disappear are represented appropriately.

According to the image processing apparatus of the embodiment, the images captured by the cameras 1 and 2 are converted into viewpoint-converted images seen from a viewpoint different from the viewpoints of the cameras 1 and 2, and the converted viewpoint-converted images are composited into a single composite image; on the composite image, a three-dimensional object that appears to extend from a single start point in a plurality of directions is detected and redrawn as a single three-dimensional object. This prevents one three-dimensional object from appearing as a plurality of three-dimensional objects on the composite image.

Further, according to the image processing apparatus of the embodiment, the position at which the three-dimensional object is redrawn on the composite image is determined from the positions of the images obtained when the cameras 1 and 2 capture the same object, so the object can be redrawn at an appropriate position on the composite image.

According to the image processing apparatus of the embodiment, the single three-dimensional object is redrawn by compositing parts of the images obtained when the cameras 1 and 2 each capture the object that appears to extend from one start point in a plurality of directions. The sizes of the images to be composited are determined from the sizes of the images captured by the cameras 1 and 2, which prevents the shape of the object drawn on the composite image from becoming discontinuous when the object appears at different sizes in the different cameras.

Furthermore, according to the image processing apparatus of the embodiment, the regions of the composite image in which the object appeared to extend from one start point in a plurality of directions are redrawn based on the images captured by the cameras 1 and 2, so the regions whose content would disappear when redrawing the single three-dimensional object are represented appropriately.

The present invention is not limited to the embodiment described above. For example, although the embodiment mounts the image processing apparatus on a vehicle and images the vehicle's surroundings with the cameras 1 and 2, the invention can also be applied to uses other than vehicles. The number of cameras is not limited to two; three or more may be provided. In that case, if a plurality of three-dimensional objects extending in different directions from the same start point appear on the composite image, they are redrawn as a single three-dimensional object.

Even when the object 50 is a moving object, it can be redrawn on the composite image. The drawing method for a moving object 50 is explained with reference to FIG. 4, which shows a composite image converted to a viewpoint directly above the vehicle. In FIG. 4, the imaging range of the camera 1 is bounded by the lines 11 and 12, and that of the camera 2 by the lines 21 and 22; the intersection of the boundary lines 11 and 21 is denoted 40.

Let θ2 be the angle between the boundary line 11 and the line 41 connecting the intersection 40 to the object 50, and let θ1 be the angle between the line 41 and the boundary line 21. The vector c3' giving the position at which the object 50 is drawn is then expressed by equation (6).
c3' = (θ2·a3' + θ1·b3') / (θ1 + θ2)   (6)
According to equation (6), c3' = a3' when θ1 = 0, and c3' = b3' when θ2 = 0.
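Equation (6) is an angle-weighted blend of the two cameras' position vectors: as the object crosses from one camera's boundary line toward the other's, the drawn position slides smoothly from a3' to b3'. A minimal sketch follows; computing θ1 and θ2 from image coordinates is left to the caller.

```python
import numpy as np

def blended_position(a3: np.ndarray, b3: np.ndarray,
                     theta1: float, theta2: float) -> np.ndarray:
    """Equation (6): c3' = (θ2·a3' + θ1·b3') / (θ1 + θ2).
    Returns a3 when theta1 == 0 and b3 when theta2 == 0."""
    return (theta2 * a3 + theta1 * b3) / (theta1 + theta2)
```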

Expressing the vector c3' for drawing the position of the moving object by equation (6) makes it possible to represent smoothly the movement of a three-dimensional object near the boundary line 11 of the camera 1's imaging range or the boundary line 21 of the camera 2's imaging range.

In the embodiment described above, a three-dimensional object in the captured images is detected by comparing the intensity at corresponding pixels of the images captured by the cameras 1 and 2, but other methods can also be used, such as comparing the color at corresponding pixels of the two images.

The correspondence between the elements of the claims and the elements of the embodiment is as follows: the cameras 1 and 2 constitute the imaging means, the viewpoint conversion unit 31 the viewpoint conversion means, the image composition unit 32 the image composition means, the three-dimensional object detection unit 33 the three-dimensional object detection means, and the three-dimensional object drawing unit 34 the three-dimensional object drawing means. This description is merely an example; in interpreting the invention, it is in no way limited by this correspondence between the elements of the embodiment and the elements of the invention.

FIG. 1 shows the configuration of the image processing apparatus according to an embodiment. FIG. 2 is a flowchart of the processing performed by the image processing apparatus. FIG. 3 shows the edges detected when a pole-shaped three-dimensional object is imaged by the two cameras. FIG. 4 shows a composite image converted to a viewpoint directly above the vehicle.

Explanation of symbols

1, 2: cameras; 3: processing device; 4: display; 31: viewpoint conversion unit; 32: image composition unit; 33: three-dimensional object detection unit; 34: three-dimensional object drawing unit

Claims (6)

An image processing apparatus comprising:
a plurality of imaging means;
viewpoint conversion means for converting the images captured by the plurality of imaging means into images viewed from a viewpoint different from the viewpoints of the plurality of imaging means (hereinafter called viewpoint-converted images);
image composition means for compositing the plurality of viewpoint-converted images produced by the viewpoint conversion means to create a single composite image;
three-dimensional object detection means for detecting, on the composite image created by the image composition means, a three-dimensional object that appears to extend from a single start point in a plurality of directions; and
three-dimensional object drawing means for redrawing the three-dimensional object detected by the three-dimensional object detection means as a single three-dimensional object.
The image processing apparatus according to claim 1, wherein the three-dimensional object drawing means determines the position at which the three-dimensional object is redrawn based on the position of the three-dimensional object that appears on the composite image to extend in a plurality of directions from a single start point.
The image processing apparatus according to claim 1 or 2, wherein the three-dimensional object drawing means redraws the single three-dimensional object by compositing parts of the images obtained when the plurality of imaging means each capture the three-dimensional object detected by the three-dimensional object detection means.
The image processing apparatus according to claim 3, wherein the three-dimensional object drawing means determines the sizes of the images to be composited when redrawing the single three-dimensional object based on the sizes of the images obtained when the plurality of imaging means each capture the detected three-dimensional object.
The image processing apparatus according to any one of claims 1 to 4, wherein the three-dimensional object drawing means redraws the region of the composite image in which the three-dimensional object appeared to extend in a plurality of directions from the single start point, based on the images captured by the plurality of imaging means.
An image processing method comprising:
converting the images captured by a plurality of imaging means into images viewed from a viewpoint different from the viewpoints of the plurality of imaging means (hereinafter called viewpoint-converted images);
compositing the plurality of viewpoint-converted images to create a single composite image; and
detecting, on the composite image, a three-dimensional object that appears to extend from a single start point in a plurality of directions, and redrawing it as a single three-dimensional object.
JP2005232292A 2005-08-10 2005-08-10 Image processing apparatus and image processing method Active JP4736611B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005232292A JP4736611B2 (en) 2005-08-10 2005-08-10 Image processing apparatus and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2005232292A JP4736611B2 (en) 2005-08-10 2005-08-10 Image processing apparatus and image processing method

Publications (2)

Publication Number Publication Date
JP2007048071A true JP2007048071A (en) 2007-02-22
JP4736611B2 JP4736611B2 (en) 2011-07-27

Family

ID=37850839

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005232292A Active JP4736611B2 (en) 2005-08-10 2005-08-10 Image processing apparatus and image processing method

Country Status (1)

Country Link
JP (1) JP4736611B2 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07250268A (en) * 1994-03-14 1995-09-26 Yazaki Corp Vehicle periphery monitoring device
JP2003169323A (en) * 2001-11-29 2003-06-13 Clarion Co Ltd Vehicle periphery-monitoring apparatus
JP2004235986A (en) * 2003-01-30 2004-08-19 Matsushita Electric Ind Co Ltd Monitoring system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009265783A (en) * 2008-04-23 2009-11-12 Sanyo Electric Co Ltd Driving supporting system and vehicle
JP2010109451A (en) * 2008-10-28 2010-05-13 Panasonic Corp Vehicle surrounding monitoring device, and vehicle surrounding monitoring method
JP2010146082A (en) * 2008-12-16 2010-07-01 Denso Corp Image processor

Also Published As

Publication number Publication date
JP4736611B2 (en) 2011-07-27


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20080625

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20101216

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20101221

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110218

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20110405

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20110418

R150 Certificate of patent or registration of utility model

Ref document number: 4736611

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140513

Year of fee payment: 3