WO2022049926A1 - Image recognition simulator device - Google Patents

Image recognition simulator device

Info

Publication number
WO2022049926A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
distance
virtual object
dimensional space
Prior art date
Application number
PCT/JP2021/027676
Other languages
French (fr)
Japanese (ja)
Inventor
玲 宇田川
崇之 佐藤
健 永崎
Original Assignee
Hitachi Astemo, Ltd. (日立Astemo株式会社)
Priority date
Filing date
Publication date
Application filed by Hitachi Astemo, Ltd.
Priority to DE112021003088.4T (published as DE112021003088T5)
Priority to JP2022546154A (published as JP7373079B2)
Publication of WO2022049926A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Definitions

  • the present invention relates to an image recognition simulator device.
  • This application claims priority based on Japanese Patent Application No. 2020-149907 filed on September 7, 2020, the contents of which are incorporated herein by reference.
  • Patent Document 1 proposes a method of superimposing an image showing a weather disturbance or the like on live-action image data to create a plurality of pseudo driving scene patterns and performing a test.
  • Patent Document 1, however, merely superimposes another image on the live-action image data, which results in a lack of realism. For example, if a CG image of a pedestrian is simply superimposed on a live-action image, the perspective is disturbed and the result looks unnatural.
  • the present invention has been made to solve such a technical problem, and an object of the present invention is to provide an image recognition simulator device capable of creating a composite image having reality.
  • The image recognition simulator device according to the present invention includes: a distance calculation unit that calculates distances by stereo matching based on luminance images captured by at least two cameras and outputs a distance image representing the calculated result; a region division calculation unit that obtains a region-divided image by performing region division on the luminance image; a distance image error exclusion unit that excludes stereo matching errors from the distance image based on the result of the region division by the region division calculation unit; a three-dimensional space generation unit that generates a three-dimensional space based on the distance image from which errors have been excluded by the distance image error exclusion unit; a virtual object installation unit that installs, at an arbitrary position and time, a virtual object recognized as a target of an in-vehicle camera application; a virtual object synthesis unit that synthesizes the virtual object installed by the virtual object installation unit into the three-dimensional space generated by the three-dimensional space generation unit; and an image generation unit that generates luminance images of the two cameras based on the result synthesized by the virtual object synthesis unit.
  • Because the distance image error exclusion unit excludes stereo matching errors from the distance image based on the result of the region division by the region division calculation unit, a virtual object can be combined with the luminance images without being affected by stereo matching errors. Therefore, a composite image having realism can be created.
  • FIG. 1 is a schematic configuration diagram showing an image recognition simulator device according to an embodiment.
  • The image recognition simulator device 1 of the present embodiment simulates a recognition application using natural images, the CG to be synthesized, and its behavior, based on a plurality of time-series images collected by an in-vehicle image collection device and a control signal synchronized with those images.
  • The in-vehicle image collection device includes an image acquisition unit having at least two cameras (here, a stereo camera).
  • The stereo camera is an in-vehicle camera consisting of, for example, a pair of left and right cameras arranged at a predetermined optical-axis spacing (baseline length) so that their optical axes are parallel, and it images the surroundings of the own vehicle.
  • Each of the left and right cameras is composed of an image sensor such as a CMOS sensor and an optical lens.
  • The images captured by this stereo camera are the natural images mentioned above.
  • the image recognition simulator device 1 includes a CG-natural image synthesis unit 10 and an image recognition unit 20.
  • FIG. 2 is a schematic configuration diagram showing a CG-natural image synthesis unit of an image recognition simulator device.
  • The CG-natural image synthesis unit 10 includes a distance calculation unit 11, a region division calculation unit 12, a distance image error exclusion unit 13, a three-dimensional space generation unit 14, a virtual object installation unit 15, a virtual object synthesis unit 16, and an image generation unit 17.
  • The distance calculation unit 11 calculates distances by stereo matching based on the luminance images captured by the stereo camera, and outputs a distance image representing the calculated result. More specifically, the distance calculation unit 11 first calculates distances using stereo matching on the two luminance images captured by the stereo camera. At this time, the distance calculation unit 11 calculates the distance by obtaining the parallax for each pixel, for example by the principle of triangulation. The obtained parallax can be converted into a distance using the specifications of the stereo camera. For example, if the baseline length of the stereo camera is L, the CMOS pixel size is μ, the focal length of the optical lens is V, and the parallax is d, the distance can be calculated as VL/(dμ). The distance calculation unit 11 then outputs the distance image representing this result to the distance image error exclusion unit 13.
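The disparity-to-distance conversion above (distance = VL/(dμ)) can be sketched in a few lines. The camera specifications below are illustrative assumptions, not values from the patent:

```python
def disparity_to_distance(d_px, baseline_m, focal_m, pixel_pitch_m):
    """Convert a disparity in pixels to a distance, Z = V*L / (d*mu)."""
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    return (focal_m * baseline_m) / (d_px * pixel_pitch_m)

# Illustrative stereo-camera specs (assumed, not from the patent):
L = 0.35        # baseline length in meters
V = 6e-3        # focal length in meters (6 mm)
mu = 4.2e-6     # CMOS pixel pitch in meters
print(disparity_to_distance(10.0, L, V, mu))  # 10 px of disparity -> ~50 m
```

Note how smaller disparities map to larger distances, which is why mismatches of even a pixel or two matter most for far-away objects.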
  • As an example of stereo matching, there is a method based on local image information: a window of width W and height H is set centered on the pixel of interest, and the parallax is obtained by computing the similarity of the features inside the window between the left and right images. SAD (Sum of Absolute Difference) can be used as the similarity measure.
  • Note that the parallax obtained for each pixel is not a true value, because it is determined by the window and evaluation function set above. The case where a parallax different from the true parallax is obtained (called a mismatch) is referred to as a distance error.
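The window-based SAD matching described above could look like the following sketch (NumPy-based; the window size, search range, and synthetic images are assumptions for illustration):

```python
import numpy as np

def sad_disparity(right, left, y, x, half_w=2, half_h=2, max_d=16):
    """For the pixel of interest (y, x) in the right image, find the
    disparity d that minimizes the Sum of Absolute Differences over a
    (2*half_h+1) x (2*half_w+1) window against the left image."""
    H, W = right.shape
    # Cast to int32 so subtracting uint8 values cannot wrap around.
    win_r = right[y-half_h:y+half_h+1, x-half_w:x+half_w+1].astype(np.int32)
    best_d, best_sad = 0, np.inf
    for d in range(max_d):
        if x + d + half_w >= W:
            break
        win_l = left[y-half_h:y+half_h+1,
                     x+d-half_w:x+d+half_w+1].astype(np.int32)
        sad = np.abs(win_r - win_l).sum()
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# Synthetic example: the left image is the right image shifted by 3 px.
rng = np.random.default_rng(0)
right = rng.integers(0, 255, size=(20, 40), dtype=np.uint8)
left = np.roll(right, 3, axis=1)   # true disparity = 3
print(sad_disparity(right, left, 10, 15))  # -> 3
```

On textured patches like this random image the minimum is sharp; on repetitive or low-texture regions (the grass case discussed later) several disparities give similar SAD scores, which is exactly how mismatches arise.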
  • the region division calculation unit 12 obtains a region division image by performing region division on the luminance image captured by the stereo camera.
  • Area division refers to dividing an image into areas with similar characteristics such as edges and brightness, and labeling each divided area.
  • At this time, an algorithm based on a Convolutional Neural Network (CNN), for example, can be used.
  • the label here is, for example, a road, a car, a pedestrian, a grassy area, etc., and it is preferable that an ID is set for each.
  • It is preferable that the region division calculation unit 12 divide the near side of the luminance image more finely than the far side. This is because the near side of the image is closer to the own vehicle than the far side, so dividing it finely reduces errors and improves safety.
  • the distance image error exclusion unit 13 excludes the stereo matching error from the distance image output by the distance calculation unit 11 based on the result of the area division by the area division calculation unit 12. As mentioned above, the distance image contains an error. Therefore, the distance image error exclusion unit 13 excludes the stereo matching error from the distance image and the region division image, and outputs the distance image excluding the error to the three-dimensional space generation unit 14.
  • Specifically, the distance image error exclusion unit 13 has an image division unit 131 that divides the image based on the region-divided image produced by the region division calculation unit 12, and a distance acquisition unit 132 that acquires a distance for each image divided by the image division unit 131 and excludes stereo matching errors. The image division unit 131 divides the image based on the region-divided image and assigns the ID specified in the region-divided image to each divided image.
  • the distance acquisition unit 132 acquires a distance distribution based on the ID set in the region-divided image, and further acquires a predetermined distance to exclude stereo matching errors. That is, the distance acquisition unit 132 acquires the distance characterized for each ID of the image, and removes the unnatural distance as an error as compared with the characterized distance.
  • the distance acquisition unit 132 has a mechanism for changing the distance to be acquired depending on the presence or absence of depth.
  • the image ID is assigned as an integer, and the integer and the type are internally associated with each other. For example, it is conceivable that the image ID has a correspondence relationship as shown in Table 1.
  • the distance acquisition unit 132 has a distance acquisition method corresponding to each ID.
  • For example, when ID = 1, the type is road, so the distance acquisition unit 132 applies a distance acquisition method suited to roads.
  • Image collection is premised on the vehicle traveling on a road, so the distance along the road, taking the position of the own vehicle as 0 m, gradually increases toward the vanishing point and becomes maximum at the point at infinity.
  • The distance acquisition unit 132 corrects the distance set for each pixel on this premise. Since the distance is similar along the X-axis direction and attenuates along the Y-axis direction, an approximate curve can be fitted. Correcting the distance in this way excludes the stereo matching errors.
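One plausible reading of the road correction above: over the pixels labeled as road, fit a smooth curve of distance versus image row (Y) and replace pixels that deviate strongly from the fit (treated as mismatches) with the fitted value. The curve family (`deg`) and the deviation threshold are assumptions, not values from the patent:

```python
import numpy as np

def correct_road_distances(dist_img, road_mask, deg=2, thresh=5.0):
    """Fit distance as a polynomial in the image row over road pixels,
    then replace pixels deviating from the fit by more than `thresh`
    (treated as stereo-matching errors) with the fitted value."""
    ys, xs = np.nonzero(road_mask)
    coeffs = np.polyfit(ys, dist_img[ys, xs], deg)
    fitted = np.polyval(coeffs, ys)
    corrected = dist_img.copy()
    bad = np.abs(dist_img[ys, xs] - fitted) > thresh
    corrected[ys[bad], xs[bad]] = fitted[bad]
    return corrected

# Synthetic road: distance shrinks toward the bottom rows (nearer to
# the vehicle), plus one gross mismatch.
H, W = 60, 80
rows = np.arange(H, dtype=float)
dist = np.tile((100.0 - 1.5 * rows)[:, None], (1, W))
mask = np.zeros((H, W), dtype=bool)
mask[20:, :] = True                 # road occupies the lower part
dist[30, 10] = 500.0                # an obvious stereo-matching error
out = correct_road_distances(dist, mask)
print(abs(out[30, 10] - (100.0 - 1.5 * 30)) < 1.0)
```

The single outlier barely perturbs the fit because thousands of well-matched road pixels dominate the least-squares solution, so the corrected pixel lands back on the smooth road surface.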
  • The distance acquisition unit 132 also acquires the temporal changes of the distance image and the region-divided image to obtain the temporal and spatial distance distribution, and smooths the variation of the distance. Smoothing the distance variation in this way excludes stereo matching errors.
  • The temporal or spatial distance distribution may be calculated over the entire image area, projected onto the X-axis or the Y-axis, or weighted in time.
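The smoothing of temporal distance fluctuations could be sketched as follows. The patent does not name a specific filter; a centered median filter is used here as an illustrative choice because it suppresses one-frame mismatch spikes:

```python
import numpy as np

def smooth_region_distance(per_frame_dist, window=5):
    """Replace each frame's representative distance for one region ID
    with the median of a centered window, suppressing single-frame
    spikes caused by stereo-matching errors."""
    half = window // 2
    padded = np.pad(per_frame_dist, half, mode='edge')
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(per_frame_dist))])

# A region receding at ~1 m/frame, with a one-frame mismatch spike.
d = np.array([20., 21., 22., 23., 24., 60., 26., 27., 28., 29.])
print(smooth_region_distance(d))
```

The 60 m spike disappears while the steady recession is preserved, which is the "making the fluctuation of the distance uniform" behavior described above.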
  • When the type is grass, the grass or lawn has an irregular pattern, which the local window-based stereo matching considered in this embodiment handles poorly. A relatively large number of mismatches, or many pixels whose distance cannot be determined, is therefore expected.
  • In this case, the distance of the road, whose correct distance is comparatively easy to estimate, can be estimated and used as the distance of the grass, instead of estimating the distance of the grass itself. Doing so excludes the stereo matching errors. Furthermore, if the region-divided image of the grass is subdivided according to the change in distance on the road surface, the CG composition described later becomes easier.
  • In this way, the distance acquisition unit 132 acquires the corresponding distance for each ID set in the region-divided image and excludes stereo matching errors based on the acquired distances, which improves the accuracy of the exclusion.
  • The three-dimensional space generation unit 14 generates a three-dimensional space based on the distance image from which errors were excluded by the distance image error exclusion unit 13. More specifically, the three-dimensional space generation unit 14 generates the three-dimensional space from that distance image and the luminance image captured by the stereo camera. Since the luminance image is assumed to be obtained by projection, coordinates in the three-dimensional space can be obtained by combining it with the distance image, based on the geometry of the stereo camera used.
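The three-dimensional space generation step can be illustrated by back-projecting each pixel and its distance into camera coordinates. A pinhole model with an assumed focal length in pixels is used here purely for illustration; the patent does not specify the projection details:

```python
import numpy as np

def backproject(dist_img, focal_px, cx, cy):
    """Turn a distance image into an (H, W, 3) array of camera-frame
    3D points, assuming a pinhole camera with focal length in pixels
    and principal point (cx, cy)."""
    H, W = dist_img.shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    Z = dist_img
    X = (xs - cx) * Z / focal_px
    Y = (ys - cy) * Z / focal_px
    return np.stack([X, Y, Z], axis=-1)

# A flat wall 10 m away filling a small 4x6 image, principal point at
# pixel (cx=3, cy=2).
pts = backproject(np.full((4, 6), 10.0), focal_px=500.0, cx=3.0, cy=2.0)
print(pts[2, 3])   # the principal-point pixel maps to (0, 0, 10)
```

Once every pixel carries a 3D point like this, a CG object can be placed at a metric position in the same frame and occlusions fall out of the depth ordering, which is what makes the later composition look natural.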
  • The virtual object installation unit 15 installs, at an arbitrary position and time, a virtual object recognized as a target of an in-vehicle camera application. More specifically, the virtual object installation unit 15 determines the type of CG of the virtual object to be synthesized, and determines the time and position at which the virtual object is installed. At this time, it is preferable that the virtual object installation unit 15 determine the installation position and time based on information on how the virtual object moves and on the travel distance of the own vehicle obtained from the own vehicle's control signal. Doing so yields a composite image closer to a real image, which improves the reliability of the simulation. The virtual object installation unit 15 then installs the virtual object based on the determined result.
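The bookkeeping that ties the installation position to the own vehicle's travel (taken from the control signal) can be sketched as follows; the units (meters per frame) and the purely longitudinal motion are simplifying assumptions for illustration:

```python
def object_positions(start_z, own_travel_per_frame, obj_speed, n_frames):
    """Relative longitudinal position of a virtual object placed
    start_z meters ahead, while the own vehicle advances
    own_travel_per_frame meters per frame and the object moves
    obj_speed meters per frame (positive = away from the vehicle)."""
    positions = []
    z = start_z
    for _ in range(n_frames):
        positions.append(z)
        z += obj_speed - own_travel_per_frame
    return positions

# A pedestrian placed 30 m ahead, standing still, vehicle at 0.5 m/frame.
print(object_positions(30.0, 0.5, 0.0, 5))  # [30.0, 29.5, 29.0, 28.5, 28.0]
```

Feeding these per-frame positions to the synthesis step keeps the CG object consistent with the real camera motion across the time-series images, rather than pasting it at a fixed pixel location.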
  • A virtual object is an object recognized as a target of an in-vehicle camera application; examples include automobiles, pedestrians, and motorcycles. Moreover, since a virtual object is generated synthetically, its speed and size can be set freely.
  • the virtual object synthesis unit 16 synthesizes the virtual object installed by the virtual object installation unit 15 into the three-dimensional space generated by the three-dimensional space generation unit 14. At this time, the virtual object synthesis unit 16 synthesizes the virtual object installed by the virtual object installation unit 15 at a predetermined position in the three-dimensional space.
  • the image generation unit 17 generates a luminance image of the stereo camera based on the result synthesized by the virtual object synthesis unit 16. At this time, the image generation unit 17 generates a brightness image of the left and right cameras obtained by the stereo camera from the three-dimensional space in which the virtual object is synthesized.
  • the image recognition unit 20 recognizes the luminance images of the left and right cameras generated by the image generation unit 17.
  • the distance calculation unit 11 calculates the distance by stereo matching based on the luminance image captured by the stereo camera (see FIG. 3).
  • In FIG. 3, the left figure is a simulated representation of a luminance image acquired by the stereo camera, and the right figure represents the distance image obtained by the calculation of the distance calculation unit 11 in shades of color. The shades in the right figure are set according to the perspective distance of each object.
  • the region division calculation unit 12 divides the luminance image captured by the stereo camera into regions (see FIG. 4). As shown in FIG. 4, as a result of region division on the luminance image acquired by the stereo camera, labels such as "automobile”, “road”, “lawn”, and “other than that" are given.
  • The distance image error exclusion unit 13 excludes the stereo matching errors from the distance image based on the result of the region division by the region division calculation unit 12 (see FIG. 5). Specifically, the distance acquisition unit 132 of the distance image error exclusion unit 13 recalculates the distance for each label based on the labels obtained by the above-mentioned region division, and removes the distance errors (that is, the irregular points in the right figure of FIG. 3). By recalculating the distances from the labeling result of the region division in this way, errors caused by stereo matching can be effectively excluded, and unnatural image composition can be suppressed.
  • The three-dimensional space generation unit 14 then generates a three-dimensional space based on the distance image from which the distance errors have been removed, and the virtual object installation unit 15 installs a virtual object (here, a pedestrian) at an arbitrary position and time.
  • the virtual object synthesizing unit 16 arranges a pedestrian as a virtual object in the three-dimensional space generated by the three-dimensional space generating unit 14 to synthesize CG (see the left figure of FIG. 6).
  • the image generation unit 17 generates a luminance image of the stereo camera based on the CG synthesized by the virtual object synthesis unit 16 (see the right figure of FIG. 6).
  • As described above, in the image recognition simulator device 1, the distance image error exclusion unit 13 excludes the stereo matching errors from the distance image based on the result of the region division by the region division calculation unit 12, so a virtual object can be synthesized into the luminance images without being affected by the stereo matching errors. Therefore, a composite image having realism can be created.
  • 1 Image recognition simulator device
  • 10 CG-natural image synthesis unit
  • 11 Distance calculation unit
  • 12 Region division calculation unit
  • 13 Distance image error exclusion unit
  • 14 Three-dimensional space generation unit
  • 15 Virtual object installation unit
  • 16 Virtual object synthesis unit
  • 17 Image generation unit
  • 20 Image recognition unit
  • 131 Image division unit
  • 132 Distance acquisition unit

Abstract

An image recognition simulator device 1 is provided with a distance calculation unit 11 that calculates a distance through stereo matching on the basis of a brightness image captured by a stereo camera and outputs a distance image, a region division calculation unit 12 that performs region division on the brightness image, a distance image error removal unit 13 that removes an error from the distance image on the basis of the result of the division by the region division calculation unit 12, a three-dimensional space generation unit 14 that generates a three-dimensional space on the basis of the distance image that has been subjected to the removal by the distance image error removal unit 13, a virtual object placement unit 15 that places a virtual object in an arbitrary position and at an arbitrary time, a virtual object combining unit 16 that combines the virtual object placed by the virtual object placement unit 15 with the three-dimensional space generated by the three-dimensional space generation unit 14, and an image generation unit 17 that generates a brightness image for the stereo camera on the basis of the result of the combination by the virtual object combining unit 16.

Description

Image recognition simulator device
The present invention relates to an image recognition simulator device.

This application claims priority based on Japanese Patent Application No. 2020-149907 filed on September 7, 2020, the contents of which are incorporated herein by reference.
Recently, preventive safety systems that mount various sensors on an automobile to detect or avoid danger have been actively tested. If a preventive safety system does not activate as intended when it is needed, there is a risk of an accident, so it must be tested under many assumed cases. However, tests in which a vehicle is actually driven into a dangerous scene to check whether the system activates face limits in terms of safety. For this reason, there is demand for a method of testing using a driving environment and vehicle simulated by CG (Computer Graphics) or the like.

As an example, Patent Document 1 proposes a method of superimposing images showing weather disturbances or the like on live-action image data to create a plurality of pseudo driving-scene patterns for testing.

Patent Document 1: Japanese Unexamined Patent Publication No. 2010-33321

However, the method described in Patent Document 1 merely superimposes another image on the live-action image data, which results in a lack of realism. For example, if a CG image of a pedestrian is simply superimposed on a live-action image, the perspective is disturbed and the result looks unnatural.

The present invention has been made to solve this technical problem, and its object is to provide an image recognition simulator device capable of creating a composite image having realism.
The image recognition simulator device according to the present invention includes: a distance calculation unit that calculates distances by stereo matching based on luminance images captured by at least two cameras and outputs a distance image representing the calculated result; a region division calculation unit that obtains a region-divided image by performing region division on the luminance image; a distance image error exclusion unit that excludes stereo matching errors from the distance image based on the result of the region division by the region division calculation unit; a three-dimensional space generation unit that generates a three-dimensional space based on the distance image from which errors have been excluded by the distance image error exclusion unit; a virtual object installation unit that installs, at an arbitrary position and time, a virtual object recognized as a target of an in-vehicle camera application; a virtual object synthesis unit that synthesizes the virtual object installed by the virtual object installation unit into the three-dimensional space generated by the three-dimensional space generation unit; and an image generation unit that generates luminance images of the two cameras based on the result synthesized by the virtual object synthesis unit.

In the image recognition simulator device according to the present invention, the distance image error exclusion unit excludes stereo matching errors from the distance image based on the result of the region division by the region division calculation unit, so a virtual object can be combined with the luminance images without being affected by stereo matching errors. Therefore, a composite image having realism can be created.

According to the present invention, it is possible to create a composite image having realism.
FIG. 1 is a schematic configuration diagram showing an image recognition simulator device according to an embodiment. FIG. 2 is a schematic configuration diagram showing the CG-natural image synthesis unit of the image recognition simulator device. FIGS. 3 to 6 are schematic diagrams for explaining the operation of the CG-natural image synthesis unit.

Hereinafter, an embodiment of the image recognition simulator device according to the present invention will be described with reference to the drawings.

FIG. 1 is a schematic configuration diagram showing an image recognition simulator device according to an embodiment. The image recognition simulator device 1 of the present embodiment simulates a recognition application using natural images, the CG to be synthesized, and its behavior, based on a plurality of time-series images collected by an in-vehicle image collection device and a control signal synchronized with those images. Although not shown, the in-vehicle image collection device includes an image acquisition unit having at least two cameras (here, a stereo camera).

The stereo camera is an in-vehicle camera consisting of, for example, a pair of left and right cameras arranged at a predetermined optical-axis spacing (baseline length) so that their optical axes are parallel, and it images the surroundings of the own vehicle. Each of the left and right cameras is composed of an image sensor such as a CMOS sensor and an optical lens. The images captured by this stereo camera are the natural images mentioned above.

As shown in FIG. 1, the image recognition simulator device 1 includes a CG-natural image synthesis unit 10 and an image recognition unit 20.

FIG. 2 is a schematic configuration diagram showing the CG-natural image synthesis unit of the image recognition simulator device. As shown in FIG. 2, the CG-natural image synthesis unit 10 includes a distance calculation unit 11, a region division calculation unit 12, a distance image error exclusion unit 13, a three-dimensional space generation unit 14, a virtual object installation unit 15, a virtual object synthesis unit 16, and an image generation unit 17.
The distance calculation unit 11 calculates distances by stereo matching based on the luminance images captured by the stereo camera, and outputs a distance image representing the calculated result. More specifically, the distance calculation unit 11 first calculates distances using stereo matching on the two luminance images captured by the stereo camera. At this time, the distance calculation unit 11 calculates the distance by obtaining the parallax for each pixel, for example by the principle of triangulation. The obtained parallax can be converted into a distance using the specifications of the stereo camera. For example, if the baseline length of the stereo camera is L, the CMOS pixel size is μ, the focal length of the optical lens is V, and the parallax is d, the distance can be calculated as VL/(dμ). The distance calculation unit 11 then outputs the distance image representing this result to the distance image error exclusion unit 13.

As an example of stereo matching, there is a method based on local image information. In this method, a window is set around the pixel of interest, and the parallax is obtained by computing the similarity of the features inside the window between the left and right images. Here, the window has width W and height H and is set centered on the pixel of interest. SAD (Sum of Absolute Difference) can be used to compute the similarity.

Let the coordinates and luminance of the right camera image be p_R = (x, y)^T and I(p_R), those of the left camera image be p_L = [x, y]^T and I(p_L), the parallax be D = [d, 0]^T, and the offset within the window be s = [w, h]^T. Then the similarity R(D) that yields the parallax D = [d, 0]^T can be obtained by the following equation (1).
  R(D) = Σ_s | I(p_R + s) − I(p_L + s + D) |    …(1)

  (the sum runs over all window offsets s = [w, h]^T within the W × H window)
 また、画素ごとに得られる視差は上述で設定した窓と評価関数によって決定されるため、真値ではないことを注意する必要がある。本来の視差と異なる視差が得られる(誤マッチと呼ぶ)場合を距離誤差があると呼ぶ。 Also, it should be noted that the parallax obtained for each pixel is not a true value because it is determined by the window and evaluation function set above. When a parallax different from the original parallax is obtained (called a mismatch), it is called a distance error.
 領域分割計算部12は、ステレオカメラにより撮像された輝度画像に対し領域分割を行うことにより領域分割画像を得る。領域分割とは、エッジや輝度など特性が似通った領域ごとに画像を分割し、その分割した領域ごとにラベル付けを行うことを指す。このとき、例えばConvolutional Neural Network(CNN)を応用したアルゴリズムが用いられる。なお、ここでのラベルは、例えば道路、自動車、歩行者、草むらなどであり、それぞれIDが設定されるのが好ましい。 The region division calculation unit 12 obtains a region division image by performing region division on the luminance image captured by the stereo camera. Area division refers to dividing an image into areas with similar characteristics such as edges and brightness, and labeling each divided area. At this time, for example, an algorithm applying Convolutional Neural Network (CNN) is used. The label here is, for example, a road, a car, a pedestrian, a grassy area, etc., and it is preferable that an ID is set for each.
 また、このとき、領域分割計算部12は、上述の輝度画像に対して奥側よりも手前側を細かく領域分割することが好ましい。これは、画像に対して奥側よりも手前側の方が自車両に近いので、細かく分割することで誤差などを少なくし、安全性を高めることができるからである。 Further, at this time, it is preferable that the region division calculation unit 12 finely divides the front side of the above-mentioned luminance image into the front side rather than the back side. This is because the front side of the image is closer to the own vehicle than the back side, so it is possible to reduce errors and improve safety by finely dividing the image.
 距離画像誤差除外部13は、領域分割計算部12により領域分割された結果に基づいて、距離計算部11により出力された距離画像からステレオマッチングの誤差を除外する。上述したように、距離画像には誤差が含まれている。このため、距離画像誤差除外部13は、距離画像と領域分割画像からステレオマッチングの誤差を除外し、誤差を除外した距離画像を3次元空間生成部14に出力する。 The distance image error exclusion unit 13 excludes the stereo matching error from the distance image output by the distance calculation unit 11 based on the result of the area division by the area division calculation unit 12. As mentioned above, the distance image contains an error. Therefore, the distance image error exclusion unit 13 excludes the stereo matching error from the distance image and the region division image, and outputs the distance image excluding the error to the three-dimensional space generation unit 14.
Specifically, the distance image error exclusion unit 13 includes an image division unit 131 that divides an image based on the region-divided image produced by the region division calculation unit 12, and a distance acquisition unit 132 that acquires a distance for each image divided by the image division unit 131 and excludes stereo matching errors. The image division unit 131 divides the image based on the region-divided image and assigns to each divided image the ID specified in the region-divided image.
The distance acquisition unit 132 acquires a distance distribution based on the IDs assigned to the region-divided image, then acquires a predetermined distance and excludes stereo matching errors. That is, the distance acquisition unit 132 acquires the distance that characterizes each image ID, and removes as errors any distances that are implausible compared with that characteristic distance.
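One way this per-ID exclusion could work is sketched below, under assumptions the patent does not specify: the median is used as the characteristic distance of a region, and a relative deviation threshold decides which pixels count as implausible:

```python
import numpy as np

def exclude_errors_by_id(distance, labels, target_id, tol=0.3):
    """Gather the distance distribution inside one labeled region, take its
    median as the characteristic distance, and mark pixels whose distance
    deviates by more than tol * median (an assumed threshold) as mismatches
    by setting them to NaN."""
    out = distance.astype(float).copy()
    mask = labels == target_id
    med = np.median(out[mask])
    bad = mask & (np.abs(out - med) > tol * med)
    out[bad] = np.nan
    return out
```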
Specifically, the distance acquisition unit 132 has a mechanism that changes the distance to be acquired depending on the presence or absence of depth. Each image ID is assigned as an integer, and the integer is internally associated with a type; for example, the correspondence may be as shown in Table 1. The distance acquisition unit 132 has a distance acquisition method corresponding to each ID.
Table 1
ID 1: Road
ID 2: Automobile
ID 3: Pedestrian
ID 4: Two-wheeled vehicle
ID 5: Grass
For example, when ID = 1, the type is road, so the distance acquisition unit 132 applies a distance acquisition method suited to roads. Image collection is premised on the vehicle travelling on a road, so the road distance, taking the own vehicle's position as 0 m, increases gradually toward the vanishing point and reaches its maximum at the point at infinity. On this premise, the distance acquisition unit 132 corrects the distance set for each pixel. Since the distance is similar along the X-axis direction and attenuates along the Y-axis direction, an approximation curve can be drawn. Correcting the distances in this way excludes the stereo matching errors.
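The row-wise approximation curve for the road could, for instance, be realized as a polynomial fit of distance against image row over the road-labeled pixels. The flat-road premise comes from the text; the choice of a quadratic fit is an assumption for this sketch:

```python
import numpy as np

def correct_road_distance(distance, road_mask):
    """On a flat road the distance varies smoothly with the image row
    (far near the vanishing point, near at the bottom), so fit a low-order
    polynomial of distance against row index over the road pixels and
    replace each road pixel's distance with the fitted value, which
    suppresses mismatched outliers."""
    ys, xs = np.nonzero(road_mask)
    coeffs = np.polyfit(ys, distance[ys, xs], deg=2)
    out = distance.astype(float).copy()
    out[ys, xs] = np.polyval(coeffs, ys)
    return out
```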
When ID = 2, 3, or 4, the types are automobile (for example, a preceding or oncoming vehicle), pedestrian, and two-wheeled vehicle, respectively. Unlike the road of ID = 1, automobiles, pedestrians, and two-wheeled vehicles move by themselves. Their sizes and orientations also vary, so the distances within a region-divided image can be expected to take various values. Since the purpose of the distance acquisition unit 132 is to acquire a distance suitable for composition, it does not necessarily need to acquire the exactly correct distance. The distance acquisition unit 132 then acquires the temporal change of the distance image and the region-divided image, thereby obtaining the temporal and spatial distance distributions and equalizing the distance fluctuations. Equalizing the fluctuations in this way excludes the stereo matching errors. The temporal or spatial distance distribution may be computed over the entire image region, projected onto the X-axis or Y-axis, or weighted in time.
When ID = 5, the type is grass. Grass and lawns have irregular patterns, a case that the local, window-based stereo matching considered in this embodiment handles poorly. Mismatches are therefore expected to be relatively frequent, and in many cases the distance cannot be determined at all. In such cases, instead of the grass itself, the distance of the road, for which the correct distance is comparatively easy to estimate, can be estimated and used as the grass distance. Doing so excludes the stereo matching errors. In that case, dividing a single region-divided image according to the change in distance along the road surface makes the CG composition described later easier to perform.
In this embodiment, the distance acquisition unit 132 acquires the distance corresponding to each ID assigned to the region-divided image and excludes stereo matching errors based on the acquired distances, so the accuracy of the error exclusion can be improved.
The three-dimensional space generation unit 14 generates a three-dimensional space based on the distance image from which the errors were excluded by the distance image error exclusion unit 13. More specifically, the three-dimensional space generation unit 14 generates the three-dimensional space from that distance image and the luminance image captured by the stereo camera. Since the luminance image is assumed to be obtained by orthographic projection, the coordinates of the three-dimensional space can be obtained by combining it with the distance image based on the use of the stereo camera.
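A minimal sketch of how pixel coordinates plus a cleaned distance image can yield three-dimensional coordinates. The patent does not spell out the projection equations; a standard pinhole camera model with calibrated focal lengths and principal point is assumed here:

```python
import numpy as np

def backproject(distance, fx, fy, cx, cy):
    """Back-project a distance (depth) image into a 3D point cloud with an
    assumed pinhole model: a pixel (u, v) with depth Z maps to
    X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy. The intrinsics would
    come from the stereo camera's calibration."""
    h, w = distance.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    Z = distance
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.stack([X, Y, Z], axis=-1)   # (H, W, 3) point per pixel
```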
The virtual object installation unit 15 places a virtual object, recognized as a target by the application of the in-vehicle camera, at an arbitrary position and time. More specifically, the virtual object installation unit 15 determines the CG type of the virtual object to be composited, and determines the time and position at which it is placed. At this time, the virtual object installation unit 15 preferably determines the placement position and time based on information on how the virtual object is to move and on the own vehicle's travel distance obtained from the own vehicle's control signals. This yields a composite image closer to a real image and has the effect of improving the reliability of the simulation. The virtual object installation unit 15 then places the virtual object according to the determined result.
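How the two inputs might combine can be sketched as follows. All names here are hypothetical; the patent states only that the object's motion information and the ego vehicle's travel distance are both used to determine the placement position and time:

```python
def object_position_in_ego_frame(obj_path, ego_distance_travelled, t):
    """obj_path(t) -> (longitudinal, lateral) world position of the
    scripted virtual object; ego_distance_travelled(t) -> metres the ego
    vehicle has covered (from its control signals). Subtracting the ego
    travel from the object's longitudinal position gives the placement
    position relative to the camera at time t."""
    ox, oy = obj_path(t)
    return ox - ego_distance_travelled(t), oy
```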
A virtual object is an object recognized as a target by the application of the in-vehicle camera, such as an automobile, a pedestrian, or a two-wheeled vehicle. Since a virtual object is generated synthetically, its speed and size can be set freely.
The virtual object composition unit 16 composites the virtual object placed by the virtual object installation unit 15 into the three-dimensional space generated by the three-dimensional space generation unit 14. At this time, the virtual object composition unit 16 composites the virtual object at the predetermined position in the three-dimensional space.
The image generation unit 17 generates the luminance images of the stereo camera based on the result composited by the virtual object composition unit 16. That is, the image generation unit 17 generates, from the three-dimensional space containing the composited virtual object, the left and right camera luminance images that the stereo camera would obtain.
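The geometry of producing left and right views from the composited scene can be sketched under the assumption of a rectified pinhole pair (the left camera at the origin, the right camera offset by the baseline along X). A full renderer would also resample the luminance; only the coordinate mapping is shown:

```python
import numpy as np

def project_stereo(points, fx, cx, cy, baseline):
    """points: (..., 3) array of 3D scene points. A point (X, Y, Z) lands
    at uL = fx*X/Z + cx in the left image and uR = fx*(X - baseline)/Z + cx
    in the right image (same row v in both, assuming fy == fx for brevity),
    so the induced disparity is fx * baseline / Z."""
    X, Y, Z = points[..., 0], points[..., 1], points[..., 2]
    uL = fx * X / Z + cx
    uR = fx * (X - baseline) / Z + cx
    v = fx * Y / Z + cy
    return (uL, v), (uR, v)
```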
Meanwhile, the image recognition unit 20 recognizes the left and right camera luminance images generated by the image generation unit 17.
The operation of the CG-natural image composition unit 10 will now be described with reference to FIGS. 3 to 6.
First, the distance calculation unit 11 calculates distances by stereo matching based on the luminance images captured by the stereo camera (see FIG. 3). In FIG. 3, the left part is a simulated representation of a luminance image acquired by the stereo camera, and the right part represents, in shades of color, the distance image obtained by the calculation of the distance calculation unit 11. The shades shown on the right are set for each object according to its distance.
The distance image on the right of FIG. 3 also contains a number of irregular points. These irregular points represent distance errors caused by stereo matching errors. If such distance errors remain in the distance image, the depth ordering with respect to objects changes when CG is composited, producing an unnatural image. For example, when a virtual object is composited against a distance image containing these errors, part of the lawn that should lie behind the virtual object may appear to be in front of it, partially occluding the virtual object and yielding an unnatural image.
Next, the region division calculation unit 12 performs region division on the luminance image captured by the stereo camera (see FIG. 4). As shown in FIG. 4, the region division of the luminance image yields labels such as "automobile", "road", "lawn", and "other".
The distance image error exclusion unit 13 then excludes stereo matching errors from the distance image based on the result of the region division performed by the region division calculation unit 12 (see FIG. 5). Specifically, the distance acquisition unit 132 of the distance image error exclusion unit 13 recalculates the distance for each label obtained by the region division described above, and removes the distance errors (that is, the irregular points in the right part of FIG. 3). Recalculating the distances from the labels obtained by region division in this way effectively excludes the errors produced by stereo matching and suppresses unnatural image composition.
Next, the three-dimensional space generation unit 14 generates a three-dimensional space based on the distance image from which the distance errors have been removed, and the virtual object installation unit 15 places a virtual object (here, a pedestrian) at an arbitrary position and time.
The virtual object composition unit 16 then places the pedestrian serving as the virtual object into the three-dimensional space generated by the three-dimensional space generation unit 14 and composites the CG (see the left part of FIG. 6). The image generation unit 17 generates the stereo camera's luminance images based on the CG composited by the virtual object composition unit 16 (see the right part of FIG. 6).
In the image recognition simulator device 1 of this embodiment, the distance image error exclusion unit 13 excludes stereo matching errors from the distance image based on the result of the region division performed by the region division calculation unit 12, so a virtual object can be composited into the luminance image without being affected by stereo matching errors. A realistic composite image can therefore be created.
Moreover, since natural images captured by the stereo camera are used, the realism can be further increased and the reliability of the simulation improved compared with creating everything in CG. Furthermore, since CG images of automobiles, pedestrians, two-wheeled vehicles, and the like (that is, virtual objects) are composited onto natural images, image variations can easily be increased.
Although embodiments of the present invention have been described in detail above, the present invention is not limited to the above-described embodiments, and various design changes can be made without departing from the spirit of the present invention as set forth in the claims.
1  Image recognition simulator device
10  CG-natural image composition unit
11  Distance calculation unit
12  Region division calculation unit
13  Distance image error exclusion unit
14  Three-dimensional space generation unit
15  Virtual object installation unit
16  Virtual object composition unit
17  Image generation unit
20  Image recognition unit
131  Image division unit
132  Distance acquisition unit

Claims (4)

  1.  An image recognition simulator device comprising:
      a distance calculation unit that calculates distances by stereo matching based on luminance images captured by at least two cameras, and outputs a distance image representing the calculation result as an image;
      a region division calculation unit that obtains a region-divided image by performing region division on the luminance image;
      a distance image error exclusion unit that excludes stereo matching errors from the distance image based on the result of the region division performed by the region division calculation unit;
      a three-dimensional space generation unit that generates a three-dimensional space based on the distance image from which the errors have been excluded by the distance image error exclusion unit;
      a virtual object installation unit that places, at an arbitrary position and time, a virtual object recognized as a target by an application of an in-vehicle camera;
      a virtual object composition unit that composites the virtual object placed by the virtual object installation unit into the three-dimensional space generated by the three-dimensional space generation unit; and
      an image generation unit that generates luminance images of the two cameras based on the result composited by the virtual object composition unit.
  2.  The image recognition simulator device according to claim 1, wherein the region division calculation unit divides the near side of the luminance image into finer regions than the far side.
  3.  The image recognition simulator device according to claim 1 or 2, wherein the distance image error exclusion unit includes an image division unit that divides an image based on the region-divided image produced by the region division calculation unit, and a distance acquisition unit that acquires a distance for each image divided by the image division unit and excludes stereo matching errors.
  4.  The image recognition simulator device according to any one of claims 1 to 3, wherein the virtual object installation unit determines a placement position and time of the virtual object based on information on the motion of the virtual object and a travel distance of a vehicle, and places the virtual object based on the determined result.
PCT/JP2021/027676 2020-09-07 2021-07-27 Image recognition simulator device WO2022049926A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DE112021003088.4T DE112021003088T5 (en) 2020-09-07 2021-07-27 IMAGE RECOGNITION SIMULATOR DEVICE
JP2022546154A JP7373079B2 (en) 2020-09-07 2021-07-27 Image recognition simulator device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020149907 2020-09-07
JP2020-149907 2020-09-07

Publications (1)

Publication Number Publication Date
WO2022049926A1 true WO2022049926A1 (en) 2022-03-10

Family

ID=80491959

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/027676 WO2022049926A1 (en) 2020-09-07 2021-07-27 Image recognition simulator device

Country Status (3)

Country Link
JP (1) JP7373079B2 (en)
DE (1) DE112021003088T5 (en)
WO (1) WO2022049926A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012079251A (en) * 2010-10-06 2012-04-19 Konica Minolta Holdings Inc Image processing apparatus and image processing system
JP2014098986A (en) * 2012-11-13 2014-05-29 Hitachi Advanced Digital Inc Image recognition function verification device and processing method of the same
JP2018060511A (en) * 2016-10-06 2018-04-12 株式会社アドバンスド・データ・コントロールズ Simulation system, simulation program, and simulation method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010033321A (en) 2008-07-29 2010-02-12 Mitsubishi Heavy Ind Ltd Evaluation system for image processing algorithm
JP7114516B2 (en) 2019-03-14 2022-08-08 日本製鉄株式会社 Metal materials for separators, fuel cell separators and fuel cells

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012079251A (en) * 2010-10-06 2012-04-19 Konica Minolta Holdings Inc Image processing apparatus and image processing system
JP2014098986A (en) * 2012-11-13 2014-05-29 Hitachi Advanced Digital Inc Image recognition function verification device and processing method of the same
JP2018060511A (en) * 2016-10-06 2018-04-12 株式会社アドバンスド・データ・コントロールズ Simulation system, simulation program, and simulation method
JP2018060512A (en) * 2016-10-06 2018-04-12 株式会社アドバンスド・データ・コントロールズ Image generating system, program and method, and simulation system, program and method

Also Published As

Publication number Publication date
JPWO2022049926A1 (en) 2022-03-10
DE112021003088T5 (en) 2023-04-27
JP7373079B2 (en) 2023-11-01

Similar Documents

Publication Publication Date Title
CN107472135B (en) Image generation device, image generation method, and recording medium
US20200356790A1 (en) Vehicle image verification
Abdi et al. In-vehicle augmented reality traffic information system: a new type of communication between driver and vehicle
JP2021508027A (en) Systems and methods for positioning vehicles under poor lighting conditions
KR20190102665A (en) Calibration system and method using real-world object information
JP5011049B2 (en) Image processing system
CN105551020B (en) A kind of method and device detecting object size
JP2014138420A (en) Depth sensing method and system for autonomous vehicle
CN103123687A (en) Fast obstacle detection
US11887336B2 (en) Method for estimating a relative position of an object in the surroundings of a vehicle and electronic control unit for a vehicle and vehicle
JP2013190421A (en) Method for improving detection of traffic-object position in vehicle
JP5146330B2 (en) Vehicle road sign recognition device
JP6471522B2 (en) Camera parameter adjustment device
JP2011100174A (en) Apparatus and method for detecting vehicle on lane
WO2018134897A1 (en) Position and posture detection device, ar display device, position and posture detection method, and ar display method
JP6493000B2 (en) Road marking detection device and road marking detection method
WO2022049926A1 (en) Image recognition simulator device
JP2023184572A (en) Electronic apparatus, movable body, imaging apparatus, and control method for electronic apparatus, program, and storage medium
WO2010113253A1 (en) Three-dimensional information display device and three-dimensional information display method
JP7229032B2 (en) External object detection device
Roessing et al. Intuitive visualization of vehicle distance, velocity and risk potential in rear-view camera applications
JP2017211791A (en) Image processing device, imaging device, mobile body equipment control system, image processing method, and program
KR102002228B1 (en) Apparatus and Method for Detecting Moving Object
JP7261006B2 (en) External environment recognition device
JP2006157728A (en) System, method, and program for processing image information, and automobile

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21863984

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022546154

Country of ref document: JP

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 21863984

Country of ref document: EP

Kind code of ref document: A1