WO2011113447A1 - Method for camera mounting in a vehicle - Google Patents

Method for camera mounting in a vehicle

Info

Publication number
WO2011113447A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
vehicle
virtual
pixel
imager
Prior art date
Application number
PCT/EP2010/001713
Other languages
English (en)
Inventor
Juergen Wille
Derek Savage
Patrick Eoghan Denny
Original Assignee
Valeo Schalter Und Sensoren Gmbh
Connaught Electronics Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Valeo Schalter Und Sensoren Gmbh and Connaught Electronics Ltd.
Priority to EP10719240A priority Critical patent/EP2548178A1/fr
Priority to PCT/EP2010/001713 priority patent/WO2011113447A1/fr
Publication of WO2011113447A1 publication Critical patent/WO2011113447A1/fr


Classifications

    • G06T5/80
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04 Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B37/00 Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B37/04 Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe, with cameras or projectors providing touching or overlapping fields of view
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates in general to the field of driving assistance solutions supporting drivers of a vehicle.
  • driving assistance solutions comprise vision systems using at least one camera.
  • the present invention relates to a method for camera mounting in a vehicle.
  • the present invention relates to a data processing program and a computer program product for performing the method for camera mounting in a vehicle.

Description of the Related Art
  • Driving assistance solutions support drivers in steering their car.
  • Cameras integrated in bumpers and mirrors provide information about obstacle distances to avoid collisions.
  • Each camera covers a Field of View (FoV) of up to 190° and all cameras together are optimized to give the driver a 360° view around the car.
  • FoV: Field of View
  • the cameras deliver raw view images that show all the optical distortion of fish-eye lenses. Through mathematical processing, corrected undistorted images are calculated. With the help of ECU algorithms, the corrected images are stitched together to visualize the Bird-Eye-View on a display, enabling the driver to determine obstacles near the car.
  • a vision field analysis device for a vehicle is disclosed.
  • a vehicle model and an image acquisition file set in a three-dimensional CAD (Computer Aided Design)
  • a light source model disposed inside a driver's cab of the vehicle model, a camera model for photographing the light source model, disposed outside the vehicle model, and a plurality of photographing positions of the camera model are set, and a plurality of images photographed by the camera model in each position of the plurality of photographing positions are acquired by use of the three-dimensional CAD.
  • In the Patent Abstracts of Japan JP 2009064372 A, a monitor image simulation device, a monitor image simulation method, and a monitor image simulation program are disclosed, providing means for deciding whether a camera is correctly arranged or not before actually mounting the camera on a camera mounting object such as a vehicle.
  • the disclosed monitor image simulation device is provided with a processing part, an input part, and a display.
  • the processing part is provided with a data acquisition means for acquiring camera arrangement data showing a camera arrangement state, camera peripheral component data including the shapes and positions of components to be arranged in the peripheral part of the camera, and monitor data showing a monitor display state; a virtual projection plane setting means for setting a first virtual projection plane and a second virtual projection plane for projecting components on a virtual three-dimensional space on a virtual two-dimensional plane; and a monitor display image generation means for generating a monitor display image on the basis of the camera arrangement data, the camera peripheral component data, the monitor data and the virtual projection planes.
  • In the Patent Abstracts of Japan JP 2009064373 A, a monitor image simulation device, a monitor image simulation method, and a monitor image simulation program are disclosed, providing means for effectively deciding whether a camera is correctly arranged or not before actually mounting the camera on a camera loading object such as a vehicle, even when rotating an image to be displayed by a monitor with respect to an image generated by a camera.
  • the disclosed monitor image simulation device is provided with a processing part, and a display.
  • the processing part is provided with a data acquisition means for acquiring camera arrangement data showing a camera arrangement state and camera peripheral component data including the shapes and positions of components arranged in the peripheral part of the camera, a virtual projection plane setting means for setting a virtual projection plane for projecting components on a virtual three-dimensional space on a virtual two-dimensional plane, a camera image generation means for generating a camera image on the basis of the camera arrangement data, the camera peripheral component data, and the virtual projection plane; and an enlarged/reduced image rotating means for rotating an enlarged/reduced image generated from the camera image.
  • the technical problem underlying the present invention is to provide a method for camera mounting in a vehicle, which is able to save time and money for mounting at least one camera in a vehicle, and to provide a data processing program and a computer program product to perform the method for camera mounting in a vehicle.
  • this problem is solved by providing a method for camera mounting in a vehicle having the features of claim 1, a data processing program for performing the method for camera mounting in a vehicle having the features of claim 16, and a computer program product causing a computer to perform the method for camera mounting in a vehicle having the features of claim 17.
  • Advantageous embodiments of the present invention are mentioned in the subclaims.
  • a method for camera mounting in a vehicle comprises defining at least one potential camera position in a three dimensional computer aided design model of the vehicle; defining at least one virtual camera by assigning optical and/or imager sensor parameters; placing the defined virtual camera at the at least one potential camera position; calculating raw view image data of the at least one virtual camera; simulating a virtual projection grid based on the raw view image data, wherein at least one imager pixel is retraced as a projection pixel through the projection grid during simulation; and overlaying the virtual projected grid on a three dimensional environment of the vehicle seen from the virtual camera at a corresponding camera position.
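The claimed sequence of steps can be sketched in code. This is a minimal illustration only, not the patented implementation; the function names, the equidistant fisheye lens model, and the yaw-only pose are assumptions made for the example:

```python
import numpy as np

# Minimal sketch of the claimed steps; all names and models are
# illustrative assumptions, not the patented implementation.

def define_virtual_camera(focal_px, width, height):
    """Step 2: a virtual camera is defined by optical/imager parameters."""
    return {"f": focal_px, "w": width, "h": height}

def place_camera(cam, position, yaw_deg):
    """Step 3: place the camera at a potential position from the CAD model
    (here only a yaw rotation about the vertical axis, for brevity)."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    placed = dict(cam)
    placed["t"] = np.asarray(position, dtype=float)
    placed["R"] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return placed

def pixel_ray(cam, u, v):
    """Steps 4/5: retrace an imager pixel as a ray through the lens
    (equidistant fisheye model: off-axis angle = radius / focal length)."""
    x, y = u - cam["w"] / 2.0, v - cam["h"] / 2.0
    theta = np.hypot(x, y) / cam["f"]
    phi = np.arctan2(y, x)
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])              # unit ray in the camera frame
    return cam["R"] @ d                        # ray in vehicle coordinates
```

The per-pixel rays produced this way are the raw material for both the raw view image data and the projected grid described in the following paragraphs.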
  • Embodiments of the present invention save time and money due to performing simulation during performing the method for camera mounting in a vehicle.
  • virtual cameras have to be defined and positioned inside the vehicle CAD model.
  • the virtual cameras have to be defined through optical and imager sensor parameters.
  • Through simulation it is possible to retrace imager pixels through a projected grid that is overlaid on the environment seen from the virtual camera. This procedure is also called "pixel-grid-projection".
  • the pixel-grid projection of each camera can be used to optimize field of view (FoV) overlapping areas.
  • the overlapping area information can be presented to the vehicle manufacturer to optimize camera positions and to demonstrate the vehicle geometry impact.
  • Another benefit of the camera mounting simulation is to get earlier results to calculate a "Bird-Eye-View".
  • the corrected images are stitched together.
  • Stitching algorithms can be created and presented to the vehicle manufacturer based on simulated camera raw views.
  • a potential customer and/or development teams can see the implications of a camera design in terms of tolerances, autoexposure control, auto-white balance, lens properties (e.g., MTF, large angle effects), overlap of more than one camera, pixel size representation on the ground and its implications for pixel mosaicing, topographic changes and their implication for the introduction of three dimensional effects, representations of three dimensional object displacements in terms of two dimensional aliases, and so forth.
  • the effects of tolerances on vehicle and/or structural vignetting can be assessed.
  • such a tolerance analysis can be used by a vision system itself in determining the relative reliability of object detection by multi camera and/or stereo camera applications.
  • a simulated ray from at least one pixel on the camera imager is projected through the camera lens into at least one object space of the camera lens.
  • a simulated ray from a region of interest on the camera imager comprising a certain number of pixels is projected through the camera lens into the object space of the camera lens.
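The ray projection described in the two preceding items can be sketched as follows; this is an illustrative model under assumed conventions (equidistant fisheye lens, camera looking along +x and tilted down, flat ground at z = 0), not the patented algorithm:

```python
import numpy as np

def pixel_to_ground(u, v, f_px, cx, cy, cam_height, tilt_deg):
    """Trace one imager pixel through an equidistant-fisheye lens into
    object space and intersect the ray with flat ground (z = 0).
    Camera at (0, 0, cam_height), looking along +x, tilted down by tilt_deg.
    A sketch under assumed lens/pose models, not the patented algorithm."""
    # pixel -> unit ray in the camera frame (z = optical axis)
    x, y = u - cx, v - cy
    theta = np.hypot(x, y) / f_px          # equidistant model: r = f * theta
    phi = np.arctan2(y, x)
    dx = np.sin(theta) * np.cos(phi)
    dy = np.sin(theta) * np.sin(phi)
    dz = np.cos(theta)
    # camera frame -> world frame (x forward, z up), right-handed
    t = np.radians(tilt_deg)
    wx = -np.sin(t) * dy + np.cos(t) * dz
    wy = -dx
    wz = -np.cos(t) * dy - np.sin(t) * dz
    if wz >= -1e-9:
        return None                        # ray never reaches the ground
    s = cam_height / -wz                   # ray length to the ground plane
    return (s * wx, s * wy)                # ground hit in vehicle coordinates
```

A returned `None` corresponds to a ray above the horizon, which would instead be intersected with other CAD objects in a full simulation.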
  • the calculated raw view image data consider a distortion of the camera lens and/or the camera imager and/or a shape of the vehicle and/or a three dimensional model of the environment of the vehicle.
  • the simulated data are presented to a user using a CAD image of the vehicle including the object space of the camera lens of the at least one virtual camera. Additionally or alternatively the simulated data are provided to a post-processing step for further analysis or computation. In the case where the simulated data are provided to a post-processing step for further analysis or computation, the simulated data may or may not include object space data, since it may not be necessary to use the whole object space of the camera lens. Instead it may be sufficient to pass on simpler data from the simulation, such as a percentage of occlusion for the camera lens of the at least one virtual camera, and not the whole object space.
  • the projected grid representing at least one pixel of a field of view of said virtual camera is projected through the CAD of surrounding objects of the vehicle.
  • various tolerances and/or limitations of the at least one virtual camera and/or the at least one potential camera position are considered.
  • the various tolerances and/or limitations comprise position and/or orientation tolerances with respect to objects in the field of view of the camera due to mechanical mounting of the camera.
  • the various tolerances and/or limitations comprise effects of occlusions of the field of view on the camera imager and/or effects on pixel and/or regions of interest at imager level.
  • the simulated data are used to choose appropriate regions of interest in image space for autoexposure control and/or automatic gain control and/or autowhite balance for the at least one virtual camera based on projections of the corresponding object space.
  • overlapping areas are identified if at least two virtual cameras are used, wherein the at least two virtual cameras are placed at different potential camera positions. Additionally, a relationship between mechanical tolerances and a resolution in image space and expected location of the object space region of interest as seen by the respective cameras is determined, if a common region of interest of two or more cameras in object space is given. Further, minimum resolution requirements in the object space are translated into pixel densities in image space and corresponding optical requirements of the at least one virtual camera. Further, data reflecting reliability of picking up a location of an object in object space as seen by more than one camera with sufficient accuracy and/or precision is determined.
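Identifying the ground area covered by two virtual cameras at once can be sketched with a simple top-view model. The camera positions, field-of-view values, and function names below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def covered(cam_pos, cam_dir, half_fov_deg, pts):
    """Return which ground points (top view) lie inside a camera's
    horizontal field of view. Simplified 2-D model; names illustrative."""
    v = pts - cam_pos                                   # camera -> point
    cross_z = cam_dir[0] * v[:, 1] - cam_dir[1] * v[:, 0]
    ang = np.degrees(np.abs(np.arctan2(cross_z, v @ cam_dir)))
    return ang <= half_fov_deg

# Assumed layout: front camera looking along +x, right camera along -y,
# each with a 190 deg lens (95 deg half-angle), sampled on a ground grid.
xs, ys = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
pts = np.column_stack([xs.ravel(), ys.ravel()])
front = covered(np.array([2.0, 0.0]), np.array([1.0, 0.0]), 95.0, pts)
right = covered(np.array([0.0, -1.0]), np.array([0.0, -1.0]), 95.0, pts)
overlap = front & right        # candidate stitching zone for the two views
```

Points flagged in `overlap` are those where two cameras could deliver information, i.e. the regions where stitching seams and cross-camera consistency checks would be placed.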
  • a first virtual camera is placed at a vehicle front position
  • a second virtual camera is placed at a vehicle left side position
  • a third virtual camera is placed at a vehicle right side position
  • a fourth virtual camera is placed at a vehicle rear position, wherein images of the four cameras are merged to- gether to calculate a Bird-Eye-View of the vehicle environment.
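The merge of the four camera views into a Bird-Eye-View can be sketched as a weighted blend of per-camera top-view tiles. The function below is an illustrative assumption about the merge step only; the patent does not specify a particular blending scheme:

```python
import numpy as np

def merge_birdeye(views, weights):
    """Blend per-camera top-view images into one Bird-Eye-View.
    views: list of HxW arrays (NaN where that camera sees nothing);
    weights: matching HxW weight maps (e.g. higher near that camera).
    Illustrative sketch only."""
    num = np.zeros_like(views[0])
    den = np.zeros_like(views[0])
    for view, weight in zip(views, weights):
        m = ~np.isnan(view)                 # pixels this camera contributes
        num[m] += weight[m] * view[m]
        den[m] += weight[m]
    out = np.full_like(num, np.nan)         # NaN = covered by no camera
    ok = den > 0
    out[ok] = num[ok] / den[ok]             # weighted average in overlaps
    return out
```

In overlap areas the output is a weighted average of both contributing cameras; elsewhere each camera's rectified view passes through unchanged.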
  • the Bird-Eye-View of the vehicle shows information regarding close vehicle environment.
  • the inventive method for camera mounting in a vehicle can be implemented as an entirely software embodiment, or an embodiment containing both hardware and software elements.
  • the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • the core idea of the present invention is to implement a pixel-grid-projection to retrace imager pixel through a projected grid that is overlaid on the environment seen from the virtual camera.
  • the pixel-grid-projection of each camera can then be used to optimize the field of view overlapping areas.
  • the overlapping area information can be presented to the vehicle manufacturer to optimize camera positions and to demonstrate the car geometry impact.
  • Embodiments of the present invention also allow one to see and subsequently compensate for effects of deformation between the object space and the image space, due to effects taken together, in isolation or in various combinations, such as general mechanical tolerances of mounting to the vehicle, lens distortion, changes in topography, changes due to the "settling tolerances" as the relative mechanical positions of mounting elements between the cameras and the ground settle over time, e.g., body settling on chassis, wing mirrors containing cameras increasing tolerances over time, effect of diverse loading of the vehicle, different levels of tyre pressures and/or effect of towing a load on the configuration of the cameras.
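The influence of such mounting tolerances can be estimated in simulation by perturbing the nominal pose and observing where the viewed ground point drifts. The numbers below (mounting height, tilt, tolerance sigma) are assumed for illustration:

```python
import numpy as np

def axis_ground_distance(height_m, tilt_deg):
    """Forward distance at which the optical axis meets flat ground."""
    return height_m / np.tan(np.radians(tilt_deg))

# Assumed nominal mounting: camera 1.0 m high, tilted 30 deg down.
rng = np.random.default_rng(0)
nominal = axis_ground_distance(1.0, 30.0)

# Assumed mounting tolerance: tilt scatters with 0.5 deg standard deviation
# (e.g. body settling, mirror housing play); sample its effect.
tilts = 30.0 + rng.normal(0.0, 0.5, 10_000)
shift = axis_ground_distance(1.0, tilts) - nominal   # ground-point drift (m)
```

The spread of `shift` indicates how far a ground feature wanders in object space for a given pose tolerance, which in turn bounds how reliably overlapping cameras can agree on an object's location.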
  • Fig. 1 is a schematic flow diagram of a method for camera mounting in a vehicle.
  • Fig. 2 is a schematic top view of a vehicle, showing fields of view of four cameras placed at four different potential camera positions.
  • Fig. 3 is a schematic perspective view of a virtual camera.
  • Fig. 4 is a schematic Bird-Eye-View of a vehicle, shown on a display unit inside the vehicle.
  • a method for camera mounting in a vehicle defines at least one potential camera position 12, 14, 16, 18 in a three dimensional computer aided design (CAD) model of the vehicle 1 during step S100.
  • step S200 at least one virtual camera 50 is defined by assigning optical and/or imager sensor 54 parameters.
  • step S300 the defined virtual cameras 50 are placed to the at least one potential camera position 12, 14, 16, 18.
  • step S400 raw view image data of the at least one virtual camera 50 are calculated.
  • the calculated raw view image data consider a distortion of a camera lens 52 and/or the camera imager 54 and/or a shape of the vehicle 1 and/or a three dimensional model of the environment of the vehicle 1.
  • a virtual projection grid 40 is simulated during step S500, wherein at least one imager pixel 56 is retraced as projection pixel 58 through the projection grid 40 during simulation.
  • the virtual projected grid 40 is overlaid on a three dimensional environment of the vehicle 1 seen from the virtual camera 50 at a corresponding camera position 12, 14, 16, 18. This procedure is also called pixel-grid-projection.
  • the pixel-grid projection of each camera 50 can be used to optimize the field of view (FoV) overlapping areas 32, 34, 36, 38.
  • the overlapping area information can be presented to the vehicle manufacturer to optimize camera positions 12, 14, 16, 18 and to demonstrate the vehicle geometry impact.
  • a Bird-Eye-View 61 of the vehicle 1 shown in Fig. 4 can also be calculated.
  • the Bird-Eye-View 61 may be displayed by using a display unit 60 of a vision system in the vehicle 1.
  • the imager 54 of the camera 50 consists of a two dimensional array of pixels 56. Based on the number and size of the imager pixels 56, a virtual projected grid 40 can be simulated.
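Building the projected grid from the pixel array can be sketched by tracing a subsampled set of pixels onto the ground plane in one vectorized pass; as before, the equidistant fisheye model and the forward-tilted pose are assumptions for the example:

```python
import numpy as np

def project_pixel_grid(f_px, w, h, cam_height, tilt_deg, step=64):
    """Simulate the virtual projected grid: trace every step-th imager pixel
    (equidistant-fisheye model, camera along +x tilted down by tilt_deg)
    onto the ground plane, the 'thin blanket' overlay. Illustrative sketch."""
    us, vs = np.meshgrid(np.arange(0.0, w + 1, step),
                         np.arange(0.0, h + 1, step))
    x, y = us - w / 2.0, vs - h / 2.0
    theta = np.hypot(x, y) / f_px
    phi = np.arctan2(y, x)
    dx = np.sin(theta) * np.cos(phi)
    dy = np.sin(theta) * np.sin(phi)
    dz = np.cos(theta)
    t = np.radians(tilt_deg)
    wx = -np.sin(t) * dy + np.cos(t) * dz     # world-frame ray components
    wy = -dx
    wz = -np.cos(t) * dy - np.sin(t) * dz
    gx = np.full_like(wx, np.nan)             # NaN = node above the horizon
    gy = np.full_like(wy, np.nan)
    hit = wz < -1e-9                          # rays that reach the ground
    s = cam_height / -wz[hit]
    gx[hit] = s * wx[hit]
    gy[hit] = s * wy[hit]
    return gx, gy                             # grid node positions on ground
```

The resulting `(gx, gy)` node positions are what gets overlaid on the CAD environment; drawing lines between neighbouring nodes yields the projected grid seen from the virtual camera.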
  • the projected grid 40 will be overlaid on the environment like a thin blanket. If at least two cameras 50 are considered there will be overlapping areas 32, 34, 36, 38, where at least two cameras 50 could deliver information. In the shown embodiment four cameras 50 are used to give the driver a total view around the vehicle 1.
  • So camera images of the front, left, right and rear camera positions 12, 14, 16, 18 are merged together to have the Bird-Eye-View 61.
  • the Bird-Eye-View 61 is calculated from the corrected images and shows information regarding the close vehicle environment 62, 64, 66, 68.
  • a first object 72 is identified in the front environment 62 of the vehicle 1, represented by an optical representation 70
  • a second object 74 is identified in the right environment 64 of the vehicle 1
  • a third object 78 is identified in the rear environment 66 of the vehicle 1
  • an empty space 76 is identified in the left environment 68 of the vehicle 1.
  • a special stitch has to be made when the pictures are merged together.
  • Embodiments of the present invention have the advantage that potential customers and development teams can see the implications of a camera design in terms of tolerances, autoexposure control, auto-white balance, lens properties (e.g., MTF, large angle effects), overlap of more than one camera, pixel size representation on the ground and its implications for pixel mosaicing, topographic changes and their implication for the introduction of three dimensional effects, representations of three dimensional object displacements in terms of two dimensional aliases, and so forth. The effects of tolerances on vehicle and/or structural vignetting can also be assessed. Furthermore, such a tolerance analysis can be used by a vision system itself in determining the relative reliability of object detection by multi camera and/or stereo camera applications. Embodiments of the present invention also offer a weighting concept for image merging which uses such data based on comparative brightness of respective scenes.
  • lens properties e.g., MTF, large angle effects
  • a common problem during the design process of a vision system is determining, at an early stage in the design, what impact the position(s), orientation(s) and occlusion(s) of the camera(s) 50 will have on the performance of the vision system, so that this may inform the system design. For this reason it is very useful to have a simulation whereby a field of view projection of a camera 50 is projected through the CAD of surrounding objects, such as the ground near the camera 50, or object occlusions of the field of view 22, 24, 26, 28 of the camera 50.
  • the first step is to have a method that can perform a simulated ray projection from a region of interest on an imager 54, through the lens 52 and into the object space 58 of the lens 52, i.e. the outside world, with the ray intersecting objects in the world, and to be able to present this data to a user by either displaying it in a CAD image or providing data that shows that the ray intersects a specific object.
  • these include effects of flare on the cameras 50 at high angles and/or effects on pixels and/or regions of interest at imager level, i.e., determining and/or displaying, by projection, the size of a region in object space corresponding to a pixel or other region of interest at the imager array in image space. This allows a limit of the pixel size to be seen and also ensures that the corresponding camera 50 will not be under-designed by using insufficiently high resolution imagers, lenses or other optical elements for the application, or over-designed by using too high resolution imagers or lenses.
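The ground footprint of a single pixel, the quantity that decides whether an imager is under- or over-designed, can be approximated as follows; the geometry (flat ground, equidistant fisheye where one pixel subtends about 1/f radians) is an assumption for the sketch:

```python
import numpy as np

def ground_pixel_size(f_px, cam_height, ground_dist):
    """Approximate ground footprint of one imager pixel.
    Equidistant fisheye: each pixel subtends ~1/f_px radians; project
    that angular size onto flat ground. Illustrative sketch only."""
    slant = np.hypot(cam_height, ground_dist)   # camera-to-point range
    pixel_angle = 1.0 / f_px                    # radians per pixel
    cross = slant * pixel_angle                 # size across the view
    # along the view the footprint stretches with the grazing angle:
    # sin(grazing) = cam_height / slant, so divide by it
    along = cross * slant / cam_height
    return cross, along
```

Comparing `along` against a minimum ground-resolution requirement at the farthest distance of interest translates object-space requirements into a minimum focal length / pixel count, as described above.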
  • These include also effects of pixel mosaicing at high angles. For several video formats, pixel mosaics are used, whereby combinations of adjacent pixels are used to generate an output pixel value. This is typical in color sensors which use Bayer mosaicing.
  • AEC and/or AGC: autoexposure control and/or automatic gain control
  • AWB: auto-white balance
  • When a camera 50 looks at the world, it has to determine the optimal exposure and gain to apply to the pixels 56 in order to provide a reasonable picture to the vehicle system. If it decides on too short an exposure time or too low a gain, the picture will look too dark and can lose information by clipping ("black-out"), and if it decides on too long an exposure time or too large a gain, the picture will be too bright and can lose information by clipping ("white-out").
  • a camera 50 uses automatic exposure and/or gain control to determine the best compromise based on pixels 56 from a region of interest on the pixel array of the camera imager 54.
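A toy version of this region-of-interest driven exposure decision, with naive proportional control and the target/clip levels chosen arbitrarily for illustration, could look like:

```python
import numpy as np

def auto_exposure(image, roi, target=0.45, lo=0.02, hi=0.98):
    """Pick an exposure/gain scale from a region of interest so that the
    ROI mean reaches a target level, and flag black-out / white-out risk.
    roi = (row0, row1, col0, col1); pixel values in [0, 1].
    Naive proportional control; names and levels are illustrative."""
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1]
    gain = target / max(patch.mean(), 1e-6)   # drive ROI mean toward target
    adjusted = np.clip(image * gain, 0.0, 1.0)
    blackout = (adjusted <= lo).mean()        # fraction of crushed pixels
    whiteout = (adjusted >= hi).mean()        # fraction of clipped pixels
    return gain, blackout, whiteout
```

Choosing the ROI from the pixel-grid projection (i.e. excluding pixels whose rays hit the vehicle body) prevents a dark bumper from skewing `gain` upward and washing out the road.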
  • this region of interest is usually the complete active video pixel array of the camera imager 54.
  • undesirable occlusions or objects in the field of view 22, 24, 26, 28 of the corresponding camera 50 can skew the exposure and/or gain values.
  • much of the field of view 22, 24, 26, 28 can be taken up with a black vehicle body, which may be seen by the camera 50 but not by the end user or not used by the video processing system.
  • the projection method may also be used to select a zone for color reference, an example of which is "white patch" white balance algorithms.
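A minimal "white patch" balance over such a projection-selected zone can be sketched as follows (the zone is assumed to image a neutral white surface; all names are illustrative):

```python
import numpy as np

def white_patch_gains(image, zone):
    """White-patch white balance: per-channel gains that make the
    reference zone (assumed to image a white surface) neutral.
    zone = (row0, row1, col0, col1); image is HxWx3, values in [0, 1].
    Illustrative sketch only."""
    r0, r1, c0, c1 = zone
    ref = image[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)  # mean R, G, B
    return ref.max() / np.maximum(ref, 1e-6)   # boost the weaker channels

# applying the correction: balanced = np.clip(image * gains, 0.0, 1.0)
```

The projection method supplies the `zone`: a region of the imager whose rays are known from the simulation to land on a suitable reference surface rather than on the vehicle body.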
  • Embodiments of the present invention allow one to see and subsequently compensate for effects of deformation between the object space and the image space, due to the following effects taken together, in isolation or in various combinations: general mechanical tolerances of mounting to the vehicle, lens distortion, changes in topography, changes due to the "settling tolerances" as the relative mechanical positions of mounting elements between the cameras and the ground settle over time, e.g., body settling on chassis, wing mirrors containing cameras increasing tolerances over time, effect of diverse loading of the vehicle, different levels of tyre pressures and/or effect of towing a load on the configuration of the cameras 50.
  • the inventive method for camera mounting in a vehicle can be implemented as an entirely software embodiment, or an embodiment containing both hardware and software elements.
  • the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • the present invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
  • a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
  • Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk.
  • Current examples of optical disks include compact disk read-only memory (CD-ROM), compact disk read/write (CD-R/W), and DVD.
  • a data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus.
  • the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
  • Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.

Abstract

The invention relates to an improved method for camera mounting in a vehicle, the method comprising the following steps: defining at least one potential camera position (12, 14, 16, 18) in a three-dimensional computer aided design (CAD) model of said vehicle (1); defining at least one virtual camera by assigning optical and/or imager sensor parameters; placing said defined virtual camera at said potential camera position (12, 14, 16, 18); calculating raw view image data of said at least one virtual camera; simulating a virtual projection grid (40) based on said raw view image data, at least one imager pixel being retraced as a projection pixel on said projection grid (40) during the simulation; and overlaying said virtual projected grid (40) on a three-dimensional environment of said vehicle (1) seen from said virtual camera at a corresponding camera position (12, 14, 16, 18).
PCT/EP2010/001713 2010-03-18 2010-03-18 Procédé de montage de caméra dans un véhicule WO2011113447A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP10719240A EP2548178A1 (fr) 2010-03-18 2010-03-18 Procédé de montage de caméra dans un véhicule
PCT/EP2010/001713 WO2011113447A1 (fr) 2010-03-18 2010-03-18 Procédé de montage de caméra dans un véhicule

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/001713 WO2011113447A1 (fr) 2010-03-18 2010-03-18 Procédé de montage de caméra dans un véhicule

Publications (1)

Publication Number Publication Date
WO2011113447A1 true WO2011113447A1 (fr) 2011-09-22

Family

ID=44148482

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/001713 WO2011113447A1 (fr) 2010-03-18 2010-03-18 Procédé de montage de caméra dans un véhicule

Country Status (2)

Country Link
EP (1) EP2548178A1 (fr)
WO (1) WO2011113447A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1302365A2 (fr) * 2001-10-15 2003-04-16 Matsushita Electric Industrial Co., Ltd. Appareil de surveillance de l'environnement d'un véhicule et procédé d'ajustage
JP2007241398A (ja) 2006-03-06 2007-09-20 Hitachi Constr Mach Co Ltd 車両の視界解析装置
JP2009064373A (ja) 2007-09-10 2009-03-26 Toyota Motor Corp モニタ画像シミュレーション装置、モニタ画像シミュレーション方法及びモニタ画像シミュレーションプログラム
JP2009064372A (ja) 2007-09-10 2009-03-26 Toyota Motor Corp モニタ画像シミュレーション装置、モニタ画像シミュレーション方法及びモニタ画像シミュレーションプログラム

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
EHLGEN T ET AL: "Eliminating Blind Spots for Assisted Driving", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, IEEE, PISCATAWAY, NJ, USA, vol. 9, no. 4, 1 December 2008 (2008-12-01), pages 657 - 665, XP011238136, ISSN: 1524-9050, DOI: DOI:10.1109/TITS.2008.2006815 *
ELEANOR G RIEFFEL ET AL: "Geometric Tools for Multicamera Surveillance Systems", DISTRIBUTED SMART CAMERAS, 2007. ICDSC '07. FIRST ACM/IEEE INTERN ATIONAL CONFERENCE ON, IEEE, PI, 1 September 2007 (2007-09-01), pages 132 - 139, XP031151274, ISBN: 978-1-4244-1353-9 *
HUGHES C ET AL: "Wide-angle camera technology for automotive applications: a review", IET INTELLIGENT TRANSPORT SYSTEMS,, vol. 3, no. 1, 9 March 2009 (2009-03-09), pages 19 - 31, XP006032372, ISSN: 1751-9578, DOI: DOI:10.1049/IET-ITS:20080017 *
JUNQING CHEN ET AL: "Digital Camera Imaging System Simulation", IEEE TRANSACTIONS ON ELECTRON DEVICES, IEEE SERVICE CENTER, PISACATAWAY, NJ, US, vol. 56, no. 11, 1 November 2009 (2009-11-01), pages 2496 - 2505, XP011277947, ISSN: 0018-9383, DOI: DOI:10.1109/TED.2009.2030995 *
RACZKOWSKY J ET AL: "SIMULATION OF CAMERAS IN ROBOT APPLICATIONS", IEEE COMPUTER GRAPHICS AND APPLICATIONS, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 9, no. 1, 1 January 1989 (1989-01-01), pages 16 - 25, XP000115878, ISSN: 0272-1716, DOI: DOI:10.1109/38.20330 *
STATE ANDREI; WELCH GREG; ILIE ADRIAN: "An interactive camera placement and visibility simulator for image-based VR applications", PROCEEDINGS OF SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING - STEREOSCOPIC DISPLAYS AND VIRTUAL REALITY SYSTEMS XIII - PROCEEDINGS OF SPIE-IS AND T ELECTRONIC IMAGING 2006 SPIE US, vol. 6055, 16 January 2006 (2006-01-16) - 31 July 2006 (2006-07-31), Stereoscopic Displays and Virtual Reality Systems XIII 20060116 to 20060119 San Jose, CA, pages 1 - 12, XP055003991, ISSN: 0277-786X, ISBN: 0819460958, DOI: 10.1117/12.660341 *
TARABANIS K A ET AL: "A Survey of Sensor Planning in Computer Vision", IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, IEEE INC, NEW YORK, US, vol. 11, no. 1, 1 February 1995 (1995-02-01), pages 86 - 104, XP007913917, ISSN: 1042-296X *
YI-YUAN CHEN ET AL: "An embedded system for vehicle surrounding monitoring", POWER ELECTRONICS AND INTELLIGENT TRANSPORTATION SYSTEM (PEITS), 2009 2ND INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 19 December 2009 (2009-12-19), pages 92 - 95, XP031624185, ISBN: 978-1-4244-4544-8, DOI: DOI:10.1109/PEITS.2009.5406797 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014114629A1 (de) * 2014-10-09 2016-04-14 Connaught Electronics Ltd. Prüfen eines Kamerasystems eines Kraftfahrzeugs
US10096158B2 (en) 2016-03-24 2018-10-09 Ford Global Technologies, Llc Method and system for virtual sensor data generation with depth ground truth annotation
US10510187B2 (en) 2016-03-24 2019-12-17 Ford Global Technologies, Llc Method and system for virtual sensor data generation with depth ground truth annotation
US10832478B2 (en) 2016-03-24 2020-11-10 Ford Global Technologies, Llc Method and system for virtual sensor data generation with depth ground truth annotation
CN110399622A (zh) * 2018-04-24 2019-11-01 上海欧菲智能车联科技有限公司 车载摄像头的布置方法及车载摄像头的布置系统

Also Published As

Publication number Publication date
EP2548178A1 (fr) 2013-01-23

Similar Documents

Publication Publication Date Title
JP7448921B2 (ja) リアビュー可視化のためのリアスティッチされたビューパノラマ
CN111223038B (zh) 一种车载环视图像的自动拼接方法及显示装置
JP6764533B2 (ja) キャリブレーション装置、キャリブレーション用チャート、チャートパターン生成装置、およびキャリブレーション方法
JP2009129001A (ja) 運転支援システム、車両、立体物領域推定方法
US11303807B2 (en) Using real time ray tracing for lens remapping
KR20170135952A (ko) 차량의 주변 영역을 표시하기 위한 방법
US20180184078A1 (en) Calibration of a Surround View Camera System
US10897600B1 (en) Sensor fusion based perceptually enhanced surround view
CN111986270A (zh) 一种全景泊车标定方法、装置及存储介质
US11715218B2 (en) Information processing apparatus and information processing method
WO2011113447A1 (fr) Procédé de montage de caméra dans un véhicule
EP3098777B1 (fr) Appareil, procédé et programme de dessin
JP7074546B2 (ja) 画像処理装置および方法
WO2021004642A1 (fr) Procédé d'étalonnage de caméra, programme informatique, support d'enregistrement lisible par ordinateur et système d'étalonnage de caméra
JP5413502B2 (ja) ハレーションシミュレーション方法、装置、及びプログラム
CN113065999B (zh) 车载全景图生成方法、装置、图像处理设备及存储介质
JP2009077022A (ja) 運転支援システム及び車両
JP2013200840A (ja) 映像処理装置、映像処理方法、映像処理プログラム、及び映像表示装置
CN115004683A (zh) 成像装置、成像方法和程序
US11858420B2 (en) Below vehicle rendering for surround view systems
CN112967173A (zh) 一种图像生成方法、装置及系统
CN116777755A (zh) 一种畸变矫正方法、装置、车载设备及车辆
CN116485907A (zh) 一种环视相机外参确定方法、装置及环视鸟瞰图采集系统
CN115409973A (zh) 增强现实抬头显示成像方法、装置、设备以及存储介质
CN114638747A (zh) 一种全景倒车图像处理方法、装置及设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10719240

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2010719240

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE