WO2015149970A1 - Method and device for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images - Google Patents

Method and device for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images

Info

Publication number
WO2015149970A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
section
camera
selecting
projection surface
Prior art date
Application number
PCT/EP2015/052156
Other languages
German (de)
English (en)
Inventor
Wolfgang Niem
Steffen Abraham
Patrick Klie
Hartmut Loos
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH
Publication of WO2015149970A1 (fr)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T3/08
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Definitions

  • The present invention relates to a method for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images, to a corresponding device and to a corresponding computer program.
  • Images from several cameras can be combined to form an overall image. For this, a projection surface is required.
  • JP 2012-73836 A describes an image display system.
  • DISCLOSURE OF THE INVENTION: Against this background, the approach presented here introduces a method for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images, furthermore a device which uses this method, and finally a corresponding computer program according to the main claims. Advantageous embodiments emerge from the respective subclaims and the following description.
  • A panorama can be composed of a plurality of images.
  • For this, the images are projected onto a virtual three-dimensional projection surface.
  • The shape of the projection surface significantly influences how realistic the calculated panorama appears.
  • The panorama can serve as the basis for a variety of computable visualizations.
  • For example, objects within the panorama may be displayed with rendering errors.
  • The objects can appear multiple times or be grotesquely distorted. It is therefore desirable to shape the projection surface taking into account the objects to be displayed.
  • The stated rendering errors arise essentially in areas of the projection surface where the underlying images projected onto it overlap one another. In these areas, the objects are imaged in at least two images, but from different perspectives.
  • The approach presented here describes a method for comparing partial areas of at least two overlapping images with one another in order to obtain an optimized shape of the projection surface.
  • A method for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images comprises the following steps:
  • Selecting a first section of a first camera image using a first selection rule, selecting at least one further first section of the first camera image using at least a second selection rule, selecting a second section of a second camera image using the first selection rule, and selecting at least one further second section of the second camera image using the second selection rule, wherein the first selection rule represents a first predetermined shape of the projection surface and the second selection rule represents a second predetermined shape of the projection surface; comparing an image content of the first section with an image content of the second section using a processing rule to obtain a first evaluation criterion of the first shape, and comparing an image content of the further first section with an image content of the further second section using the processing rule to obtain a second evaluation criterion of the second shape; and choosing the shape for the projection surface whose evaluation criterion represents the smaller deviation of the image contents from one another.
  • A projection surface can be understood as a virtual surface.
  • A projection can be understood as a computational transfer of an essentially two-dimensional camera image onto the projection surface.
  • A camera image may be an image file from a camera.
  • The camera image may have a plurality of pixels.
  • Each pixel may be characterized by a coordinate value relative to a reference point of the camera image.
  • Each pixel can further be characterized by at least one piece of intensity or color information.
  • Adjacent camera images may have an overlap in at least one edge region. That is, objects located in the edge region are imaged in both camera images.
  • Selecting can be understood to mean masking.
  • For example, a mask predefined by the selection rule is placed over the camera image in order to define a section of the camera image. The image content, i.e. the intensity information of the pixels of the camera image, is ignored during the selection or masking.
  • The selection rule defines the boundaries of the section. The boundaries depend on the shape of the projection surface.
  • During the comparing, the image contents of at least two sections are compared with one another in order to quantify a correspondence of the two image contents.
  • A section of the first camera image and a section of the second camera image form a pair of sections. The sections of each pair are compared with one another.
  • At least one additional first section from the first camera image and an additional second section from the second camera image can be selected using a further selection rule that represents a further predetermined shape of the projection surface.
  • An image content of the additional first section may be compared with an image content of the additional second section using the processing rule in order to obtain a further evaluation criterion of the further shape.
  • Using many selection rules, many shapes of projection surfaces can be compared. The larger the number of shapes, the greater the likelihood of a good match with the objects in the camera images.
  • At least a third section of at least one third camera image can be selected using the first selection rule. Furthermore, at least one further third section can be selected from the third camera image using the second selection rule.
  • An image content of the third section may be compared with the image content of the first section and/or the image content of the second section using the processing rule to obtain an additional first evaluation criterion of the first shape.
  • An image content of the further third section can be compared, using the processing rule, with the image content of the further first section and/or the further second section to obtain an additional second evaluation criterion of the second shape.
  • A feature content of the sections may be compared to obtain the evaluation criteria.
  • A feature content may represent the objects imaged in the sections.
  • The processing rule may describe a sum of squared differences (SSD) between intensity values of corresponding image pixels in the first section and the second section.
  • The image pixels may have a common coordinate value on the shape to be evaluated.
  • An image pixel can also be called a pixel.
  • A direct comparison of individual image pixels or groups of image pixels with one another makes it possible to dispense with a compute-intensive object search.
  • Alternatively or additionally, the processing rule may describe a sum of absolute differences (SAD) between intensity values of corresponding image pixels in the first section and the second section.
  • The image pixels may have a common coordinate value on the shape to be evaluated.
  • A direct comparison of individual image pixels or groups of image pixels with one another makes it possible to dispense with a compute-intensive object search. As a result, the method presented here can be carried out particularly quickly.
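  • As an illustration only, here is a minimal sketch of such SSD and SAD processing rules in Python/NumPy, assuming the two sections have already been resampled onto a common pixel grid; this is not the patent's reference implementation:

```python
import numpy as np

def ssd(section_a: np.ndarray, section_b: np.ndarray) -> float:
    """Sum of squared differences between intensity values of
    corresponding image pixels; lower means better agreement."""
    d = section_a.astype(np.float64) - section_b.astype(np.float64)
    return float(np.sum(d * d))

def sad(section_a: np.ndarray, section_b: np.ndarray) -> float:
    """Sum of absolute differences between intensity values of
    corresponding image pixels; cheaper than SSD and less sensitive
    to single outlier pixels."""
    d = section_a.astype(np.float64) - section_b.astype(np.float64)
    return float(np.sum(np.abs(d)))
```

  • The smaller the value, the smaller the deviation of the image contents and the better the shape hypothesis fits.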
  • The coordinate values of the image pixels to be compared can be selected using a shape-dependent interpolation rule.
  • The interpolation rule can be stored in the selection rule.
  • The interpolation rule can mathematically map the shapes of the matching sections onto one another. Through this mathematical mapping, each point of one section can be assigned to a point of the other section.
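  • As a minimal sketch of such a shape-dependent lookup, assuming the subpixel coordinates (x, y) have already been produced by the interpolation rule of the respective shape (the function name is a hypothetical illustration, not taken from the document):

```python
import numpy as np

def bilinear_sample(image: np.ndarray, x: float, y: float) -> float:
    """Bilinearly interpolate the intensity of a grayscale image at a
    subpixel coordinate (x, y); assumes 0 <= x < width-1 and
    0 <= y < height-1 so that all four neighbouring pixels exist."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    return float((1 - fx) * (1 - fy) * image[y0, x0]
                 + fx * (1 - fy) * image[y0, x1]
                 + (1 - fx) * fy * image[y1, x0]
                 + fx * fy * image[y1, x1])
```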
  • The method may include a step of projecting the camera images using the chosen shape in order to combine the image contents of the camera images into an overall image.
  • At least one parameter of the chosen shape may be provided to define the projection surface during the projecting.
  • A shape of the projection surface can be described mathematically and correctly by a few parameters. If only the best possible shape of the projection surface is to be determined by the method presented here, the shape can easily be transferred to another device using the at least one parameter, with a small data volume.
  • Furthermore, a device for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images is presented, the device comprising the following features: a selection device for selecting a first section of a first camera image using a first selection rule, for selecting at least one further first section of the first camera image using at least a second selection rule, for selecting a second section of a second camera image using the first selection rule, and for selecting at least one further second section from the second camera image using the second selection rule, wherein the first selection rule represents a first predetermined shape of the projection surface and the second selection rule represents a second predetermined shape of the projection surface; a comparison device for comparing an image content of the first section with an image content of the second section using a processing rule to obtain a first evaluation criterion of the first shape, and for comparing an image content of the further first section with an image content of the further second section using the processing rule to obtain a second evaluation criterion of the second shape; and a choosing device for choosing the shape for the projection surface whose evaluation criterion represents the smaller deviation of the image contents from one another.
  • In the present context, a device can be understood to mean an electrical device which processes sensor signals and outputs control and/or data signals as a function thereof.
  • The device may have an interface, which may be implemented in hardware and/or software.
  • In a hardware implementation, the interfaces can, for example, be part of a so-called system ASIC, which contains a wide variety of functions of the device.
  • However, it is also possible that the interfaces are separate integrated circuits or at least partially consist of discrete components.
  • In a software implementation, the interfaces may be software modules that are present, for example, on a microcontroller alongside other software modules.
  • Also advantageous is a computer program product or computer program with program code which can be stored on a machine-readable carrier or storage medium, such as a semiconductor memory, a hard disk memory or an optical memory, and which is used for carrying out, implementing and/or controlling the steps of the method according to one of the embodiments described above.
  • Fig. 1 shows a representation of a vehicle with four cameras.
  • Fig. 2 is an illustration of a projection surface around a vehicle according to an embodiment of the present invention.
  • Fig. 3 is a cross-sectional view of a projection surface for projecting a plurality of adjacent camera images according to an embodiment of the present invention.
  • Fig. 4 is a block diagram of a device for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images according to an embodiment of the present invention.
  • Fig. 5 is an illustration of an overlapping region of two camera images on a projection surface according to an exemplary embodiment of the present invention.
  • Fig. 6 is an illustration of an overlapping region of two camera images on a projection surface according to a further exemplary embodiment of the present invention.
  • Fig. 7 is a representation of a projection of partial areas of a 3-D shape template into camera images from different positions.
  • Fig. 8 is a flowchart of a method for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images according to an embodiment of the present invention.
  • Fig. 9 is an illustration of a flow of a method for generating a plurality of selection rules according to an embodiment of the present invention.
  • Fig. 10 is an illustration of steps of a method for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images according to an embodiment of the present invention.
  • The vehicle 100 here is a passenger car.
  • The vehicle may also be, for example, a lorry or a construction machine.
  • The four cameras 102, 104, 106, 108 are arranged on the four sides of the vehicle 100.
  • The first camera 102 is disposed at a front of the vehicle 100.
  • The first camera 102 is aligned in a direction of travel of the vehicle 100 and configured to provide a first camera image 110 of the surroundings of the vehicle 100 in front of the vehicle 100.
  • The second camera 104 is disposed on a right side mirror of the vehicle 100.
  • The second camera 104 is oriented transversely to the vehicle 100.
  • The second camera 104 is configured to provide a second camera image 112 of the surroundings of the vehicle 100 to the right of the vehicle 100.
  • The third camera 106 is disposed at a rear of the vehicle 100.
  • The third camera 106 is aligned opposite to the direction of travel of the vehicle 100.
  • The third camera 106 is configured to provide a third camera image 114 of the surroundings of the vehicle 100 behind the vehicle 100.
  • The fourth camera 108 is disposed on a left side mirror of the vehicle 100.
  • The fourth camera 108 is likewise oriented transversely to the vehicle 100.
  • The fourth camera 108 is configured to provide a fourth camera image 116 of the surroundings of the vehicle 100 to the left of the vehicle 100.
  • The four camera images 110, 112, 114, 116 cover the surroundings of the vehicle 100 approximately completely.
  • The camera images 110, 112, 114, 116 are recorded with a small focal length and therefore have a strong distortion.
  • The camera images 110, 112, 114, 116 are distorted in a barrel shape.
  • The camera images 110, 112, 114, 116 have a large angle of view.
  • The camera images 110, 112, 114, 116 have an angle of view of approximately 180°.
  • The camera images 110, 112, 114, 116 have overlapping edge regions.
  • Fig. 2 shows an illustration of a projection surface 200 around a vehicle 100 according to an embodiment of the present invention.
  • The projection surface 200 is virtual and does not correspond to an actual surface of an object.
  • The projection surface 200 has a bowl-like, rotationally symmetrical 3-D shape.
  • Onto the projection surface 200, the camera images of the side cameras, front cameras and rear cameras of a 3-D surround-view system, as described for example in Fig. 1, are projected.
  • The camera images are distorted because the projection surface 200 is spatially curved.
  • The shape of the projection surface 200 is chosen according to the approach presented here.
  • Fig. 3 shows a cross-sectional view of a projection surface 200 for projecting a plurality of adjacent camera images according to an embodiment of the present invention.
  • The projection surface 200 essentially corresponds to the projection surface in Fig. 2.
  • The projection surface 200 has a 3-D shape.
  • The shape consists of a circular or elliptical disk 300 of variable diameter D and a parabolic wall 302 of variable steepness a.
  • The projection surface 200 may be referred to as a 3-D shape template 200. If the projection surface 200 is round, it has a rotation axis 304.
  • The parameters D and a are typically preconfigured manually.
  • The approach presented here sets the two parameters automatically, depending on the obstacles surrounding the vehicle.
  • The approach presented here thus describes an automatic parameterization of a generic 3-D shape template 200 for 3-D environment models, as sketched below.
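  • The document only states that the template consists of a flat disk of diameter D and a parabolic wall of steepness a, so the concrete profile function in the following sketch (flat up to the rim, then a quadratic rise) is an illustrative assumption rather than the patent's formula:

```python
import numpy as np

def bowl_height(r: np.ndarray, diameter: float, steepness: float) -> np.ndarray:
    """Height profile of a rotationally symmetric bowl template:
    zero on the flat disk of the given diameter, then a parabola
    whose curvature is set by the steepness parameter (assumed form)."""
    beyond_rim = np.maximum(r - diameter / 2.0, 0.0)  # radial distance past the disk rim
    return steepness * beyond_rim ** 2

# Two candidate shape hypotheses sampled along a radial cross-section:
radii = np.linspace(0.0, 10.0, 101)
wide_flat = bowl_height(radii, diameter=12.0, steepness=0.5)
narrow_steep = bowl_height(radii, diameter=6.0, steepness=2.0)
```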
  • Fig. 4 shows a block diagram of a device 400 for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images according to an embodiment of the present invention.
  • The device 400 has a selection device 402, a comparison device 404 and a choosing device 406.
  • The selection device 402 is connected to a storage device 408 and to at least two cameras 102, 104 of a vehicle 100.
  • The detection areas 410, 412 of the first camera 102 and the second camera 104 overlap in at least one edge region.
  • The cameras 102, 104 are designed to image their detection areas 410, 412 in a respective camera image or moving image.
  • Alternatively, the selection device 402 may be connected to a sensor system with at least two sensors.
  • The sensor system is designed to detect the surroundings of the vehicle 100.
  • The sensors are designed to each provide a sensor image of a part of the surroundings.
  • The selection device 402 is configured to select a first section from the first camera image of the first camera 102 using a first selection rule.
  • The first selection rule is stored in the storage device 408. Furthermore, the selection device 402 is designed to select at least one further first section from the first camera image using at least a second selection rule.
  • The second selection rule is also stored in the storage device 408.
  • The selection device 402 is further configured to select a second section from the second camera image of the second camera 104 using the first selection rule. Furthermore, the selection device 402 is designed to select at least one further second section from the second camera image using the second selection rule.
  • The first selection rule represents a first predetermined, for example precalculated, shape of the projection surface, as shown for example in Figs. 2 and 3.
  • The second selection rule represents a second predetermined, for example precalculated, shape of the projection surface.
  • The comparison device 404 is configured to compare an image content of the first section with an image content of the second section using a processing rule to obtain a first evaluation criterion of the first shape. Furthermore, the comparison device 404 is configured to compare an image content of the further first section with an image content of the further second section using the processing rule in order to obtain a second evaluation criterion of the second shape.
  • The processing rule may include a sum of squared and/or absolute differences between intensity values of corresponding image pixels in the first section and the second section.
  • The image pixels can be assigned a common coordinate value on the shape to be evaluated, so that they would lie on top of one another when projected onto the shape. The coordinate values of the image pixels to be compared in the underlying camera images can be described in an interpolation rule that depends on the respective shape.
  • The choosing device 406 is designed to choose and provide the shape 414 for the projection surface whose evaluation criterion represents the smaller deviation of the image contents from one another. For this purpose, at least one parameter of the chosen shape 414 may be provided to define the projection surface during the projecting.
  • Fig. 5 shows a representation of an overlapping region 500 of two camera images 110, 112 on a projection surface 200 according to an embodiment of the present invention.
  • The projection surface 200 corresponds to an exemplary embodiment of a projection surface as in Fig. 3.
  • Here, the diameter of the circular disk is relatively large and the wall is relatively steep.
  • Only a quarter of the projection surface 200 is shown. Therefore, only one half of each of the camera images 110, 112 is shown.
  • The camera images 110, 112 have a large angle of view.
  • Here, the angle of view is between 160° and 180°. Objects in the overlapping edge regions are therefore imaged in both camera images 110, 112 and can also be displayed from different sides.
  • This results in the overlapping region 500.
  • In the overlapping region 500, objects can be displayed in duplicate. Since the angle of view is fixed here, the size of the overlapping region 500 depends on the geometry of the projection surface 200. The larger the diameter of the projection surface 200, the larger the overlapping region 500.
  • Fig. 6 shows a representation of an overlapping region 500 of two camera images 110, 112 on a projection surface 200 according to another embodiment of the present invention.
  • The representation essentially corresponds to the illustration in Fig. 5.
  • In contrast to Fig. 5, the projection surface 200 here has a different shape, and accordingly the overlapping region 500 has a different size.
  • Fig. 7 shows a representation of a projection of partial areas 700 of the 3-D shape template 200 into camera images 110, 112 from different positions.
  • The 3-D shape template 200 corresponds to an exemplary embodiment of the projection surface 200 in Fig. 3.
  • Further partial areas 702 are represented by further projection surfaces.
  • The further projection surfaces each have different parameters for determining their shape than the projection surface 200 shown.
  • The further partial areas 702 are likewise projected into the camera images 110, 112.
  • That projection surface 200 is chosen whose projected partial areas 700, 702 in the camera images 110, 112 lead to a maximum match of the image contents or feature contents of the sections within the edges of the partial areas 700, 702.
  • In other words, the projection surface 200 with the shape in which the objects represented within the projected partial areas 700, 702 can best be matched or superimposed is chosen.
  • Differently parameterized, for example predetermined, 3-D shape templates 200 are used to determine a 3-D environment model that fits as well as possible, for example for surround-view applications.
  • In this way, a quality measure is determined very efficiently for each 3-D shape hypothesis 200.
  • The partial areas 700, 702 of the 3-D shape template 200 are projected into the camera images 110, 112.
  • For example, a partial area A 700 of the 3-D model 200 is projected into the image 110 of a first camera.
  • Likewise, a projection of the partial area A 700 of the 3-D model 200 into the image 112 of a second camera takes place.
  • The adaptation of the 3-D geometry 200 takes place in stages.
  • The proposed method avoids the need for complex stereo methods or SfM (structure-from-motion) methods for determining the 3-D shape 200.
  • A major advantage here is also the high proportion of precalculable method steps.
  • Fig. 8 shows a flowchart of a method 800 for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images according to an embodiment of the present invention.
  • The method 800 includes a step 802 of selecting, a step 804 of comparing and a step 806 of choosing.
  • In step 802 of selecting, a first section is selected from a first camera image using a first selection rule. Furthermore, at least one further first section is selected from the first camera image using at least a second selection rule.
  • Likewise, a second section is selected from a second camera image using the first selection rule. Furthermore, at least one further second section is selected from the second camera image using the second selection rule.
  • The first selection rule represents a first precalculated shape of the projection surface.
  • The second selection rule represents a second precalculated shape of the projection surface.
  • In step 804 of comparing, an image content of the first section is compared with an image content of the second section using a processing rule to obtain a first evaluation criterion of the first shape.
  • Furthermore, an image content of the further first section is compared with an image content of the further second section using the processing rule to obtain a second evaluation criterion of the second shape.
  • In step 806 of choosing, the shape for the projection surface is chosen whose evaluation criterion represents the smaller deviation of the image contents from one another.
  • Further precalculated shapes of projection surfaces can be represented by further selection rules. Then, in step 802 of selecting, additional first sections from the first camera image and additional second sections from the second camera image can be selected using the further selection rules. The image contents of the additional sections may be compared in step 804 of comparing using the processing rule to obtain further evaluation criteria of the further shapes.
  • Likewise, a third section may be selected from a third camera image using the first selection rule, and a further third section using the second selection rule.
  • An image content of the third section may be compared with the image content of the first section and/or the image content of the second section using the processing rule to obtain an additional first evaluation criterion of the first shape.
  • An image content of the further third section may further be compared, using the processing rule, with the image content of the further first section and/or the further second section to obtain an additional second evaluation criterion of the second shape. A sketch of the overall loop follows below.
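  • As a hedged end-to-end sketch of this select-compare-choose loop (the function names, the SAD metric and the reduction of each selection rule to precomputed pixel-correspondence tables are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def evaluate_shape(img_a, img_b, coords_a, coords_b):
    """Compare the overlap sections of two camera images for one shape.
    coords_a and coords_b are precomputed (N, 2) integer pixel
    coordinates of corresponding points, i.e. a selection rule plus
    interpolation rule reduced to a lookup table. Returns a SAD-based
    evaluation criterion; lower means a smaller deviation."""
    section_a = img_a[coords_a[:, 1], coords_a[:, 0]].astype(np.float64)
    section_b = img_b[coords_b[:, 1], coords_b[:, 0]].astype(np.float64)
    return float(np.sum(np.abs(section_a - section_b)))

def choose_shape(img_a, img_b, selection_rules):
    """selection_rules maps shape parameters, e.g. (diameter, steepness)
    tuples, to (coords_a, coords_b) lookup tables. Chooses the shape
    whose sections deviate least from one another."""
    scores = {shape: evaluate_shape(img_a, img_b, ca, cb)
              for shape, (ca, cb) in selection_rules.items()}
    return min(scores, key=scores.get)
```

  • In a setup with more than two cameras, the per-pair scores could be accumulated per shape template before choosing, matching the quality number described for method 900 below.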
  • Fig. 9 shows an illustration of a sequence of a method 900 for generating a plurality of selection rules in accordance with an embodiment of the present invention.
  • The method 900 includes a step 902 of precalculating, a step 904 of calibrating, a step 906 of determining, a step 908 of determining and a step 910 of calculating.
  • In step 902 of precalculating, 3-D shape templates with different parameters are calculated in advance.
  • In step 904 of calibrating, the cameras of a camera system are externally and internally calibrated.
  • In step 906 of determining, the camera overlap areas for all precalculated 3-D shape templates are projected into the respective camera images.
  • In step 908 of determining, coordinates of the corresponding image areas in the respective camera planes are determined.
  • In step 910 of calculating, parameters of an interpolation rule, for example a bilinear interpolation, are calculated, with the aid of which the corresponding value from the second camera image can be calculated for each pixel of an image area of a first camera image.
  • The results of the method 900 are stored in a memory or a look-up table, as in the sketch below.
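  • A hedged sketch of this offline precomputation; the projection helpers stand in for the calibrated cameras of step 904 and are assumptions, and the resulting lookup table is exactly the kind of structure consumed by the runtime sketch above:

```python
import numpy as np

def precompute_selection_rules(shape_params_list, overlap_points_3d,
                               project_to_cam_a, project_to_cam_b):
    """Offline part of method 900 (sketch only): for each 3-D shape
    template, take the 3-D points of the camera overlap area on that
    template and project them into both camera planes, storing the
    resulting pixel coordinates as a lookup table per shape.

    overlap_points_3d(params) -> (N, 3) points on the template surface;
    project_to_cam_* map a 3-D point to a pixel coordinate and stand in
    for the externally and internally calibrated cameras."""
    lookup = {}
    for params in shape_params_list:
        pts = overlap_points_3d(params)
        coords_a = np.array([project_to_cam_a(p) for p in pts], dtype=np.int64)
        coords_b = np.array([project_to_cam_b(p) for p in pts], dtype=np.int64)
        lookup[params] = (coords_a, coords_b)  # the selection rule for this shape
    return lookup
```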
  • The information stored in this way can be applied to camera images 110, 112, 114, 116 of the all-round view in accordance with the approach presented here.
  • The camera images 110, 112, 114, 116 can be processed at runtime. In this case, in a step 804 of comparing, the image contents of the camera images 110, 112, 114, 116 are compared for the overlapping areas of the respective image pairs, for example using SAD and/or SSD, and a quality number is determined for each shape template. Based on this, in a step 806 of choosing, the shape template with the best quality number is chosen as the 3-D environment model to be used.
  • Fig. 10 shows a representation of steps 802, 804 of a method 800 for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images 110, 112 according to an embodiment of the present invention.
  • Steps 802, 804 correspond to the steps in Fig. 8.
  • The camera images 110, 112 are processed.
  • In each case, one partial area per camera image 110, 112 and per shape of the projection surface to be evaluated is processed.
  • A partial area 1000, 1002, 1004, 1006 may be referred to as a section or region of interest. The rest of the camera images 110, 112 is disregarded or masked out.
  • The partial areas 1000, 1002, 1004, 1006 are selected using selection rules 1008, 1010 that have been precalculated based on the shape of the respective projection surface. Thus, the selection rules 1008, 1010 have no relation to objects 1012 that are depicted in the camera images 110, 112. A contour of the partial areas 1000, 1002, 1004, 1006 depends only on the overlapping region, as shown in Figs. 5 and 6.
  • In step 804 of comparing, the image contents of the sections 1000, 1002, 1004, 1006 are used to determine which shape of the projection surface better matches the displayed objects 1012.
  • For this, the image content of the first section 1000 from the first camera image 110 is compared with the image content of the second section 1002 from the second camera image 112.
  • The comparison takes place pixel by pixel, since each image pixel of the first section 1000 has a corresponding image pixel in the second section 1002.
  • Intensity values of the corresponding image pixels are compared with one another and an intensity difference is calculated in each case.
  • The evaluation criterion is calculated from the intensity differences of all image pixels. The more similar the image contents of the sections 1000 and 1002 are, the smaller the intensity differences.
  • The same procedure is followed for the further first section 1004 and the further second section 1006. Using the calculated evaluation criteria, the better matching shape of the projection surface is then chosen for the projection.
  • In other words, the shape of the "bowl", that is, a dish or bowl-like projection surface, is adapted during the runtime of the rendering by projecting the overlap areas 1000, 1002, 1004, 1006 of the bowl into the images 110, 112, subsequently comparing them and selecting from predefined bowl shapes.
  • In this way, the shape of the bowl is adjusted as efficiently as possible during runtime.
  • First, at least one area of the bowl is selected in which an image overlap exists. In current systems, these are typically four areas.
  • The area 1000, 1002, 1004, 1006 is projected into the two images 110, 112 of the cameras that overlap in the respective area 1000, 1002, 1004, 1006.
  • The image content of the two projections of each area 1000, 1002, 1004, 1006 is compared.
  • The comparison is made on the basis of their shape, structure or feature content.
  • The comparison is preferably carried out in all four overlapping areas 1000, 1002, 1004, 1006.
  • Subsequently, a predefined bowl shape is chosen from a stored shape library according to best-fit criteria.
  • The approach presented here makes it possible to dispense with computation-intensive stereo comparisons in the overlap areas 1000, 1002, 1004, 1006. This saves computing capacity.
  • The method 800 presented here may be performed on a graphics processor.
  • If an exemplary embodiment comprises an "and/or" link between a first feature and a second feature, this is to be read as meaning that the exemplary embodiment has, according to one embodiment, both the first feature and the second feature and, according to a further embodiment, either only the first feature or only the second feature.

Abstract

The invention relates to a method (800) for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images. The method (800) comprises a step of selecting, a step of comparing and a step (806) of choosing. In the step of selecting, a first section (1000) of a first camera image is selected using a first selection rule (1008). In addition, at least one further first section (1004) of the first camera image (110) is selected using at least a second selection rule (1010). Furthermore, a second section (1002) of a second camera image is selected using the first selection rule (1008). Moreover, at least one further second section (1006) of the second camera image is selected using the second selection rule (1010), the first selection rule (1008) representing a first predefined shape of the projection surface while the second selection rule (1010) represents a second predefined shape of the projection surface. In the step of comparing, an image content (1012) of the first section (1000) is compared with an image content (1012) of the second section (1002) using a processing rule in order to obtain a first evaluation criterion of the first shape. Furthermore, an image content (1012) of the further first section (1004) is compared with an image content (1012) of the further second section (1006) using the processing rule in order to obtain a second evaluation criterion of the second shape. In the step of choosing, the shape for the projection surface is chosen whose evaluation criterion represents the smaller deviation of the image contents (1012) from one another.
PCT/EP2015/052156 2014-04-02 2015-02-03 Method and device for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images WO2015149970A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102014206246.2A DE102014206246A1 (de) 2014-04-02 2014-04-02 Verfahren und Vorrichtung zum Anpassen einer dreidimensionalen Projektionsfläche zum Projizieren einer Mehrzahl benachbarter Kamerabilder
DE102014206246.2 2014-04-02

Publications (1)

Publication Number Publication Date
WO2015149970A1 (fr)

Family

ID=52462310

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/052156 WO2015149970A1 (fr) Method and device for adapting a three-dimensional projection surface for projecting a plurality of adjacent camera images

Country Status (2)

Country Link
DE (1) DE102014206246A1 (fr)
WO (1) WO2015149970A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102021214952A1 (de) 2021-12-22 2023-06-22 Robert Bosch Gesellschaft mit beschränkter Haftung Verfahren zur Anzeige einer virtuellen Ansicht einer Umgebung eines Fahrzeugs, Computerprogramm, Steuergerät und Fahrzeug


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012073836A (ja) 2010-08-30 2012-04-12 Fujitsu Ten Ltd 画像表示システム、画像処理装置、および、画像表示方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6393162B1 (en) * 1998-01-09 2002-05-21 Olympus Optical Co., Ltd. Image synthesizing apparatus
EP2192552A1 (fr) * 2008-11-28 2010-06-02 Fujitsu Limited Appareil de traitement d'images, procédé de traitement d'images et support d'enregistrement
WO2013016409A1 (fr) * 2011-07-26 2013-01-31 Magna Electronics Inc. Système de vision pour véhicule
DE102012018326A1 (de) * 2012-09-15 2014-03-20 DSP-Weuffen GmbH Verfahren und Vorrichtung für ein bildgebendes Fahrerassistenzsystem mit verdeckungsfreier Umsichtfunktion

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015221340A1 (de) * 2015-10-30 2017-05-04 Conti Temic Microelectronic Gmbh Vorrichtung und Verfahren zur Bereitstellung einer Fahrzeugumgebungsansicht für ein Fahrzeug
US20170120822A1 (en) * 2015-10-30 2017-05-04 Conti Temic Microelectronic Gmbh Device and Method For Providing a Vehicle Environment View For a Vehicle
US10266117B2 (en) 2015-10-30 2019-04-23 Conti Temic Microelectronic Gmbh Device and method for providing a vehicle environment view for a vehicle
DE102015221340B4 (de) * 2015-10-30 2021-02-25 Conti Temic Microelectronic Gmbh Vorrichtung und Verfahren zur Bereitstellung einer Fahrzeugumgebungsansicht für ein Fahrzeug

Also Published As

Publication number Publication date
DE102014206246A1 (de) 2015-10-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15703055

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase
122 Ep: pct application non-entry in european phase

Ref document number: 15703055

Country of ref document: EP

Kind code of ref document: A1