US20190279008A1 - Visual surround view system for monitoring vehicle interiors - Google Patents
- Publication number
- US20190279008A1 (application US 16/288,888)
- Authority
- US
- United States
- Prior art keywords
- vehicle interior
- cam
- img
- cameras
- camera
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- G06K9/00832—
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/008—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles allowing the driver to see passengers, e.g. for busses
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/04—Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/50—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30268—Vehicle interior
Definitions
- the present disclosure relates to the field of driver assistance systems, in particular a monitoring system for vehicle interiors.
- Driver assistance systems are electronic devices for motor vehicles intended to support the driver regarding safety aspects and to increase driving ease.
- So-called awareness assistants (also referred to as “driver state detection systems” or “drowsiness detection systems”) belong to driver assistance systems.
- Such awareness assistants comprise sensor systems for monitoring the driver, which follow the movements and eyes of the driver, for example, thus detecting drowsiness or distraction, and potentially issuing a warning signal.
- There are also driver assistance systems that monitor the vehicle interior.
- In order for the person responsible for driving to be able to oversee the state of the vehicle interior, these systems have one or more cameras that monitor the interior.
- a system for monitoring a vehicle interior based on infrared beams is known from the German patent specification DE 4 406 906 A1.
- Parking and maneuvering assistants for detecting the close proximity of the vehicle are also known. Parking and maneuvering assistants normally comprise a rear view camera. Modern parking and maneuvering assistants, also referred to as surround-view systems, also comprise, in addition to a rear view camera, other wide-angle cameras, e.g. on the front and sides of the vehicle. Such surround-view systems generate an image of the vehicle from a bird's eye view, thus from above. Such a surround-view system is known for example from the European patent application EP 2 511 137 A1.
- German patent specification DE 10 2015 205 507 B3 discloses a surround-view system for a vehicle in this regard, with numerous evaluation units for processing images recorded by numerous exterior cameras.
- the evaluation unit generates a bird's eye view from the recorded images by computing a projection of the recorded images onto a virtual projection surface, and by computing a bird's eye view of the projection onto the virtual projection surface.
- the object of the present invention is to create a driver assistance system that further increases the safety of the vehicle.
- a device comprising a processing unit configured to project at least one camera image from at least one camera onto a virtual projection surface, in order to create a virtual view of the vehicle interior.
- the one or more cameras can be black-and-white or color cameras, stereo cameras, or time-of-flight cameras.
- the cameras preferably have wide-angle lenses.
- the cameras can be distributed, for example, such that every area of the vehicle interior lies within the perspective of at least one camera. Typical seating positions of the passengers can be taken into account, such that the passengers do not block certain views, or only block them to a minimal extent.
- the camera images are composed of numerous pixels, each of which define a grey value, a color value, or depth of field information.
- the vehicle interior can be the entire vehicle interior, for example, or part of the vehicle interior, e.g. the loading region of a transporter, or the living area of a mobile home.
- the interior of larger vehicles, such as transporters, campers, and SUVs, as well as normal passenger cars is often difficult to see, and cannot be registered immediately, because of passengers, loads, luggage, etc. It is also difficult for the driver to oversee the interior while driving because he must pay attention to the street in front of him, and cannot look back. Furthermore, a child may become unbuckled from its safety belt during travel, and for this reason, it is not sufficient for the driver, or the person responsible for driving (in an autonomous vehicle), to examine the interior only once when commencing the drive.
- In an autonomous vehicle, as well as in larger vehicles, e.g. camper vehicles, loads or heavy objects might be moved by passengers during travel. As a result, hazards may arise during travel because someone is not buckled in, or loads are not correctly secured.
- the virtual view or virtual image of the vehicle interior generated by means of the present invention can be transmitted from the processing unit to a display, for example, and displayed therein.
- the display can be a display screen, for example, located in the interior of the vehicle such that the driver can see it.
- the virtual image of the vehicle interior displayed on the display can assist the driver, for example, in checking whether all of the passengers have reached their seats and are buckled in, and whether there are dangerous objects in the vehicle interior.
- the advantage of a virtual depiction of the interior in this manner is that the virtual camera can be placed at an arbitrary position, e.g. even outside the vehicle, or at positions that are difficult to access inside the vehicle, such that an image of the interior of the vehicle can be generated that provides a particularly comprehensive view thereof.
- With a virtual camera placed high above the vehicle, a bird's eye view of the vehicle interior can be generated, for example, resembling the image from a camera with a telephoto lens located above the vehicle interior, i.e. eliminating the distortions that occur in wide-angle images recorded by cameras close to the objects they record.
- In this way, a virtual image of the vehicle interior can be obtained from a bird's eye view, providing the driver with a clear view of the vehicle interior.
- a potentially dangerous object on a table, or a child in the back who has become unbuckled can be identified immediately.
- the images from numerous cameras are combined with this method to form a virtual image, thus improving the clarity for the driver.
- the virtual image of the vehicle interior displayed on the monitor is generated in accordance with the exemplary embodiments described below, in that one or more camera images from respective cameras are projected onto a virtual projection surface.
- the projection surface can be a model of the vehicle interior, for example, in particular a surface composed of numerous rectangular or polygonal surfaces, or curved surfaces, such as spheres and hemispheres, etc. thus representing a vehicle interior.
- the model of the vehicle interior serves as a projection surface for the camera image data.
- the projection surface does not necessarily have to be a plane. Instead, the projection surface can be a two dimensional surface, for example, embedded in a three dimensional space. Rectangles or polygons, or curved surfaces that form 2D surfaces can be arranged at certain angles in relation to one another, such that the 2D surface represents a vehicle interior.
- the surfaces forming the projection surface can be interconnected, such that a uniform projection surface is formed.
- the projection surface can be composed of separate projection sub-surfaces, each of which is composed of one or more projection surfaces.
- the projection of the at least one camera image can be obtained, for example, through virtual optical imaging processes, e.g. in that light beams from the respective pixels of the camera image intersect with the projection surface.
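The intersection of a pixel's light beam with a planar patch of the projection surface amounts to a ray–plane intersection. The following is an illustrative sketch, not part of the disclosure; the function name, the patch representation (point plus normal), and the numerical camera pose are assumptions:

```python
import numpy as np

def project_pixel_to_surface(cam_pos, pixel_world, plane_point, plane_normal):
    """Intersect the ray from the camera centre through a pixel (given in
    world coordinates on the image plane) with a planar patch of the
    projection surface.  Returns the intersection point, or None if the
    ray misses the patch's plane."""
    direction = pixel_world - cam_pos
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the patch
    t = np.dot(plane_normal, plane_point - cam_pos) / denom
    if t < 0:
        return None  # patch lies behind the camera
    return cam_pos + t * direction

# Assumed example: camera 1 m above the floor, floor patch is the plane z = 0.
cam = np.array([0.0, 0.0, 1.0])
pix = np.array([0.1, 0.0, 0.9])  # a pixel position on the image plane
hit = project_pixel_to_surface(cam, pix, np.array([0.0, 0.0, 0.0]),
                               np.array([0.0, 0.0, 1.0]))
# hit is the projection of the pixel onto the floor patch
```

Repeating this for every pixel of a camera image yields its projection onto the surface patches of the interior model.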
- a 3D model of the vehicle interior is supplied with current camera data, such that the 3D model comprises current textures and color data.
- the processing unit can be a processor or the like.
- the processing unit can be the central processing unit of an on-board computer of a vehicle, which can also assume other functions in the vehicle, in addition to generating a virtual image of the vehicle interior.
- the processing unit can, however, also be a component dedicated to generating a virtual image of the vehicle interior.
- the processing unit is configured to generate a virtual image of the image projected onto the virtual projection surface.
- a virtual image of the vehicle interior can be reconstructed from the camera images projected onto the virtual projection surface by means of virtual optical technologies.
- the virtual image can be generated, for example, in that the camera images projected onto the projection surface are “photographed” by a virtual camera.
- This virtual photography can take place in turn by means of optical imaging processes, e.g. in that light beams from the respective pixels of the virtual camera image intersect with the projection surface.
- the processing unit can thus be configured, for example, to “film” a reconstructed 3D scenario of the vehicle interior recorded by a virtual camera (or a virtual observer) based on the camera images.
- Both the position and the orientation of this camera can be selected for this.
- a virtual position in the center, above the vehicle, with a camera pointing straight down, or a virtual position somewhat to the side, with a diagonal orientation can be selected. It is also possible for the user of the system to determine the position and orientation of the virtual camera himself, e.g. via a touchscreen.
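The virtual “photographing” step can be illustrated as an ordinary pinhole projection of the points on the projection surface into the image plane of the virtual camera. This is a sketch under assumptions: the function name `photograph`, the rotation-matrix pose representation, and the bird's-eye example pose are not taken from the disclosure:

```python
import numpy as np

def photograph(points, cam_pos, R, f=1.0):
    """Project 3D points (lying on the projection surface) into the image
    plane of a virtual pinhole camera.  R rotates world coordinates into
    the camera frame; f is the focal length.  Returns 2D image points."""
    pts_cam = (np.asarray(points) - cam_pos) @ R.T  # world -> camera frame
    # Perspective division: the camera looks along its +z axis.
    return f * pts_cam[:, :2] / pts_cam[:, 2:3]

# Assumed bird's-eye pose: virtual camera 5 m above the floor, looking down.
cam_pos = np.array([0.0, 0.0, 5.0])
# Rotation mapping the world's "down" direction onto the camera's viewing axis.
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])
floor_points = np.array([[1.0, 2.0, 0.0], [-1.0, 0.5, 0.0]])
img = photograph(floor_points, cam_pos, R)
```

Changing `cam_pos` and `R` corresponds to the user selecting a different virtual observation point and orientation, e.g. via a touchscreen.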
- the processing unit can also be configured to project numerous camera images from respective cameras onto the virtual projection surface, in order to create a composite virtual image. This can be accomplished in that the numerous camera images are combined to form a composite camera image by means of “stitching” technologies, known to the person skilled in the art, which is then projected onto the virtual projection surface. “Stitching” technologies can comprise alpha blending, for example, meaning that the pixel values of overlapping camera images are combined in weighted proportions.
- the camera images can be projected individually onto the projection surface, and these individual projections of the images can then be combined by means of the “stitching” technologies to form a composite projection, which is then “photographed” by the virtual camera.
- the camera images can be projected individually and “photographed,” in order to then combine the virtual camera images by means of “stitching” technologies to obtain a virtual composite camera image.
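Alpha blending in the overlap region of two projected images amounts to a per-pixel weighted average. A minimal sketch; the linear seam ramp and the patch sizes are assumed examples, not taken from the disclosure:

```python
import numpy as np

def blend(img_a, img_b, alpha):
    """Alpha-blend two overlapping projected images: alpha gives the
    per-pixel weight of img_a, (1 - alpha) the weight of img_b."""
    return alpha[..., None] * img_a + (1.0 - alpha[..., None]) * img_b

# Two 2x2 RGB patches from the overlap region of neighbouring cameras.
a = np.full((2, 2, 3), 200.0)
b = np.full((2, 2, 3), 100.0)
# Example weights across the seam: full weight for camera A on the left
# of the first row, camera B on the right, equal mix in the second row.
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.5]])
out = blend(a, b, alpha)
```

A smooth alpha ramp across the seam hides the transition between the individual camera projections in the composite image.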
- the virtual projection surface can be derived from a model of the vehicle interior, in particular a 3D model of the vehicle interior.
- the model of the vehicle interior can comprise both static and temporary components of the vehicle interior.
- a current 3D model of the static components (e.g. seats, tables, etc.) and moving or temporary components (passengers, luggage, etc.) can be generated from the images recorded by the cameras during the runtime of the processing unit.
- the 3D model of the vehicle interior can comprise a collection of three dimensional coordinates, pixels, surfaces, or other geometric shapes.
- the processing unit can also be configured to detect common features of an object in numerous camera images, in order to generate a 3D model of the vehicle interior.
- the detection of common features of an object can take place, for example, by correlating camera images with one another.
- a common feature can be a correlated pixel or correlated group of pixels, or certain structural or color patterns in the camera images.
- camera images can be correlated with one another in order to identify correlating pixels or features, wherein the person skilled in the art can make use of known image correlating methods, such as those described by Olivier Faugeras et al. in the research paper “Real-time correlation-based stereo: algorithm, implementations and applications,” RR-2013, INRIA 1993.
- two camera images can be correlated with one another. In order to increase the precision of the reconstruction, more than two camera images can also be correlated with one another.
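One common way to identify correlating pixels is block matching with normalised cross-correlation, in the spirit of the correlation methods the description refers to (e.g. Faugeras et al.). The brute-force sketch below is illustrative only; function names and the synthetic test images are assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_patch(patch, image):
    """Exhaustively slide a patch over an image and return the top-left
    corner of the best-matching window."""
    ph, pw = patch.shape
    best, best_pos = -2.0, None
    for y in range(image.shape[0] - ph + 1):
        for x in range(image.shape[1] - pw + 1):
            score = ncc(patch, image[y:y + ph, x:x + pw])
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

# Synthetic example: a feature patch "seen" by the first camera is
# located again in the second camera's image.
rng = np.random.default_rng(0)
img2 = rng.random((20, 20))
patch = img2[5:10, 8:13].copy()
pos = match_patch(patch, img2)  # → (5, 8)
```

Real implementations restrict the search (e.g. to epipolar lines) and use pyramid schemes for speed; the exhaustive search above only shows the principle.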
- the processing unit can be configured to reconstruct the vehicle interior from current camera images using stereoscopic technologies.
- the generation of the 3D model can thus comprise a reconstruction of the three dimensional position of an object, e.g. a pixel or feature, by means of stereoscopic technologies.
- the 3D model of the vehicle interior obtained in this manner can be in the form of a collection of the three dimensional coordinates of all of the pixels identified in the correlation process.
- this collection of three dimensional points can also be approximated by surfaces in order to obtain a 3D model with surface areas.
- the processing unit can also be configured to generate the model of the vehicle interior taking into account depth of field information, provided by at least one of the cameras.
- depth of field information is provided, for example, by stereoscopic cameras or time-of-flight cameras.
- Such cameras supply depth of field information for individual pixels, which can be referenced with the pixel coordinates in order to generate the model.
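Per-pixel depth referenced with the pixel coordinates can be back-projected into a set of 3D points using the camera intrinsics. A sketch assuming a pinhole model with focal lengths `fx`, `fy` and principal point `(cx, cy)`; these names and the toy depth map are assumptions for illustration:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (e.g. from a time-of-flight camera) into
    camera-frame 3D points using pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# Toy 2x2 depth image: 1 m everywhere, principal point in the corner.
pts = depth_to_points(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

The resulting point set can feed directly into the 3D model of the vehicle interior, complementing or replacing the stereoscopic reconstruction.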
- the model of the vehicle interior from which the projection surface is derived can be a predefined reference model of the vehicle interior, for example.
- the processing unit can also be configured to combine such a predefined reference model of the vehicle interior with a model of the vehicle interior obtained from current stereoscopic camera images.
- the processing unit can be configured to create at least portions of the model of the vehicle interior taking the reference model into account.
- the processing unit can recreate static or permanent components of the 3D model by detecting common features of an object in numerous images, taking into account an existing 3D reference model of the static or permanent vehicle interior.
- a collection of three dimensional coordinates of pixels or other features identified in a correlation process can be compared with an existing 3D reference model of the vehicle interior, in order to determine where the objects currently in the vehicle interior correlate to or deviate from the 3D model.
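Such a comparison can be realised, for example, as a nearest-neighbour distance test between the reconstructed points and the reference model's points. The function name `deviating_points`, the brute-force search, and the tolerance value are illustrative assumptions:

```python
import numpy as np

def deviating_points(points, reference, tol=0.05):
    """Return the reconstructed points whose nearest reference-model point
    is farther away than tol -- candidates for moved or added objects
    (brute-force nearest neighbour for illustration)."""
    points = np.asarray(points)
    reference = np.asarray(reference)
    # Pairwise distance matrix of shape (n_points, n_reference).
    d = np.linalg.norm(points[:, None, :] - reference[None, :, :], axis=-1)
    return points[d.min(axis=1) > tol]

ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # static interior model
cur = np.array([[0.01, 0.0, 0.0], [0.5, 0.5, 0.5]])  # current reconstruction
extra = deviating_points(cur, ref)  # only the second point deviates
```

Points passing the tolerance test are explained by the static interior; the remainder indicate temporary components such as luggage or passengers.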
- a reference model of the vehicle interior comprises static components of the vehicle interior, for example, such as permanent objects in the vehicle interior.
- the 3D reference model can comprise seats, tables, interior walls, fixtures, etc.
- the reference model preferably also takes into account changes to the vehicle interior resulting from rotating, moving, or tilting the seats and other adjustable interior elements of the vehicle.
- the reference model can be stored in the processing unit, for example.
- a reference model of the (permanent) vehicle interior without loads and passengers, can be stored in a memory, e.g. a RAM, ROM, Flash, or SSD or hard disk memory of the processing unit for preparing and processing the individual camera images to form an overview comprising the entire interior.
- the reference model can be generated in advance, for example in a calibration phase of an interior monitoring system, from camera images taken by the interior cameras.
- the reference model can also comprise 3D data regarding the vehicle interior, provided by the vehicle manufacturer, and stored in the processing unit. These can relate to structural details of a vehicle model known from the production process, such as grid models of interior components such as seats and fixtures, or grid models of vehicle walls and panels.
- the 3D reference model can also define colors and textures of surfaces, in order to obtain a particularly realistic model of the vehicle interior.
- the exemplary embodiments described in greater detail below also disclose a monitoring system for a vehicle interior, comprising one or more cameras and a device, as described above, configured to create a virtual image of the vehicle interior based on one or more camera images from the cameras.
- the interior monitoring system can also comprise a communication system, via which the processing unit receives data from the interior cameras, for example, and transmits data, e.g. the generated virtual image of the vehicle interior, to the display device.
- the communication system can be a CAN bus for a motor vehicle, for example.
- the exemplary embodiments described in greater detail below also relate to a method comprising the projection of at least one camera image from at least one camera onto a virtual projection surface, in order to create a virtual image of the vehicle interior.
- the method can comprise all of the processes that have been described above in conjunction with a processing unit or a monitoring system for a vehicle interior.
- the methods described herein can also be executed as computer programs.
- the exemplary embodiments thus also relate to a computer program comprising instructions for projecting at least one camera image from at least one camera onto a virtual projection surface when the program is executed, in order to create a virtual image of the vehicle interior.
- the computer program can comprise all of the processes that have been described above in conjunction with a processing unit or a monitoring system for a vehicle interior.
- the computer program can be stored in a memory, for example.
- FIG. 1 shows a schematic top view of a vehicle equipped with an interior monitoring system
- FIG. 2 shows a block diagram of the interior monitoring system
- FIG. 3 shows a schematic example of a virtual projection surface
- FIG. 4 shows an exemplary process for projecting a camera image onto a virtual projection surface
- FIG. 5 a shows a flow chart for a process for generating a model of the vehicle interior from numerous camera images of the vehicle interior;
- FIG. 5 b shows a flow chart for an alternative process for generating a model of the vehicle interior from numerous camera images of the vehicle interior;
- FIG. 6 shows a process for correlating two camera images, in order to identify correlating pixels
- FIG. 7 shows an exemplary process for reconstructing the three dimensional position of a pixel by means of stereoscopic technologies
- FIG. 8 shows a schematic example of the comparison of a collection of three dimensional coordinates of pixels identified in the correlation process with a 3D reference model
- FIG. 9 shows how an image of a 3D model of the vehicle interior can be obtained by means of a virtual camera.
- FIG. 10 shows an example of a virtual image of the vehicle interior from a bird's eye view.
- FIG. 1 shows a schematic bird's eye view of a vehicle 1 , in this case a mobile home, by way of example, equipped with an interior monitoring system.
- the interior monitoring system comprises an exemplary arrangement of interior cameras Cam 1 -Cam 8 .
- Two of the interior cameras Cam 1 , Cam 2 are located at the front end of the vehicle interior 2
- two of the cameras Cam 3 , Cam 4 are located on the right side of the vehicle interior 2
- two interior cameras Cam 5 , Cam 6 are located at the back end of the vehicle interior 2
- two interior cameras Cam 7 , Cam 8 are located on the left side of the vehicle interior 2 .
- Each interior camera Cam 1 -Cam 8 records a portion of the interior 2 of the vehicle 1 .
- the exemplary equipping of the vehicle 1 with interior cameras is configured such that the interior cameras Cam 1 -Cam 8 can observe the entire vehicle interior even when several people are in the vehicle.
- the permanent interior fittings of the vehicle interior include four seats S 1 - 4 and a table T.
- the cameras can be black-and-white or color cameras with wide angle lenses, for example.
- FIG. 2 shows a block diagram, schematically illustrating the interior monitoring system.
- the interior monitoring system comprises a processing unit 3 and a display 4 .
- the images recorded by the various cameras Cam 1 -Cam 8 are transmitted via a communication system 5 (e.g. a CAN bus) to the processing unit 3 for processing in the processing unit 3 .
- the processing unit 3 is configured to correlate camera images in order to generate a 3D model of the interior, as is shown in FIGS. 3, 4, and 5 , and shall be explained in greater detail below.
- the processing unit 3 then recreates a bird's eye view of the vehicle interior 2 .
- the reproduction of the bird's eye view is then shown on the display 4 , which is located inside the vehicle 1 where it can be seen by a driver.
- the vehicle 1 can be a mobile home, for example.
- the bird's eye view shown on the display 4 can help the driver check whether all of the passengers have taken seats, and are buckled in, and whether there are any dangerous objects in the vehicle interior.
- FIG. 3 shows a schematic example of a virtual projection surface derived from a 3D model of the vehicle interior.
- the projection surface comprises surfaces F 1 , F 2 , F 3 , F 4 of a 3D model of the vehicle interior, representing a floor of the vehicle interior.
- the projection surface comprises a surface F 5 of a 3D model of the vehicle interior that represents a side wall of the vehicle interior, as well as surfaces F 6 , F 7 , F 8 of a 3D model of the vehicle interior that represent a table in the vehicle interior.
- a portion of the projection surface Proj, or the model of the vehicle interior, is shown in FIG. 3 .
- the surfaces F 1 to F 8 are combined to form a (two dimensional) projection surface Proj of the vehicle interior, which can be used as a virtual projection surface for camera images, as shall be explained in greater detail in reference to FIG. 4 .
- a (two dimensional) projection surface of the vehicle interior can be defined manually, for example, and stored in a memory in the processing unit as a reference model.
- a 2D projection surface can also be derived or reconstructed from a 3D CAD model of the vehicle interior, as is obtained from a construction process for the vehicle.
- a projection surface of the vehicle interior can also be obtained with stereoscopic methods, as shall be explained in reference to FIGS. 5 a , 5 b , 6 , and 7 in greater detail below.
- a collection of 3D points can be determined with stereoscopic methods that represent a 2D surface of the vehicle interior, as is shown, for example, in the aforementioned article by Olivier Faugeras et al. in the research report, “Real-time correlation-based stereo: algorithm, implementations and applications,” RR-2013, INRIA 1993.
- the projection surface Proj can also be derived from a composite model, which is generated by combining a predefined reference model with a current, reconstructed model, as shall be explained in greater detail below in reference to FIG. 5 .
- FIG. 4 shows an exemplary process for projecting a camera image Img 1 onto a virtual projection surface Proj.
- the virtual projection surface Proj can be one or more of the surfaces F 1 to F 8 shown in FIG. 3 .
- the projection surface Proj represents a model of the vehicle interior, defining a two dimensional surface embedded in a three dimensional space, not yet containing any textural or color information.
- a light beam OS 1 or OS 2 is calculated for each pixel P 1 , P 2 of the camera image Img 1 from the known position and orientation of the camera Cam 1 , and the likewise known position and orientation of the image plane of the camera image Img 1 .
- the pixels P 1 and P 2 correspond to the image of a passenger M 1 in the vehicle interior generated by the camera Cam 1 .
- the intersection of the two light beams OS 1 and OS 2 with the virtual projection surface Proj produces projections P 1 ′ and P 2 ′ of the pixels P 1 and P 2 .
- these projections of the camera image compose a model of the vehicle interior that defines a two dimensional surface containing textural and color information.
- virtual images P 1 ′′ and P 2 ′′ of points P 1 and P 2 can be obtained in a virtual image plane ImgV of the virtual camera CamV.
- a virtual image ImgV of the portion of the vehicle interior recorded by the camera Cam 1 can be generated on the virtual projection surface Proj from the perspective of the virtual camera CamV.
- the process shown in FIG. 4 can be carried out for all of the surfaces of the vehicle interior model (e.g. F 1 to F 8 in FIG. 3 ) in order to generate a projection of the camera image Img 1 on the 2D surface of the vehicle interior.
- the virtual image plane ImgV and the virtual camera CamV are used for computing a view of the scenario, as would only be possible in reality from an observation point located far above the vehicle.
- the process shown in FIG. 4 can furthermore be carried out for camera images from numerous, or all, of the interior cameras (e.g. Cam 1 to Cam 8 in FIG. 1 and FIG. 2 ), in order to obtain a complete image of the vehicle interior. If numerous cameras are used, positioned on different sides of the vehicle, as shown in FIG. 1 , numerous camera images are recorded simultaneously, resulting in numerous images calculated for the virtual observer represented by the virtual camera CamV. For this reason, the processing unit 3 is also configured to combine the numerous images to form a single, seamlessly composed bird's eye view, which is then shown on the display 4 .
- the processing unit 3 can be configured to combine numerous images to form a composite image by means of technologies known to the person skilled in the art as “stitching.”
- the projection onto the projection surface, as well as the bird's eye view, can be calculated based on the individual composite images.
- the processing unit 3 is configured to combine numerous projection images of the recorded images to form a single projection image on the virtual projection surface by means of “stitching” technologies, and then compute a bird's eye view based on this single projection image.
- FIG. 5 a shows a flow chart for a process for generating a model of the vehicle interior from numerous camera images of the vehicle interior.
- the process can be carried out in the processing unit ( 3 in FIG. 2 ), for example.
- In step 501 , camera images Img 1 -Img 8 recorded by the interior cameras (Cam 1 to Cam 8 in FIGS. 1 and 2 ) are correlated with one another in order to identify correlating pixels in the camera images Img 1 -Img 8 .
- In step 502 , a 3D model Mod3D of the vehicle interior is reconstructed from the information obtained in step 501 regarding correlating pixels.
- FIG. 5 b shows a flow chart for an alternative process for generating a model of the vehicle interior from numerous camera images of the vehicle interior and an existing model of the vehicle interior.
- the process can be carried out in the processing unit ( 3 in FIG. 2 ), for example.
- In step 501 , camera images Img 1 -Img 8 recorded by the interior cameras (Cam 1 - 8 in FIGS. 1 and 2 ) are correlated with one another in order to identify correlating pixels in the camera images Img 1 -Img 8 .
- In step 502 , a 3D model Mod3Da of the vehicle interior is reconstructed from the information obtained in step 501 regarding correlating pixels.
- the reconstructed model Mod3Da is combined with a predefined 3D reference model Mod3Db of the vehicle interior (e.g. with a 3D-CAD model obtained from the assembly process for the vehicle) in order to obtain a definitive 3D model Mod3D of the vehicle interior.
- the combining of the reconstructed model with the reference model can comprise the detection of objects that have been displaced with respect to the reference model, as shall be explained in greater detail below in reference to FIG. 8 . Additionally or alternatively, the combining of the reconstructed model with the reference model can also comprise a detection of additional objects that are not contained in the reference model, e.g. luggage or passengers in the vehicle interior.
- FIG. 6 shows, by way of example, a process for correlating two camera images in order to identify correlating pixels.
- Two interior cameras with known positions and orientations provide a first camera image Img1 and a second camera image Img2. These can be the camera images Img1 and Img2 from the two interior cameras Cam1 and Cam2 in FIG. 1.
- The positions and orientations of the two cameras are different, such that the two camera images Img1 and Img2 show an exemplary object Obj from two different perspectives.
- Each of the camera images Img1 and Img2 is composed of individual pixels corresponding to the resolution and color depth of the camera.
- The two camera images Img1 and Img2 are correlated with one another in order to identify correlating pixels, wherein the person skilled in the art can make use of known image correlation methods such as those specified above. It is determined in the correlation process that an object Obj is recorded in camera image Img1 and in camera image Img2, and that pixel P1 in camera image Img1 correlates to pixel P2 in camera image Img2.
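- By way of illustration, such a pixel correlation can be sketched as a patch search using normalized cross-correlation. The following minimal Python example is an assumption for illustration only: images are represented as nested lists of grayscale intensities, and the function names are hypothetical rather than part of the disclosed system.

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches."""
    flat_a = [p for row in patch_a for p in row]
    flat_b = [p for row in patch_b for p in row]
    mean_a = sum(flat_a) / len(flat_a)
    mean_b = sum(flat_b) / len(flat_b)
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(flat_a, flat_b))
    den_a = math.sqrt(sum((a - mean_a) ** 2 for a in flat_a))
    den_b = math.sqrt(sum((b - mean_b) ** 2 for b in flat_b))
    if den_a == 0 or den_b == 0:
        return 0.0
    return num / (den_a * den_b)

def best_match(patch, image, patch_size):
    """Slide `patch` over `image` and return the top-left corner (x, y)
    with the highest correlation score -- the correlating pixel position."""
    h, w = len(image), len(image[0])
    best_pos, best_score = None, -2.0
    for y in range(h - patch_size + 1):
        for x in range(w - patch_size + 1):
            candidate = [row[x:x + patch_size] for row in image[y:y + patch_size]]
            score = ncc(patch, candidate)
            if score > best_score:
                best_pos, best_score = (x, y), score
    return best_pos, best_score
```

A patch around pixel P1 in Img1 would be searched for in Img2 in this manner; real systems restrict the search to the epipolar line for efficiency.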
- The position of the object Obj in camera image Img1 differs from the position of the object Obj in camera image Img2 due to the different camera positions and orientations.
- The form of the image of the object Obj in the second camera image likewise differs from the form of the image of the object in the first camera image due to the different perspective.
- The position of the pixel in three dimensional space can be determined from the different positions of the pixel P1 in image Img1 in comparison with pixel P2 in image Img2 by means of stereoscopic technologies (cf. FIG. 7 and the description below).
- The correlation process provides the positions of numerous pixels belonging to objects in the vehicle interior, from which a 3D model of the vehicle interior can be reconstructed.
- FIG. 7 shows an exemplary process for reconstructing the three dimensional position of a pixel by means of stereoscopic technologies.
- A light beam OS1 or OS2 is computed for each pixel P1, P2 from the known positions and orientations of the two cameras Cam1 and Cam2, and the likewise known positions and orientations of the image planes of the camera images Img1 and Img2.
- The intersection of the two light beams OS1 and OS2 provides the three dimensional position P3D of the object point that forms the pixels P1 and P2 in the two camera images Img1 and Img2.
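- The intersection step can be illustrated as follows. Since two computed light beams rarely intersect exactly in practice, a common approach, shown here as an illustrative assumption rather than the disclosed method, is to take the midpoint of the shortest segment connecting the two rays:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def triangulate(origin1, dir1, origin2, dir2):
    """Reconstruct the 3D position P3D as the midpoint of the shortest
    segment between the two viewing rays OS1 and OS2.
    Each ray: p = origin + t * dir."""
    # Closed-form closest points between two (possibly skew) lines.
    w0 = sub(origin1, origin2)
    a, b, c = dot(dir1, dir1), dot(dir1, dir2), dot(dir2, dir2)
    d, e = dot(dir1, w0), dot(dir2, w0)
    denom = a * c - b * b
    if denom == 0:
        raise ValueError("rays are parallel")
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = tuple(o + t1 * v for o, v in zip(origin1, dir1))
    p2 = tuple(o + t2 * v for o, v in zip(origin2, dir2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))
```

The ray origins would be the camera centres of Cam1 and Cam2, and the directions would pass through the pixels P1 and P2 in the respective image planes.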
- Two camera images are evaluated in the exemplary process shown in FIG. 7, in order to determine the three dimensional position of two correlated pixels.
- The images from individual pairs of cameras Cam1/Cam2, Cam3/Cam4, Cam5/Cam6, or Cam7/Cam8 can be correlated with one another in order to generate the 3D model.
- Numerous camera images can be correlated with one another. If three or more camera images are to be correlated with one another, a first camera image can be selected as the reference image, in relation to which a disparity chart is calculated for each of the other camera images. The disparity charts are then combined in that the correlation with the best results is selected.
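- The selection among several disparity charts can be sketched as a per-pixel choice of the best-scoring correlation. The representation of disparity charts and correlation-score charts as nested lists below is an illustrative assumption:

```python
def fuse_disparities(disparity_maps, score_maps):
    """Per pixel, keep the disparity whose correlation score is best.
    disparity_maps / score_maps: lists of equally sized 2D lists, one
    pair per secondary camera correlated against the reference image."""
    h, w = len(disparity_maps[0]), len(disparity_maps[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Index of the camera pair with the strongest correlation here.
            best = max(range(len(score_maps)), key=lambda i: score_maps[i][y][x])
            fused[y][x] = disparity_maps[best][y][x]
    return fused
```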
- The model of the vehicle interior obtained in this manner can be regarded as a collection of the three dimensional coordinates of all of the pixels identified in the correlation process. Furthermore, this collection of three dimensional points can also be approximated by surfaces, in order to obtain a model with surfaces. The collection of three dimensional coordinates of all of the pixels identified in the correlation process can also be compared with an existing reference model of the vehicle interior, in order to determine, for the objects currently in the vehicle interior, correlations with and deviations from the reference model.
- FIG. 8 shows an example in which two vehicle seats S1 and S2 defined in the reference model are identified by comparing the collection of three dimensional coordinates of pixels identified in the correlation process with the reference model.
- A displacement Dis of the vehicle seat S1 from its standard position S1sdt can be detected from the deviations of the three dimensional coordinates of the first seat S1 determined from the camera images.
- An adjustment of the backrests of seats S1 and S2 from their standard tilt can also be detected.
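- Such a displacement detection can be illustrated, for example, by comparing the centroid of the reconstructed seat points with the standard position from the reference model. The threshold value and function names below are hypothetical, chosen only for this sketch:

```python
def centroid(points):
    """Mean position of a collection of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def seat_displacement(observed_points, reference_position, threshold=0.05):
    """Compare the centroid of the seat's reconstructed 3D points with its
    standard position from the reference model; report the displacement
    vector Dis if it exceeds `threshold` (illustratively in metres)."""
    c = centroid(observed_points)
    dis = tuple(c[i] - reference_position[i] for i in range(3))
    magnitude = sum(d * d for d in dis) ** 0.5
    return dis if magnitude > threshold else None
```

A backrest-tilt check could work analogously, comparing the principal axis of the backrest points with its reference orientation.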
- FIG. 9 shows how an image of a model of the vehicle interior can be obtained by means of a virtual camera CamV.
- The image can be shown to the driver, for example, such that the driver can easily see that the vehicle interior is in a safe state.
- A three dimensional position P3D of a point of an object in the vehicle interior can be reconstructed from two images recorded by two cameras Cam1 and Cam2 by means of stereoscopic technologies. The color data of the pixel is projected onto this point.
- A virtual image P of the point P3D can be determined in a virtual image plane ImgV of the virtual camera CamV.
- The virtual image plane ImgV and the virtual camera CamV are used to calculate a view of the scenario (a "photograph"), which would only be possible in reality from an observation point located high above the vehicle.
- A virtual image ImgV of the model of the vehicle interior, with the color information from the actual cameras projected onto it, can be generated from the perspective of the virtual camera CamV.
- A surface F of a seat S1 detected in the model estimated from the collection of three dimensional points can likewise be "photographed" onto the image plane ImgV of the virtual camera CamV.
- The seat S1 and the reconstructed point P3D represent, by way of example, a portion of a 3D model or a projection surface onto which textural and color information from the physical cameras is projected. In this manner, a virtual image of the vehicle interior from a bird's eye view can be obtained, providing the driver with a clear overview of the vehicle interior.
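- The "photographing" of a reconstructed point by the virtual camera CamV can be sketched with a simple pinhole projection. The following example assumes a virtual camera looking straight down along the z axis; the focal length, coordinate convention, and function name are illustrative assumptions:

```python
def project_to_virtual_image(point3d, cam_pos, focal_length):
    """Project a reconstructed 3D point into the image plane of a virtual
    camera placed at `cam_pos`, looking straight down (bird's eye view).
    Pinhole model: image coordinates scale with focal length over depth."""
    x, y, z = (point3d[i] - cam_pos[i] for i in range(3))
    depth = -z  # camera looks along the negative z axis
    if depth <= 0:
        raise ValueError("point is behind the virtual camera")
    return (focal_length * x / depth, focal_length * y / depth)
```

Applying this to every textured point of the model yields the virtual image ImgV; the position `cam_pos` corresponds to the freely selectable observation point high above the vehicle.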
- FIG. 10 shows an example of a virtual image ImgV of the vehicle interior from a bird's eye view, such as would be obtained from the method described above.
- The image ImgV of the vehicle interior is determined, for example, from eight cameras in the vehicle interior, as described in reference to FIG. 1.
- A model of the vehicle interior is generated from the correlation of the camera images in accordance with the process described in reference to FIG. 6 and FIG. 7, which is then "photographed" by a virtual camera (CamV in FIG. 9) in accordance with the process shown in FIG. 9.
- The 3D model constructed from the camera images comprises points, surfaces, or other geometric shapes belonging to seats S1, S2, S3, and S4 in the vehicle interior.
- The 3D model constructed from the images also comprises points, surfaces, or other geometric shapes that describe a table T, as well as points, surfaces, or other geometric shapes that describe people M1, M2, M3 in the vehicle interior.
- The virtual image ImgV of the 3D model of the vehicle interior shows the driver that the person M3 has stood up while the vehicle is in motion, in violation of the safety guidelines, and is consequently not buckled into the seat S1.
- The virtual image ImgV of the vehicle interior also shows the driver that there is an unsecured object Obj on the table T, which could fly around in the vehicle interior if the brakes were applied suddenly, thus jeopardizing the safety of the people M1, M2, M3 in the vehicle interior.
- The image ImgV of the vehicle interior in FIG. 10 illustrates a possible image of the vehicle interior from above that could not be taken by an actual camera (Cam1-Cam8 in FIG. 1) due to the perspective.
- The computed image from the camera system can be displayed on any display screen or in an app on a smartphone.
- Both color and black-and-white cameras are used in the exemplary embodiments explained above, in order to obtain recordings of the vehicle interior.
- Time-of-flight cameras or stereo cameras can also be used. These cameras also provide additional depth of field information for the object being recorded, such that the 3D model of the interior can be obtained more easily and more robustly at the run-time of the system, without requiring any feature points.
- Simple black-and-white or color cameras are typically less expensive than time-of-flight or stereo cameras.
Description
- The present disclosure relates to the field of driver assistance systems, in particular a monitoring system for vehicle interiors.
- Driver assistance systems are electronic devices for motor vehicles intended to support the driver with regard to safety aspects and to increase driving comfort.
- So-called awareness assistants (also referred to as "driver state detection systems" or "drowsiness detection systems") are among such driver assistance systems. Such awareness assistants comprise sensor systems for monitoring the driver, which follow the movements and eyes of the driver, for example, thus detecting drowsiness or distraction, and potentially issuing a warning signal.
- There are also driver assistance systems that monitor the vehicle interior. In order for the person responsible for driving to be able to oversee the state of the vehicle interior, these systems have one or more cameras that monitor the interior. A system for monitoring a vehicle interior based on infrared beams is known from the German
patent specification DE 4 406 906 A1. - Parking and maneuvering assistants for monitoring the immediate surroundings of the vehicle are also known. Parking and maneuvering assistants normally comprise a rear view camera. Modern parking and maneuvering assistants, also referred to as surround-view systems, also comprise, in addition to a rear view camera, other wide-angle cameras, e.g. on the front and sides of the vehicle. Such surround-view systems generate an image of the vehicle from a bird's eye view, thus from above. Such a surround-view system is known for example from the European
patent application EP 2 511 137 A1. - When numerous cameras observe the exterior of the vehicle from numerous perspectives, and each individual camera image is displayed individually and unprocessed, this makes the information difficult for the driver to review and observe. The German patent specification DE 10 2015 205 507 B3 discloses a surround-view system for a vehicle in this regard, with numerous evaluation units for processing images recorded by numerous exterior cameras. The evaluation unit generates a bird's eye view from the recorded images by computing a projection of the recorded images onto a virtual projection surface, and by computing a bird's eye view of the projection onto the virtual projection surface.
- The object of the present invention is to create a driver assistance system that further increases the safety of the vehicle.
- This object is achieved by the inventive device according to
claim 1, the inventive monitoring system according to claim 9, and the inventive method according to claim 10. Further advantageous embodiments of the invention can be derived from the dependent claims and the following description of preferred exemplary embodiments of the present invention. - According to the exemplary embodiments described below, a device is created that comprises a processing unit configured to project at least one camera image from at least one camera onto a virtual projection surface, in order to create a virtual view of the vehicle interior.
- The one or more cameras can be black-and-white or color cameras, stereo cameras, or time-of-flight cameras. The cameras preferably have wide-angle lenses. The cameras can be distributed, for example, such that every area of the vehicle interior lies within the perspective of at least one camera. Typical seating positions of the passengers can be taken into account, such that the passengers do not block certain views, or only block them to a minimal extent. The camera images are composed of numerous pixels, each of which define a grey value, a color value, or depth of field information.
- The vehicle interior can be the entire vehicle interior, for example, or part of the vehicle interior, e.g. the loading region of a transporter, or the living area of a mobile home. The interior of larger vehicles, such as transporters, campers, and SUVs, as well as normal passenger cars, is often difficult to see, and cannot be registered immediately, because of passengers, loads, luggage, etc. It is also difficult for the driver to oversee the interior while driving, because he must pay attention to the street in front of him and cannot look back. Furthermore, a child may become unbuckled from its safety belt during travel, and for this reason it is not sufficient for the driver, or the person responsible for driving (in an autonomous vehicle), to examine the interior only once when commencing the drive. With an autonomous vehicle, as well as with larger vehicles, e.g. camper vehicles, loads or heavy objects might be moved by passengers during travel. As a result, hazards may arise during travel because someone is not buckled in, or loads are not correctly secured.
- The virtual view or virtual image of the vehicle interior generated by means of the present invention can be transmitted from the processing unit to a display, for example, and displayed therein. The display can be a display screen, for example, located in the interior of the vehicle such that the driver can see it. The virtual image of the vehicle interior displayed on the display can assist the driver, for example, in checking whether all of the passengers have reached their seats and are buckled in, and whether there are dangerous objects in the vehicle interior.
- The advantage of a virtual depiction of the interior in this manner is that the virtual camera can be placed at an arbitrary position, e.g. even outside the vehicle, or at positions that are difficult to access inside the vehicle, such that an image of the interior of the vehicle can be generated that provides a particularly comprehensive view thereof. With a virtual camera placed high above the vehicle, a bird's eye view of the vehicle interior can be generated, for example, like that from a camera with a telephoto lens located above the vehicle interior, i.e. eliminating distortions that occur with wide-angle images recorded by cameras that are close to the object they are recording. In this manner, a virtual image of the vehicle interior can be obtained from a bird's eye view, providing the driver with a clear view of the vehicle interior. By way of example, a potentially dangerous object on a table, or a child in the back who has become unbuckled, can be identified immediately. Furthermore, the images from numerous cameras are combined with this method to form a virtual image, thus improving the clarity for the driver.
- The virtual image of the vehicle interior displayed on the monitor is generated in accordance with the exemplary embodiments described below, in that one or more camera images from respective cameras are projected onto a virtual projection surface.
- The projection surface can be a model of the vehicle interior, for example, in particular a surface composed of numerous rectangular or polygonal surfaces, or curved surfaces, such as spheres and hemispheres, etc., thus representing a vehicle interior. The model of the vehicle interior serves as a projection surface for the camera image data. The projection surface does not necessarily have to be a plane. Instead, the projection surface can be a two dimensional surface, for example, embedded in a three dimensional space. Rectangles or polygons, or curved surfaces that form 2D surfaces can be arranged at certain angles in relation to one another, such that the 2D surface represents a vehicle interior. The surfaces forming the projection surface can be interconnected, such that a uniform projection surface is formed. Alternatively, the projection surface can be composed of separate projection sub-surfaces, each of which is composed of one or more projection surfaces.
- The projection of the at least one camera image can be obtained, for example, through virtual optical imaging processes, e.g. in that light beams from the respective pixels of the camera image intersect with the projection surface. By projecting one or more camera images, a 3D model of the vehicle interior is supplied with current camera data, such that the 3D model comprises current textures and color data.
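- The intersection of a pixel's light beam with the projection surface can be illustrated as a ray-plane intersection, assuming each surface patch of the model is planar. The function below is a generic geometric sketch with hypothetical names, not the specific implementation of the disclosure:

```python
def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect the light beam of one pixel (a ray from the camera
    centre through the pixel) with a planar patch of the projection
    surface, e.g. one of the surfaces F1 to F8 in FIG. 3."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-12:
        return None  # ray runs parallel to the surface patch
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    if t < 0:
        return None  # intersection lies behind the camera
    return tuple(o + t * d for o, d in zip(origin, direction))
```

The returned intersection point is where the pixel's texture and color data would be deposited on the projection surface.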
- The processing unit can be a processor or the like. The processing unit can be the central processing unit of an on-board computer of a vehicle, which can also assume other functions in the vehicle, in addition to generating a virtual image of the vehicle interior. The processing unit can, however, also be a dedicated component for generating a virtual image of the vehicle interior.
- According to a preferred embodiment, the processing unit is configured to generate a virtual image of the image projected onto the virtual projection surface. A virtual image of the vehicle interior can be reconstructed from the camera images projected onto the virtual projection surface by means of virtual optical technologies. The virtual image can be generated, for example, in that the camera images projected onto the projection surface are "photographed" by a virtual camera. This virtual photography can take place in turn by means of optical imaging processes, e.g. in that light beams from the respective pixels of the virtual camera image intersect with the projection surface. The processing unit can thus be configured, for example, to "film" a reconstructed 3D scenario of the vehicle interior recorded by a virtual camera (or a virtual observer) based on the camera images. Both the position and the orientation of this camera can be selected for this. By way of example, a virtual position in the center, above the vehicle, with a camera pointing straight down, or a virtual position somewhat to the side, with a diagonal orientation, can be selected. It is also possible for the user of the system to determine the position and orientation of the virtual camera himself, e.g. via a touchscreen.
- The processing unit can also be configured to project numerous camera images from respective cameras onto the virtual projection surface, in order to create a composite virtual image. This can be accomplished in that the numerous camera images are combined to form a composite camera image by means of "stitching" technologies known to the person skilled in the art, which is then projected onto the virtual projection surface. "Stitching" technologies can comprise alpha blending, for example, meaning that weighted portions of the pixel values are taken from each of the overlapping camera images. Alternatively, the camera images can be projected individually onto the projection surface, and these individual projections of the images can then be combined by means of the "stitching" technologies to form a composite projection, which is then "photographed" by the virtual camera. Alternatively, the camera images can be projected individually and "photographed," in order to then combine the virtual camera images by means of "stitching" technologies to obtain a virtual composite camera image.
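- Alpha blending in an overlap region can be sketched as a per-pixel weighted sum of two camera projections. The nested-list image representation and per-pixel alpha map below are illustrative assumptions:

```python
def alpha_blend(img_a, img_b, alpha_map):
    """Blend two overlapping camera projections pixel by pixel:
    result = alpha * a + (1 - alpha) * b.  In a seam region, alpha
    typically ramps from 1 to 0 so the transition between the two
    cameras is smooth."""
    return [
        [alpha * a + (1.0 - alpha) * b
         for a, b, alpha in zip(row_a, row_b, row_alpha)]
        for row_a, row_b, row_alpha in zip(img_a, img_b, alpha_map)
    ]
```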
- The virtual projection surface can be derived from a model of the vehicle interior, in particular a 3D model of the vehicle interior. The model of the vehicle interior can comprise both static and temporary components of the vehicle interior. Thus, a current 3D model of the static (i.e. seats or tables, etc.) and moving or temporary components (passengers, luggage, etc.) can be generated from the images recorded by the camera during the runtime of the processing unit. The 3D model of the vehicle interior can comprise a collection of three dimensional coordinates, pixels, surfaces, or other geometric shapes.
- The processing unit can also be configured to detect common features of an object in numerous camera images, in order to generate a 3D model of the vehicle interior. The detection of common features of an object can take place, for example, by correlating camera images with one another. A common feature can be a correlated pixel or correlated group of pixels, or certain structural or color patterns in the camera images. By way of example, camera images can be correlated with one another in order to identify correlating pixels or features, wherein the person skilled in the art can make use of known image correlation methods, such as those described by Olivier Faugeras et al. in the research paper "Real-time correlation-based stereo: algorithm, implementations and applications," RR-2013, INRIA 1993. By way of example, two camera images can be correlated with one another. In order to increase the precision of the reconstruction, more than two camera images can also be correlated with one another.
- The processing unit can be configured to reconstruct the vehicle interior from current camera images using stereoscopic technologies. The generation of the 3D model can thus comprise a reconstruction of the three dimensional position of an object, e.g. a pixel or feature, by means of stereoscopic technologies. The 3D model of the vehicle interior obtained in this manner can be in the form of a collection of the three dimensional coordinates of all of the pixels identified in the correlation process. In addition, this collection of three dimensional points can also be approximated by surfaces in order to obtain a 3D model with surface areas.
- The processing unit can also be configured to generate the model of the vehicle interior taking into account depth of field information, provided by at least one of the cameras. Such depth of field information is provided, for example, by stereoscopic cameras or time-of-flight cameras. Such cameras supply depth of field information for individual pixels, which can be referenced with the pixel coordinates in order to generate the model.
- The model of the vehicle interior from which the projection surface is derived can be a predefined reference model of the vehicle interior, for example.
- The processing unit can also be configured to combine such a predefined reference model of the vehicle interior with a model of the vehicle interior obtained from current stereoscopic camera images. The processing unit can be configured to create at least portions of the model of the vehicle interior taking the reference model into account. By way of example, the processing unit can recreate static or permanent components of the 3D model by detecting common features of an object in numerous images, taking into account an existing 3D reference model of the static or permanent vehicle interior. By way of example, a collection of three dimensional coordinates of pixels or other features identified in a correlation process can be compared with an existing 3D reference model of the vehicle interior, in order to determine where the objects currently in the vehicle interior correlate to or deviate from the 3D model.
- A reference model of the vehicle interior comprises static components of the vehicle interior, for example, such as permanent objects in the vehicle interior. The 3D reference model can comprise seats, tables, interior walls, fixtures, etc. The reference model preferably also takes into account changes to the vehicle interior resulting from rotating, moving, or tilting the seats and other static interior elements of the vehicle.
- As a result, when a vehicle seat has been moved from a reference position or the backrest has been adjusted, this can be derived from a change in the three dimensional coordinates of a seat determined from the camera images.
- The reference model can be stored in the processing unit, for example. As such, a reference model of the (permanent) vehicle interior, without loads and passengers, can be stored in a memory, e.g. a RAM, ROM, Flash, or SSD or hard disk memory of the processing unit for preparing and processing the individual camera images to form an overview comprising the entire interior.
- The reference model can be generated in advance, for example in a calibration phase of an interior monitoring system, from camera images taken by the interior cameras. Alternatively, the reference model can also comprise 3D data regarding the vehicle interior, provided by the vehicle manufacturer, and stored in the processing unit. These can relate to structural details of a vehicle model known from the production process, such as grid models of interior components such as seats and fixtures, or grid models of vehicle walls and panels. The 3D reference model can also define colors and textures of surfaces, in order to obtain a particularly realistic model of the vehicle interior.
- The exemplary embodiments described in greater detail below also disclose a monitoring system for a vehicle interior, comprising one or more cameras and a device, as described above, configured to create a virtual image of the vehicle interior based on one or more camera images from the cameras. The interior monitoring system can also comprise a communication system, via which the processing unit receives data from the interior cameras, for example, and transmits data, e.g. the generated virtual image of the vehicle interior, to the display device. The communication system can be a CAN bus for a motor vehicle, for example.
- The exemplary embodiments described in greater detail below also relate to a method comprising the projection of at least one camera image from at least one camera onto a virtual projection surface, in order to create a virtual image of the vehicle interior. The method can comprise all of the processes that have been described above in conjunction with a processing unit or a monitoring system for a vehicle interior.
- The methods described herein can also be executed as computer programs. The exemplary embodiments thus also relate to a computer program comprising instructions for projecting at least one camera image from at least one camera onto a virtual projection surface when the program is executed, in order to create a virtual image of the vehicle interior. The computer program can comprise all of the processes that have been described above in conjunction with a processing unit or a monitoring system for a vehicle interior. The computer program can be stored in a memory, for example.
- Embodiments are described only by way of example and in reference to the attached drawings, in which:
-
FIG. 1 shows a schematic top view of a vehicle equipped with an interior monitoring system; -
FIG. 2 shows a block diagram of the interior monitoring system; -
FIG. 3 shows a schematic example of a virtual projection surface; -
FIG. 4 shows an exemplary process for projecting a camera image onto a virtual projection surface; -
FIG. 5a shows a flow chart for a process for generating a model of the vehicle interior from numerous camera images of the vehicle interior; -
FIG. 5b shows a flow chart for an alternative process for generating a model of the vehicle interior from numerous camera images of the vehicle interior; -
FIG. 6 shows a process for correlating two camera images, in order to identify correlating pixels; -
FIG. 7 shows an exemplary process for reconstructing the three dimensional position of a pixel by means of stereoscopic technologies; -
FIG. 8 shows a schematic example of the comparison of a collection of three dimensional coordinates of pixels identified in the correlation process with a 3D reference model; -
FIG. 9 shows how an image of a 3D model of the vehicle interior can be obtained by means of a virtual camera; and -
FIG. 10 shows an example of a virtual image of the vehicle interior from a bird's eye view. -
FIG. 1 shows a schematic bird's eye view of a vehicle 1, in this case a mobile home, by way of example, equipped with an interior monitoring system. The interior monitoring system comprises an exemplary arrangement of interior cameras Cam1-Cam8. Two of the interior cameras Cam1, Cam2 are located at the front end of the vehicle interior 2, two of the cameras Cam3, Cam4 are located on the right side of the vehicle interior 2, two interior cameras Cam5, Cam6 are located at the back end of the vehicle interior 2, and two interior cameras Cam7, Cam8 are located on the left side of the vehicle interior 2. Each interior camera Cam1-Cam8 records a portion of the interior 2 of the vehicle 1. The exemplary equipping of the vehicle 1 with interior cameras is configured such that the interior cameras Cam1-Cam8 can observe the entire vehicle interior, even when several people are in the vehicle. Furthermore, the permanent interior fittings of the vehicle interior include four seats S1-S4 and a table T. The cameras can be black-and-white or color cameras with wide-angle lenses, for example. -
FIG. 2 shows a block diagram, schematically illustrating the interior monitoring system. In addition to the cameras Cam1-Cam8, the interior monitoring system comprises a processing unit 3 and a display 4. The images recorded by the various cameras Cam1-Cam8 are transmitted via a communication system 5 (e.g. a CAN bus) to the processing unit 3 for processing in the processing unit 3. The processing unit 3 is configured to correlate camera images in order to generate a 3D model of the interior, as is shown in FIGS. 3, 4, and 5, and shall be explained in greater detail below. The processing unit 3 then recreates a bird's eye view of the vehicle interior 2. The reproduction of the bird's eye view is then shown on the display 4, which is located inside the vehicle 1 where it can be seen by a driver. The vehicle 1 can be a mobile home, for example. The bird's eye view shown on the display 4 can help the driver check whether all of the passengers have taken seats, and are buckled in, and whether there are any dangerous objects in the vehicle interior. -
FIG. 3 shows a schematic example of a virtual projection surface derived from a 3D model of the vehicle interior. The projection surface comprises surfaces F1, F2, F3, F4 of a 3D model of the vehicle interior, representing a floor of the vehicle interior. Furthermore, the projection surface comprises a surface F5 of a 3D model of the vehicle interior that represents a side wall of the vehicle interior, as well as surfaces F6, F7, F8 of a 3D model of the vehicle interior that represent a table in the vehicle interior. A portion of the projection surface Proj, or the model of the vehicle interior, is shown in FIG. 3. The surfaces F1 to F8 are combined to form a (two dimensional) projection surface Proj of the vehicle interior, which can be used as a virtual projection surface for camera images, as shall be explained in greater detail in reference to FIG. 4. Such a (two dimensional) projection surface of the vehicle interior can be defined manually, for example, and stored in a memory in the processing unit as a reference model. Alternatively, such a 2D projection surface can also be derived or reconstructed from a 3D CAD model of the vehicle interior, as is obtained from the construction process for the vehicle. Alternatively, a projection surface of the vehicle interior can also be obtained with stereoscopic methods, as shall be explained in greater detail below in reference to FIGS. 5a, 5b, 6, and 7. By way of example, a collection of 3D points that represent a 2D surface of the vehicle interior can be determined with stereoscopic methods, as is shown, for example, in the aforementioned article by Olivier Faugeras et al. in the research report, "Real-time correlation-based stereo: algorithm, implementations and applications," RR-2013, INRIA 1993.
In particular, the projection surface Proj can also be derived from a composite model, which is generated by combining a predefined reference model with a current, reconstructed model, as shall be explained in greater detail below in reference to FIG. 5. -
FIG. 4 shows an exemplary process for projecting a camera image Img1 onto a virtual projection surface Proj. The virtual projection surface Proj can be one or more of the surfaces F1 to F8 shown in FIG. 3. The projection surface Proj represents a model of the vehicle interior, defining a two dimensional surface embedded in a three dimensional space, not yet containing any textural or color information. A light beam OS1 or OS2 is calculated for each pixel P1, P2 of the camera image Img1 from the known position and orientation of the camera Cam1, and the likewise known position and orientation of the image plane of the camera image Img1. The pixels P1 and P2 correspond to the image of a passenger M1 in the vehicle interior generated by the camera Cam1. The intersection of the two light beams OS1 and OS2 with the virtual projection surface Proj produces projections P1′ and P2′ of the pixels P1 and P2. These projections of the camera image compose a model of the vehicle interior that defines a two dimensional surface containing textural and color information. By computing the light beams OS1′ and OS2′ through these projections P1′ and P2′ with respect to a virtual camera CamV, virtual images P1″ and P2″ of points P1 and P2 can be obtained in a virtual image plane ImgV of the virtual camera CamV. When this process is carried out for all of the pixels of the camera image Img1, a virtual image ImgV of the portion of the vehicle interior recorded by the camera Cam1 can be generated on the virtual projection surface Proj from the perspective of the virtual camera CamV. - The process shown in
FIG. 4 can be carried out for all of the surfaces of the vehicle interior model (e.g. F1 to F8 in FIG. 3) in order to generate a projection of the camera image Img1 on the 2D surface of the vehicle interior. The virtual image plane ImgV and the virtual camera CamV are used for computing a view of the scenario, as would only be possible in reality from an observation point located far above the vehicle. - The process shown in
FIG. 4 can furthermore be carried out for camera images from numerous, or all, of the interior cameras (e.g. Cam1 to Cam8 in FIG. 1 and FIG. 2), in order to obtain a complete image of the vehicle interior. If numerous cameras are used, positioned on different sides of the vehicle, as shown in FIG. 1, numerous camera images are recorded simultaneously, resulting in numerous images calculated for the virtual observer represented by the virtual camera CamV. For this reason, the processing unit 3 is also configured to combine the numerous images to form a single, seamlessly composed bird's eye view, which is then shown in the display 4. Alternatively, the processing unit 3 can be configured to combine numerous images to form a composite image by means of technologies known to the person skilled in the art as "stitching." The projection onto the projection surface, as well as the bird's eye view, can be calculated based on the individual composite images. Another alternative is that the processing unit 3 is configured to combine numerous projection images of the recorded images to form a single projection image on the virtual projection surface by means of "stitching" technologies, and then compute a bird's eye view based on this single projection image. -
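Where several cameras contribute to the same region of the virtual view, the overlapping images can be merged, for example, by weighted averaging. This is a minimal stand-in for the "stitching" mentioned above (an illustration, not the patented method); image sizes and weight masks are assumed toy values:

```python
import numpy as np

def blend_virtual_images(images, weights):
    """Weighted average of per-camera virtual images; weights are 0 where a
    camera contributes nothing, so overlaps are blended and gaps filled."""
    images = np.asarray(images, dtype=float)    # shape (n_cams, H, W)
    weights = np.asarray(weights, dtype=float)  # shape (n_cams, H, W)
    total = weights.sum(axis=0)
    total[total == 0] = 1.0  # avoid division by zero in uncovered regions
    return (images * weights).sum(axis=0) / total

# Assumed toy example: two 2x4 virtual images overlapping in the middle columns
img_a = np.full((2, 4), 100.0)
img_b = np.full((2, 4), 200.0)
w_a = np.array([[1, 1, 1, 0], [1, 1, 1, 0]], dtype=float)
w_b = np.array([[0, 1, 1, 1], [0, 1, 1, 1]], dtype=float)
composite = blend_virtual_images([img_a, img_b], [w_a, w_b])
print(composite[0])  # [100. 150. 150. 200.]
```

Production stitching additionally handles exposure differences and seam selection, but the weighting principle is the same.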
FIG. 5a shows a flow chart for a process for generating a model of the vehicle interior from numerous camera images of the vehicle interior. The process can be carried out in the processing unit (3 in FIG. 2), for example. In step 501, camera images Img1-Img8 recorded by the interior cameras (Cam1 to Cam8 in FIGS. 1 and 2) are correlated with one another in order to identify correlating pixels in the camera images Img1-Img8. In step 502, a 3D model Mod3D of the vehicle interior is reconstructed from the information obtained in step 501 regarding correlating pixels. -
FIG. 5b shows a flow chart for an alternative process for generating a model of the vehicle interior from numerous camera images of the vehicle interior and an existing model of the vehicle interior. The process can be carried out in the processing unit (3 in FIG. 2), for example. In step 501, camera images Img1-Img8 recorded by the interior cameras (Cam1-8 in FIGS. 1 and 2) are correlated with one another in order to identify correlating pixels in the camera images Img1-Img8. In step 502, a 3D model Mod3Da of the vehicle interior is reconstructed from the information obtained in step 501 regarding correlating pixels. In step 503, the reconstructed model Mod3Da is combined with a predefined 3D reference model Mod3Db of the vehicle interior (e.g. with a 3D CAD model obtained from the assembly process for the vehicle) in order to obtain a definitive 3D model Mod3D of the vehicle interior. The combining of the reconstructed model with the reference model can comprise the detection of objects that have been displaced with respect to the reference model, as shall be explained in greater detail below in reference to FIG. 8. Additionally or alternatively, the combining of the reconstructed model with the reference model can also comprise a detection of additional objects that are not contained in the reference model, e.g. luggage or passengers in the vehicle interior. By combining the 3D reference model with the reconstructed model, a better model is obtained because not only reference data, but also current data regarding surfaces or objects in the vehicle interior, can be taken into account. -
FIG. 6 shows, by way of example, a process for correlating two camera images in order to identify correlating pixels. Two interior cameras with known positions and orientations provide a first camera image Img1 and a second camera image Img2. These can be camera images Img1 and Img2 from the two interior cameras Cam1 and Cam2 in FIG. 1. The positions and orientations of the two cameras are different, such that the two camera images Img1 and Img2 show an exemplary object Obj from two different perspectives. Each of the camera images Img1 and Img2 is composed of individual pixels corresponding to the resolution and color depth of the camera. The two camera images Img1 and Img2 are correlated with one another in order to identify correlating pixels, wherein the person skilled in the art can make use of known image correlation methods such as those specified above. It is determined in the correlation process that an object Obj is recorded in camera image Img1 and in camera image Img2, and that pixel P1 in camera image Img1 correlates to pixel P2 in camera image Img2. The position of the object Obj in camera image Img1 differs from the position of the object Obj in camera image Img2 due to the different camera positions and orientations. The form of the image of the object Obj in the second camera image likewise differs from the form of the image of the object in the first camera image due to the different perspective. The position of the pixel in three dimensional space can be determined from the different positions of the pixel P1 in image Img1 in comparison with pixel P2 in image Img2 by means of stereoscopic technologies (cf. FIG. 7 and the description below). As a result, the correlation process provides the positions of numerous pixels from objects in the vehicle interior, from which a 3D model of the vehicle interior can be reconstructed. -
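One common correlation method of the kind referred to above is block matching with normalized cross-correlation (NCC). The sketch below is illustrative only (all names and parameters are assumptions, not from the patent) and searches along the same image row; real implementations search along epipolar lines derived from the camera geometry:

```python
import numpy as np

def match_pixel(img1, img2, p1, patch=3, search=10):
    """Find the pixel in img2 that correlates with pixel p1 in img1 using
    normalized cross-correlation of small patches, searching along the
    same row within +/- `search` columns."""
    r, c = p1
    h = patch // 2
    ref = img1[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best_score, best_c = -np.inf, c
    for c2 in range(max(h, c - search), min(img2.shape[1] - h, c + search + 1)):
        cand = img2[r - h:r + h + 1, c2 - h:c2 + h + 1].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        score = (ref * cand).mean()  # NCC score in [-1, 1]
        if score > best_score:
            best_score, best_c = score, c2
    return (r, best_c)

# Assumed example: the second view shows the same content shifted 4 pixels
rng = np.random.default_rng(1)
img1 = rng.uniform(0, 255, (20, 40))
img2 = np.roll(img1, 4, axis=1)
print(match_pixel(img1, img2, (10, 15)))  # (10, 19): P2 lies 4 columns right of P1
```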
FIG. 7 shows an exemplary process for reconstructing the three dimensional position of a pixel by means of stereoscopic technologies. A light beam OS1 or OS2 is computed for each pixel P1, P2 from the known positions and orientations of the two cameras Cam1 and Cam2, and the likewise known positions and orientations of the image planes of the camera images Img1 and Img2. The intersection of the two light beams OS1 and OS2 provides the three dimensional position P3D of the point forming the pixels P1 and P2 in the two camera images Img1 and Img2. Two camera images are evaluated in the exemplary process shown in FIG. 7, in order to determine the three dimensional position of two correlated pixels. In this manner, the images from individual pairs of cameras Cam1/Cam2, Cam3/Cam4, Cam5/Cam6, or Cam7/Cam8 can be correlated with one another in order to generate the 3D model. In order to increase the reconstruction precision, numerous camera images can be correlated with one another. If three or more camera images are to be correlated with one another, a first camera image can be selected as the reference image, in relation to which a disparity map is calculated for each of the other camera images. The disparity maps are then combined by selecting the correlation with the best results. - The model of the vehicle interior obtained in this manner can be regarded as a collection of the three dimensional coordinates of all of the pixels identified in the correlation process. Furthermore, this collection of three dimensional points can also be approximated by surfaces, in order to obtain a model with surfaces. The collection of three dimensional coordinates of all of the pixels identified in the correlation process can also be compared with an existing reference model of the vehicle interior, in order to determine correlations and deviations of the current objects in the vehicle interior with respect to the reference model.
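In practice the two light beams OS1 and OS2 rarely intersect exactly because of pixel noise, so a common estimate for P3D is the midpoint of the shortest segment between the two beams. This is a sketch under assumed camera positions, not the patent's specific implementation:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Estimate P3D as the midpoint of the shortest segment between two
    light beams, each given by an origin o and a direction d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = np.dot(d1, d2)
    w = o1 - o2
    denom = 1.0 - b * b
    if denom < 1e-12:
        raise ValueError("light beams are parallel")
    # Closed-form solution of the two-ray least-squares problem
    t1 = (b * np.dot(d2, w) - np.dot(d1, w)) / denom
    t2 = (np.dot(d2, w) - b * np.dot(d1, w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Assumed camera positions, both observing the same point (1, 1, 0)
o1 = np.array([0.0, 0.0, 2.0])
o2 = np.array([2.0, 0.0, 2.0])
target = np.array([1.0, 1.0, 0.0])
p3d = triangulate_midpoint(o1, target - o1, o2, target - o2)
print(p3d)  # ≈ [1. 1. 0.]
```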
-
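Approximating the reconstructed point collection by surfaces, as mentioned above, can be done, for example, with a least-squares plane fit. The sketch below (illustrative, not from the patent; the point data is assumed) recovers a surface's centroid and normal via the singular value decomposition:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3D points: returns the centroid
    and the unit normal (singular vector of the smallest singular value)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Assumed example: noisy reconstructed points on a table surface at z = 0.7 m
rng = np.random.default_rng(0)
pts = np.column_stack([
    rng.uniform(0.0, 1.0, 100),
    rng.uniform(0.0, 1.0, 100),
    0.7 + 0.001 * rng.standard_normal(100),
])
centroid, normal = fit_plane(pts)
print(np.abs(normal))  # normal ≈ [0, 0, 1]: a horizontal surface
```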
FIG. 8 shows an example in which two vehicle seats S1 and S2 defined in the reference model are identified by comparing the collection of three dimensional coordinates of pixels identified in the correlation process with the reference model. A displacement Dis of the vehicle seat S1 from its standard position S1std can be detected from the deviations of the three dimensional coordinates of the first seat S1 from the camera images. In the same manner, an adjustment of the backrests of seats S1 and S2 from their standard tilt can also be detected. -
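A displacement such as Dis can be flagged, for example, by comparing the centroid of the seat's reconstructed points against the seat's reference position. The following is a minimal sketch; the threshold, reference position, and point data are assumed values, not taken from the patent:

```python
import numpy as np

def detect_displacement(seat_points, reference_position, threshold=0.05):
    """Flag a seat as displaced when the centroid of its reconstructed 3D
    points deviates from the reference position by more than `threshold`
    metres in the floor plane; returns (displaced, offset vector)."""
    centroid = seat_points.mean(axis=0)
    offset = centroid - reference_position
    displaced = np.linalg.norm(offset[:2]) > threshold
    return bool(displaced), offset

# Assumed example: seat S1 reconstructed ~0.2 m behind its standard position
s1_std = np.array([1.0, 0.5, 0.4])  # assumed reference position S1std
rng = np.random.default_rng(0)
points = s1_std + np.array([0.0, 0.2, 0.0]) + 0.01 * rng.standard_normal((50, 3))
displaced, dis = detect_displacement(points, s1_std)
print(displaced)  # True
```

A backrest-tilt check would compare fitted surface normals instead of centroids, following the same pattern.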
FIG. 9 shows how an image of a model of the vehicle interior can be obtained by means of a virtual camera CamV. The image can be shown to the driver, for example, such that said driver can easily see that the vehicle interior is in a safe state. As explained in reference to FIG. 7, a three dimensional position P3D of a point of an object in the vehicle interior can be reconstructed from two images recorded by two cameras Cam1 and Cam2 by means of stereoscopic technologies. The color data of the pixel is projected onto this point. By calculating the light beam from this point to the virtual camera CamV, a virtual image P of the point P3D can be determined in a virtual image plane ImgV of the virtual camera CamV. The virtual image plane ImgV and the virtual camera CamV are used to calculate a view of the scenario ("photograph"), which would only be possible in reality from an observation point located high above the vehicle. When this process is carried out for all of the pixels identified in the correlation process, a virtual image ImgV of the model of the vehicle interior, with the color information from the actual cameras projected thereon, can be generated from the perspective of the virtual camera CamV. As is likewise shown in FIG. 9, a surface F of a seat S1 detected in the model estimated from the collection of three dimensional points can be photographed on the image plane ImgV of the virtual camera CamV. The seat S1 and the reconstructed point P3D represent, by way of example, a portion of a 3D model or a projection surface onto which textural and color information from the physical cameras is projected. In this manner, a virtual image of the vehicle interior from a bird's eye view can be obtained, providing the driver with a clear overview of the vehicle interior. -
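The "photographing" of a reconstructed point by the virtual camera CamV corresponds to a pinhole projection into the virtual image plane ImgV. A minimal sketch, assuming a downward-looking virtual camera pose and unit focal length (both assumptions for illustration):

```python
import numpy as np

def project_to_virtual_camera(p3d, cam_pos, focal=1.0):
    """Pinhole projection of a reconstructed 3D point into the image plane
    of a virtual camera looking straight down (optical axis along -Z)."""
    rel = p3d - cam_pos   # point expressed in the virtual camera frame
    depth = -rel[2]       # distance along the viewing direction
    if depth <= 0:
        raise ValueError("point lies behind the virtual camera")
    return focal * rel[:2] / depth  # image-plane coordinates of the virtual image

cam_v = np.array([0.0, 0.0, 10.0])  # assumed observation point high above the vehicle
p3d = np.array([1.0, 2.0, 0.0])     # a reconstructed interior point
print(project_to_virtual_camera(p3d, cam_v))  # [0.1 0.2]
```

A general virtual camera pose adds a rotation of `rel` into the camera frame before the same perspective division.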
FIG. 10 shows an example of a virtual image ImgV of the vehicle interior from a bird's eye view, such as would be obtained with the method described above. The image ImgV of the vehicle interior is determined, for example, from eight cameras in the vehicle interior, as described in reference to FIG. 1. A model of the vehicle interior is generated from the correlation of the camera images in accordance with the process described in reference to FIG. 6 and FIG. 7, which is then "photographed" by a virtual camera (CamV in FIG. 7) in accordance with the process shown in FIG. 9. The 3D model constructed from the camera images comprises points, surfaces, or other geometric shapes belonging to seats S1, S2, S3, and S4 in the vehicle interior. These seats S1-S4 can be detected through a comparison with a 3D reference model of the vehicle interior, even if the seats S1-S4 are partially or entirely blocked from the individual cameras by people M1, M2, M3 or other objects in the vehicle interior. The 3D model constructed from the images also comprises points, surfaces, or other geometric shapes that describe a table T, as well as points, surfaces, or other geometric shapes that describe people M1, M2, M3 in the vehicle interior. The virtual image ImgV of the 3D model of the vehicle interior shows the driver that the person M3 has stood up while the vehicle is in motion, in violation of the safety guidelines, and is consequently not buckled into the seat S1. The virtual image ImgV of the vehicle interior also shows the driver that there is an unsecured object Obj on the table T, which could fly around in the vehicle interior if the vehicle braked suddenly, thus jeopardizing the safety of the people M1, M2, M3 in the vehicle interior. The image ImgV of the vehicle interior in FIG. 10 illustrates a possible image of the vehicle interior from above that could not be taken by an actual camera (Cam1-8 in FIG. 1) due to the perspective.
The computed image from the camera system can be displayed on any display screen or in an app on a smartphone. - Both color and black-and-white cameras are used in the exemplary embodiments explained above, in order to obtain recordings of the vehicle interior. Alternatively or additionally, time-of-flight cameras or stereo cameras can also be used. These cameras also provide additional depth information for the object being recorded, such that the 3D model of the interior can be obtained more easily and robustly at the run-time of the system, without requiring any feature points. Simple black-and-white or color cameras, however, are typically less expensive than time-of-flight or stereo cameras.
-
- 1 vehicle
- 2 vehicle interior
- 3 processing unit
- 4 display
- 5 communication system
- Cam1-Cam8 interior cameras
- S1-S4 seats
- S1std reference position of first seat
- Dis displacement of seat
- T table
- Img1-Img8 camera images
-
Mod3D 3D model of vehicle interior - Mod3Da reconstructed 3D model of vehicle interior
- Mod3Db predefined 3D model of vehicle interior
- Obj object in vehicle interior
- P1, P2 pixels in camera images
- P1′, P2′ projected pixels of camera images
- P1″, P2″ virtual images of projected pixels
- P3D reconstructed three dimensional position
- OS1, OS2 light beams regarding first and second cameras
- OS1′, OS2′ light beams regarding virtual camera
- OSV light beam regarding virtual camera
- CamV virtual camera
- ImgV virtual image
- Proj virtual projection surface
- F1-F8 exemplary surfaces of a vehicle interior
- F surface on first seat
- M1, M2, M3 people in vehicle interior
Claims (18)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102018203405.2 | 2018-03-07 | ||
DE102018203405.2A DE102018203405A1 (en) | 2018-03-07 | 2018-03-07 | Visual surround view system for monitoring the vehicle interior |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190279008A1 true US20190279008A1 (en) | 2019-09-12 |
US11210538B2 US11210538B2 (en) | 2021-12-28 |
Family
ID=65408967
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/288,888 Active 2039-07-02 US11210538B2 (en) | 2018-03-07 | 2019-02-28 | Visual surround view system for monitoring vehicle interiors |
Country Status (4)
Country | Link |
---|---|
US (1) | US11210538B2 (en) |
EP (1) | EP3537384A3 (en) |
CN (1) | CN110246211A (en) |
DE (1) | DE102018203405A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10882484B2 (en) * | 2018-02-21 | 2021-01-05 | Denso Corporation | Occupant detection apparatus |
US11176705B2 (en) | 2020-03-10 | 2021-11-16 | Shenzhen Fugui Precision Ind. Co., Ltd. | Method for optimizing camera layout for area surveillance and apparatus employing the method |
US11186233B2 (en) * | 2017-08-02 | 2021-11-30 | Ichikoh Industries, Ltd. | Electric retractable vehicle periphery viewing device |
US11356641B2 (en) * | 2017-05-24 | 2022-06-07 | Audi Ag | External depiction of photographs of a vehicle in interior in VR goggles |
US20230100857A1 (en) * | 2021-09-25 | 2023-03-30 | Kipling Martin | Vehicle remote control system |
CN115937907A (en) * | 2023-03-15 | 2023-04-07 | 深圳市亲邻科技有限公司 | Community pet identification method, device, medium and equipment |
US20230119137A1 (en) * | 2021-10-05 | 2023-04-20 | Yazaki Corporation | Driver alertness monitoring system |
WO2024079039A1 (en) * | 2022-10-11 | 2024-04-18 | Valeo Schalter Und Sensoren Gmbh | Method for operating a driver assistance system for a motor vehicle, driver assistance system for a motor vehicle, and motor vehicle comprising a driver assistance system |
EP4451213A1 (en) * | 2023-04-19 | 2024-10-23 | Aptiv Technologies AG | Depth estimation for interior sensing |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102020210766A1 (en) | 2020-08-25 | 2022-03-03 | Brose Fahrzeugteile Se & Co. Kommanditgesellschaft, Bamberg | Method for operating a motor vehicle having an interior |
DE102020210768A1 (en) | 2020-08-25 | 2022-03-03 | Brose Fahrzeugteile Se & Co. Kommanditgesellschaft, Bamberg | Method for operating a motor vehicle having an interior |
DE102020130113A1 (en) | 2020-11-16 | 2022-05-19 | Audi Aktiengesellschaft | Method for validating and/or adapting a virtual interior model and method for manufacturing a motor vehicle |
DE102022111941B3 (en) | 2022-05-12 | 2023-11-16 | Cariad Se | Method and system for generating images in a motor vehicle |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE4406906A1 (en) | 1994-03-03 | 1995-09-07 | Docter Optik Wetzlar Gmbh | Inside room monitoring device |
WO2004042662A1 (en) * | 2002-10-15 | 2004-05-21 | University Of Southern California | Augmented virtual environments |
SE527257C2 (en) * | 2004-06-21 | 2006-01-31 | Totalfoersvarets Forskningsins | Device and method for presenting an external image |
JP4714763B2 (en) * | 2008-06-13 | 2011-06-29 | 株式会社コナミデジタルエンタテインメント | Image processing program, image processing apparatus, and image control method |
US20110115909A1 (en) * | 2009-11-13 | 2011-05-19 | Sternberg Stanley R | Method for tracking an object through an environment across multiple cameras |
JP5302227B2 (en) * | 2010-01-19 | 2013-10-02 | 富士通テン株式会社 | Image processing apparatus, image processing system, and image processing method |
EP2511137B1 (en) | 2011-04-14 | 2019-03-27 | Harman Becker Automotive Systems GmbH | Vehicle Surround View System |
US9762880B2 (en) * | 2011-12-09 | 2017-09-12 | Magna Electronics Inc. | Vehicle vision system with customized display |
US10210597B2 (en) * | 2013-12-19 | 2019-02-19 | Intel Corporation | Bowl-shaped imaging system |
US10442355B2 (en) * | 2014-09-17 | 2019-10-15 | Intel Corporation | Object visualization in bowl-shaped imaging systems |
DE102015205479A1 (en) * | 2015-03-26 | 2016-09-29 | Robert Bosch Gmbh | A method of representing a vehicle environment of a vehicle |
DE102015205507B3 (en) | 2015-03-26 | 2016-09-29 | Zf Friedrichshafen Ag | Rundsichtsystem for a vehicle |
DE102015209284A1 (en) * | 2015-05-21 | 2016-11-24 | Robert Bosch Gmbh | A method for generating a view of a vehicle environment |
WO2017164835A1 (en) * | 2016-03-21 | 2017-09-28 | Ford Global Technologies, Llc | Virtual vehicle occupant rendering |
WO2018000038A1 (en) * | 2016-06-29 | 2018-01-04 | Seeing Machines Limited | System and method for identifying a camera pose of a forward facing camera in a vehicle |
DE102017000557A1 (en) * | 2017-01-21 | 2017-07-06 | Daimler Ag | Method for monitoring a vehicle interior |
US10252688B2 (en) * | 2017-03-22 | 2019-04-09 | Ford Global Technologies, Llc | Monitoring a vehicle cabin |
US10373316B2 (en) * | 2017-04-20 | 2019-08-06 | Ford Global Technologies, Llc | Images background subtraction for dynamic lighting scenarios |
US10275130B2 (en) * | 2017-05-12 | 2019-04-30 | General Electric Company | Facilitating transitioning between viewing native 2D and reconstructed 3D medical images |
US11106927B2 (en) * | 2017-12-27 | 2021-08-31 | Direct Current Capital LLC | Method for monitoring an interior state of an autonomous vehicle |
-
2018
- 2018-03-07 DE DE102018203405.2A patent/DE102018203405A1/en active Pending
-
2019
- 2019-02-11 EP EP19156463.2A patent/EP3537384A3/en not_active Withdrawn
- 2019-02-28 US US16/288,888 patent/US11210538B2/en active Active
- 2019-03-06 CN CN201910168496.6A patent/CN110246211A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
DE102018203405A1 (en) | 2019-09-12 |
CN110246211A (en) | 2019-09-17 |
EP3537384A2 (en) | 2019-09-11 |
EP3537384A3 (en) | 2019-09-18 |
US11210538B2 (en) | 2021-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11210538B2 (en) | Visual surround view system for monitoring vehicle interiors | |
CN104883554B (en) | The method and system of live video is shown by virtually having an X-rayed instrument cluster | |
JP5072576B2 (en) | Image display method and image display apparatus | |
JP4323377B2 (en) | Image display device | |
EP1462762B1 (en) | Circumstance monitoring device of a vehicle | |
CN111414796A (en) | Adaptive transparency of virtual vehicles in analog imaging systems | |
EP1828979B1 (en) | Method for determining the position of an object from a digital image | |
CN106537904B (en) | Vehicle with an environmental monitoring device and method for operating such a monitoring device | |
JP2018531530A (en) | Method and apparatus for displaying surrounding scene of vehicle / towed vehicle combination | |
JP2018531530A6 (en) | Method and apparatus for displaying surrounding scene of vehicle / towed vehicle combination | |
US20170116710A1 (en) | Merging of Partial Images to Form an Image of Surroundings of a Mode of Transport | |
KR20180117882A (en) | Mtehod of detecting obstacle around vehicle | |
US20150325052A1 (en) | Image superposition of virtual objects in a camera image | |
US11410430B2 (en) | Surround view system having an adapted projection surface | |
CN104185009A (en) | enhanced top-down view generation in a front curb viewing system | |
KR20160145598A (en) | Method and device for the distortion-free display of an area surrounding a vehicle | |
CN104185010A (en) | Enhanced perspective view generation in a front curb viewing system | |
JP2004198212A (en) | Apparatus for monitoring vicinity of mobile object | |
KR20130025005A (en) | Apparatus and method for compensating around image of vehicle | |
JP2016203910A (en) | Occupant detection device and occupant detection method | |
JP2018533098A (en) | Panel conversion | |
CN105793909B (en) | The method and apparatus for generating warning for two images acquired by video camera by vehicle-periphery | |
WO2014181510A1 (en) | Driver state monitoring device and driver state monitoring method | |
CN114290998A (en) | Skylight display control device, method and equipment | |
JP7000383B2 (en) | Image processing device and image processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: ZF FRIEDRICHSHAFEN AG, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ABHAU, JOCHEN;VIEWEGER, WOLFGANG;SIGNING DATES FROM 20190206 TO 20190314;REEL/FRAME:048773/0550 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |