US20210314548A1 - Device comprising a multi-aperture imaging device for generating a depth map - Google Patents


Info

Publication number
US20210314548A1
US20210314548A1 (Application US17/352,744)
Authority
US
United States
Prior art keywords
view
image
information
depth map
image sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/352,744
Other versions
US11924395B2 (en
Inventor
Frank Wippermann
Jacques Duparré
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Assigned to Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. reassignment Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DUPARRÉ, Jacques, WIPPERMANN, FRANK
Publication of US20210314548A1 publication Critical patent/US20210314548A1/en
Application granted granted Critical
Publication of US11924395B2 publication Critical patent/US11924395B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/236Image signal generators using stereoscopic image cameras using a single 2D image sensor using varifocal lenses or mirrors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N5/2226Determination of depth image, e.g. for foreground/background separation
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B5/00Adjustment of optical system relative to image or object surface other than for focusing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/557Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/25Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/55Optical parts specially adapted for electronic image sensors; Mounting thereof
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B2205/00Adjustment of optical system relative to image or object surface other than for focusing
    • G03B2205/0053Driving means for the movement of one or more optical element
    • G03B2205/0061Driving means for the movement of one or more optical element using piezoelectric actuators
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/243Image signal generators using stereoscopic image cameras using three or more 2D image sensors

Definitions

  • the present invention relates to multi-aperture imaging devices, in particular to a device comprising a multi-aperture imaging device.
  • the device is configured to use the information contained within the multi-aperture imaging device to produce a depth map and/or to accumulate image information.
  • the present invention also refers to extracting depth information from focus stacks with an array camera and/or a self-portrait against a modified background with an array camera.
  • Multi-aperture imaging devices can image the object field by using multiple partial fields of view.
  • a beam deflection system, such as a mirror, can be used to deflect the beam paths vertically out of the device.
  • this vertical direction can point toward the user's face or toward the environment in front of him/her, and the switch between the two can essentially be achieved by using switchable hinged mirrors.
  • a device may have: a multi-aperture imaging device comprising: an image sensor; an array of adjacently arranged optical channels, each optical channel comprising optics for projecting at least a partial field of view of a total field of view on an image sensor area of the image sensor, a beam deflector for deflecting a beam path of said optical channels, focusing means for adjusting a focal position of said multi-aperture imaging device; the device further has: control means adapted to control the focusing means and to receive image information from the image sensor; wherein the control means is adapted to set a sequence of focal positions in the multi-aperture imaging device so as to detect a corresponding sequence of image information of the total field of view, and to create a depth map for the detected total field of view on the basis of the sequence of image information.
  • One finding of the present invention is that by capturing a sequence of images at a sequence of focal positions, the depth map can be created from the device's own image information, so that disparity information is not required and its use can possibly be dispensed with.
  • a device comprises a multi-aperture imaging device.
  • the multi-aperture imaging device comprises an image sensor, an array of adjacently arranged optical channels, each optical channel comprising optics for projecting at least a partial field of view of a total field of view on an image sensor area of the image sensor.
  • the multi-aperture imaging device comprises beam deflection means for deflecting a beam path of the optical channels and focusing means for setting a focal position of the multi-aperture imaging device.
  • a control means of the device is adapted to control the focusing means and to receive image information from the image sensor.
  • the control means is configured to set a sequence of focal positions in the multi-aperture imaging device so as to detect a corresponding sequence of image information of the total field of view, and to create a depth map for the detected total field of view on the basis of the sequence of image information.
  • the advantage of this is that the depth map can be generated from the sequence of focal positions, so that even a small number of optical channels is sufficient to obtain depth information.
  • control means is configured to create the depth map from the sequence of image information, e.g. without any additional measurements or evaluations of the depth information by means of other methods.
  • the advantage of this is that the depth map can be generated from the sequence of focal positions, so that even a single shot of the total field of view from one viewing direction can provide a sufficient amount of information for creating the depth map.
  • the optical channels are configured to capture the total field of view at least stereoscopically.
  • Said control means is adapted to produce a preliminary depth map on the basis of disparity information obtained from said optical channels and to supplement said preliminary depth map on the basis of depth information based on said sequence of image information to obtain said depth map.
  • the control means is adapted, by contrast, to produce a preliminary depth map on the basis of the sequence of image information; and to supplement the preliminary depth map on the basis of depth information based on disparity information obtained from the optical channels, in order to obtain the depth map.
  • the control means is configured to select, on the basis of a quality criterion, areas of the total field of view in the preliminary depth map for which improvement is required, and to determine additional depth information to supplement the preliminary depth map for the selected areas, but not for non-selected areas.
  • the control means is configured to capture a corresponding number of groups of partial images in the sequence of focal positions. Each partial image is associated with an imaged partial field of view so that each of the groups of partial images has a common focal position.
  • the control means is configured to perform a comparison of local image sharpness information in the partial images and to create the depth map from this comparison. This is possible because the focal position set by the control means for capturing each group of partial images is known: objects that are imaged sharply are objects located at the set focal position, so each sharply imaged image area can be assigned the distance from the multi-aperture imaging device that corresponds to that focal position.
  • corresponding information can be generated for different objects and, thus, for the total field of view in order to achieve the depth map. This allows large-area or even complete mapping of the total field of view with regard to depth information.
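The sharpness comparison over a sequence of focal positions can be sketched as follows. This is an illustration only: the function names, the Laplacian focus measure, and the synthetic test data are assumptions, not details taken from the patent.

```python
import numpy as np

def sharpness_map(img):
    """Focus measure per pixel: squared 3x3 Laplacian response,
    averaged over a small neighbourhood to stabilise it."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    sq = lap ** 2
    acc = np.zeros_like(sq)
    for dy in (-1, 0, 1):          # 3x3 box average of the response
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(sq, dy, axis=0), dx, axis=1)
    return acc / 9.0

def depth_from_focus(stack, focal_distances):
    """stack: (N, H, W) images captured at N known focal positions.
    Returns an (H, W) depth map holding, per pixel, the focal distance
    at which that pixel appeared sharpest."""
    scores = np.stack([sharpness_map(img) for img in stack])  # (N, H, W)
    best = np.argmax(scores, axis=0)                          # (H, W)
    return np.asarray(focal_distances)[best]
```

A pixel that is sharpest in the image captured at, say, 2 m focus is assigned a depth of 2 m; doing this for every pixel yields the depth map described above.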
  • the device is configured to control the focusing means such that the sequence of focal positions is distributed substantially equidistantly within the image space. This can be done by an equidistant arrangement that is as exact as possible, but also by taking into account a tolerance range of up to ⁇ 25%, ⁇ 15% or ⁇ 5%, wherein the sequence of focal positions is distributed between a minimum focal position and a maximum focal position of the sequence. Due to the equidistance in the image area, uniform precision of the depth map across different distances can be achieved.
  • the device is configured to generate a sequence of total images reproducing the total field of view on the basis of the sequence of image information, each total image based on a combination of partial images of the same focal position. This joining of partial images can be done while using the depth map in order to obtain a high image quality in the joined total image.
  • the device is configured to alter, on the basis of the depth map, a total image which renders the total field of view.
  • the depth map is known, different image manipulations can be carried out, for example subsequent focusing and/or defocusing of one or more image areas.
  • a first optical channel of the array is formed to image a first partial field of view of the total field of view
  • a second optical channel of the array is formed to image a second partial field of view of the total field of view
  • a third optical channel is configured to completely image the total field of view. This enables using additional imaging functionalities, for example with regard to a zoom range and/or an increase in resolution.
  • the focusing means has at least one actuator for setting the focal position.
  • the focusing means is configured to be at least partially disposed between two planes spanned by sides of a cuboid, the sides of the cuboid being aligned in parallel with each other and with a line extension direction of the array and of part of the beam path of the optical channels between the image sensor and the beam deflector.
  • the volume of the cuboid is minimal and yet designed to include the image sensor, array, and the beam deflector. This allows the multi-aperture imaging device to be designed with a small dimension along a depth direction normal to the planes.
  • the multi-aperture imaging device has a thickness direction that is arranged to be normal to the two planes.
  • the actuator has a dimension parallel to the thickness direction. A proportion of at most 50% of the actuator's dimension extends beyond the two planes, starting from an area between them. Arranging at least 50% of the actuator's extent between the planes results in a thin configuration of the multi-aperture imaging device, which also allows a thin configuration of the device.
  • the focusing means comprises an actuator for providing a relative movement between optics of at least one of the optical channels and the image sensor. This allows easy setting of the focal position.
  • the focusing means is adapted to perform the relative movement between the optics of one of the optical channels and the image sensor while performing a movement of the beam deflector that is simultaneous to the relative movement.
  • the focusing means is arranged in such a way that it protrudes by a maximum of 50% from the area between the planes of the cuboid.
  • At least one actuator of the focusing means is a piezoelectric bending actuator. This makes it possible to preserve the sequence of focal positions with a short time interval.
  • the focusing means comprises at least one actuator adapted to provide movement.
  • the focusing means further comprises mechanical means for transmitting the movement to the array for setting the focal position.
  • the actuator is arranged on a side of the image sensor which faces away from the array, and the mechanical means is arranged in such a way that a flux of force laterally passes the image sensor.
  • the actuator is arranged on a side of the beam deflector that faces away from the array, and the mechanical means is arranged in such a way that a flux of force laterally passes the beam deflector.
  • embodiments provide for arranging all of the actuators on that side of the image sensor which faces away from the array, for arranging all of the actuators on that side of the beam deflector which faces away from the array, or for arranging a subset of the actuators on that side of the image sensor which faces away from the array, and for arranging a subset, which is disjoint therefrom, on that side of the beam deflector which faces away from the array.
  • a relative position of the beam deflector is switchable between a first position and a second position so that in the first position, the beam path is deflected towards a first total field of view, and in the second position, the beam path is deflected towards a second total field of view.
  • the control means is adapted to direct the beam deflector to the first position to obtain imaging information of the first total field of view from the image sensor, wherein the control means is further adapted to direct the beam deflector to the second position to obtain imaging information of the second total field of view from the image sensor.
  • the control means is further adapted to insert a portion of said first imaging information into said second imaging information to obtain accumulated image information which in parts represents said first total field of view and in parts represents said second total field of view.
  • a further finding of the present invention is that by combining image information of different total fields of view, so that the accumulated image information reproduces the first total field of view in parts and the second total field of view in parts, easy handling of the device can be obtained since, for example, elaborate positioning of the user and/or of the device can be dispensed with.
  • a device comprises a multi-aperture imaging device having an image sensor, an array of adjacently arranged optical channels, and a beam deflector.
  • Each optical channel of the array comprises optics for projecting at least a partial field of view of a total field of view on an image sensor area of the image sensor.
  • the beam deflector is configured to deflect a beam path of the optical channels, wherein a relative position of the beam deflector is switchable between a first position and a second position, so that in the first position, the beam path is deflected toward a first total field of view, and in the second position, the beam path is deflected toward a second total field of view.
  • the device further comprises control means adapted to direct the beam deflector to the first position.
  • imaging information relating to the first total field of view projected on the image sensor may thus be obtained by the control means.
  • the control means is configured to direct the beam deflector to the second position to obtain imaging information of the second total field of view from the image sensor.
  • An order of obtaining the imaging information of the first total field of view and of obtaining the imaging information of the second total field of view may be arbitrary.
  • Said control means is configured to insert a portion of said first imaging information into said second imaging information to obtain accumulated image information which in parts represents said first total field of view and in parts represents said second total field of view. This allows combining image contents of different total fields of view, so that time-consuming positioning of the device and/or image objects can be dispensed with.
  • the first total field of view is arranged along a direction corresponding to a user direction of the device or corresponding to an oppositely arranged world direction of the device. This allows combining image content in the first total field of view with a total field of view different therefrom.
  • the second total field of view is arranged along a direction corresponding to the other one of the user direction and the world direction, so that the two total fields of view together capture the world direction and the user direction.
  • contents from the world direction and contents from the user direction can thus be combined with each other.
  • the control means is adapted to identify and segment, i.e. to separate, a person in the first image information, or at least to copy the image information relating to the person, and to insert the image of the person into the second image information so as to obtain the accumulated image information. This allows inserting the image of the person into an image environment that is actually arranged along a different direction of the device.
  • the device is configured to automatically identify the person and to automatically insert the image of the person into the second image information. This makes it possible to obtain a picture of oneself (selfie) against an actually different background. This avoids complex positioning of the device, the person and/or the background. It also enables compensating for the fact that the background is mirror inverted.
  • the device is adapted to identify and/or segment the part, such as a person or at least a part thereof, while using a depth map generated by the device from the first imaging information.
  • the depth map can, for example, be created while using the first aspect or by other means. This enables easy implementation of the embodiment.
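A minimal sketch of depth-based segmentation and insertion is given below; the function names and the fixed distance threshold are illustrative assumptions, not details from the patent.

```python
import numpy as np

def segment_foreground(depth_map, max_distance):
    """Boolean mask of pixels closer than max_distance,
    e.g. a user standing in front of the device."""
    return depth_map < max_distance

def insert_segment(world_img, user_img, mask):
    """Copy the masked pixels of the user-side image
    into the world-side image."""
    out = world_img.copy()
    out[mask] = user_img[mask]
    return out
```

With a depth map for the first image information available, the person is simply the set of pixels below a distance threshold, and the composite is a per-pixel copy under that mask.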
  • the device is adapted to create for the second image information a depth map having a plurality of depth planes and to insert the first image information in a predetermined depth plane of the second image information so as to obtain the accumulated image information.
  • This enables integrating the first image information into the second image information in a manner that is correct, in terms of depth, with regard to the predefined or predetermined depth plane.
  • the predefined depth plane is, within a tolerance range of 10%, equal to the distance of the first total field of view from the device. This enables the accumulated image information to be obtained in such a way that the second total field of view is represented as if the first image information, or the portion thereof, had been arranged along the other direction of the device.
  • the predefined depth plane is based on a user input associated with the placement of the first image information. This makes it possible that the depth plane that is to be considered can be varied between different pictures taken and/or can be adapted to the user's selection, by means of a user's input.
  • the device is configured to scale the first image information so as to obtain scaled first image information, and to insert the scaled first image information into the second image information so as to obtain the accumulated image information.
  • This makes it possible to insert the first image information into the second image information in such a way that a predefined perception is obtained in the accumulated image information, in particular with regard to the size of the first image information. This is advantageous especially in combination with the adjustable depth plane into which the first image information is inserted: in addition to an insertion that is correct in terms of depth, a representation that is correct in terms of size becomes possible.
  • the device is configured to determine a distance of an object imaged in the first image information with respect to the device and to scale the first image information on the basis of a comparison of the determined distance with the predetermined depth plane in the second image information. This makes it possible to automatically take into account, by scaling, i.e. by adjusting the size, a distance of the first image information which is changed by the depth plane when inserting it into the second image information.
  • the device is configured to detect the first total field of view and the second total field of view within a time interval of at least 0.1 ms to at most 30 ms.
  • the lower limit is optional. Such fast capturing of both total fields of view makes it possible to reduce or even avoid changes in the total fields of view caused by time.
  • the device is configured to receive the accumulated image information as a video data stream.
  • the device may obtain a plurality of accumulated image information data for a plurality of sequential images of the first total field of view and/or the second total field of view and combine them into an image sequence as a video data stream.
  • embodiments provide for the provision of the accumulated image information as a still image.
  • the first image information comprises the image of a user
  • the second image information comprises a world view of the device.
  • the control means is configured to segment an image of the user from the first image information, possibly on the basis of depth map information generated with the device, and to insert the image of the user into the world view. This makes it easy to obtain a selfie by using the device.
  • the device is configured to insert the image of the user into the world view in a manner that is correct in terms of depth. This enables the impression that the user is standing in front of the world view, without the need for time-consuming positioning.
  • the device is configured to capture, with different focal positions, a sequence of images of the first total field of view and/or the second total field of view, and to create from the sequence of images a depth map for the first total field of view and/or the second total field of view.
  • This enables, in particular, combining the second image information with the first image information within a predefined depth plane and/or imaging that is correct in terms of depth; for this purpose, the advantages of the first aspect of the present invention can be exploited.
  • the first aspect can be combined with implementations of the second aspect and/or the second aspect can be combined with implementations of the first aspect.
  • the two aspects result in advantageous designs, which will be discussed later.
  • FIG. 1 a shows a schematic perspective view of a device according to the first aspect
  • FIG. 1 b shows a schematic perspective view of a device according to an embodiment of the second aspect
  • FIG. 1 c shows a schematic perspective view of a device according to an embodiment, combining the first aspect and the second aspect
  • FIG. 2 a shows a schematic view of different focal positions according to an embodiment, in which a device can be controlled according to the first aspect
  • FIG. 2 b shows a schematic representation of the utilization of a depth map generated from different focal positions according to an embodiment as well as its generation;
  • FIG. 3 a shows a schematic perspective view of a device according to an embodiment in which an image sensor, an array of optical channels and a beam deflector span a cuboid in space;
  • FIG. 3 b shows a schematic lateral sectional view of the device of FIG. 3 a according to an embodiment, wherein the multi-aperture imaging device comprises a plurality of actuators;
  • FIG. 3 c shows a schematic lateral sectional view of the multi-aperture imaging device of FIG. 3 a and/or 3 b , in which different total fields of view can be detected on the basis of different positions of the beam deflector;
  • FIG. 4 a shows a schematic top view of a device according to an embodiment in which the actuator is formed as a piezoelectric bending actuator
  • FIG. 4 b shows a schematic lateral sectional view of the device of FIG. 4 a to illustrate the arrangement of the actuator between the planes of the cuboid that are described in relation to FIG. 3 a;
  • FIG. 5 a -5 d show schematic representations of arrangements of partial fields of view in a total field of view, according to embodiments
  • FIG. 6 shows a schematic perspective view of a device according to an embodiment of the second aspect
  • FIG. 7 a shows a schematic diagram illustrating processing of the image information that can be obtained by imaging the total fields of view according to an embodiment
  • FIG. 7 b shows a schematic representation of scaling of a portion of image information in the accumulated image information according to an embodiment
  • FIG. 8 shows parts of a multi-aperture imaging device according to an embodiment, which can be used in inventive devices of the first and/or second aspect(s).
  • FIG. 1 a shows a schematic perspective view of a device 10 1 according to the first aspect.
  • the device 10 1 comprises a multi-aperture imaging device comprising an image sensor 12 and an array 14 of adjacently arranged optical channels 16 a - d .
  • the multi-aperture imaging device further comprises a beam deflector 18 for deflecting a beam path of optical channels 16 a - d . This allows the beam paths of the optical channels 16 a - d to be deflected from a lateral course, running from the image sensor 12 through optics 22 a - d of the array 14 to the beam deflector 18 , toward a non-lateral course.
  • Different optical channels 16 a - d are deflected in such a way that each optical channel 16 a - d projects a partial field of view 24 a - d of a total field of view 26 on an image sensor area 28 a - d of the image sensor 12 .
  • the partial fields of view 24 a - d can be distributed in space in a one-dimensional or two-dimensional manner, or, on the basis of different focal lengths of the optics 22 a - d , in a three-dimensional manner.
  • the total field of view 26 will be described below in such a way that the partial fields of view 24 a - d have a two-dimensional distribution, wherein adjacent partial fields of view 24 a - d can overlap each other. A total area of the partial fields of view results in the total field of view 26 .
  • the multi-aperture imaging device comprises focusing means 32 for setting a focal position of the multi-aperture imaging device. This can be done by varying a relative location or position between the image sensor 12 and the array 14 , wherein the focusing means 32 can be adapted to vary a position of the image sensor 12 and/or a position of the array 14 in order to obtain a variable relative position between the image sensor 12 and the array 14 , so as to set the focal position of the multi-aperture imaging device.
  • Setting the relative position can be channel-specific or apply to groups of optical channels or to all channels. For example, single optics 22 a - d may be moved, or a group of optics 22 a - d or all optics 22 a - d may be moved together. The same applies to the image sensor 12 .
  • the device comprises control means 34 adapted to control the focusing means 32 .
  • the control means 34 is adapted to receive image information 36 from the image sensor 12 . This could be, for example, the partial fields of view 24 a - d projected on the image sensor areas 28 a - d , or information or data corresponding to the projections. This does not preclude intermediate processing of the image information 36 , for example with regard to filtering, smoothing or the like.
  • the control means 34 is configured to set a sequence of focal positions in the multi-aperture imaging device so as to detect a corresponding sequence of image information of the total field of view 26 .
  • the control means 34 is adapted to create a depth map 38 for the total field of view 26 from the sequence of image information.
  • the depth map 38 can be provided via a corresponding signal.
  • the control means 34 is capable of capturing different images of the same field of view 26 on the basis of the different focal positions obtained by different relative positions between the image sensor 12 and the array 14 , and/or is capable of capturing differently focused partial images thereof in accordance with segmentation by the partial fields of view 24 a - d.
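The focus-sweep capture described above can be sketched as follows. The callables `set_focal_position` and `capture_partial_images` are hypothetical stand-ins for the focusing means 32 and the read-out of the image sensor 12; they are not part of the disclosure:

```python
def capture_focus_stack(set_focal_position, capture_partial_images, focal_positions):
    """Capture one group of partial images per focal position.

    set_focal_position(fp)   -- stand-in for moving array 14 / image sensor 12
    capture_partial_images() -- stand-in for reading one frame per partial field of view
    """
    stack = []
    for fp in focal_positions:
        set_focal_position(fp)                 # set one focal position of the sequence
        stack.append(capture_partial_images()) # capture the corresponding group of partial images
    return stack
```

The returned stack then serves both as the source of the depth map 38 and of the merged total images.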
  • Depth maps can be used for different purposes, for example for image processing, but also for image merging.
  • the control means 34 may be adapted to connect individual images (single frames) obtained from the image sensor areas 28 a to 28 d while using the depth map 38 to obtain image information 42 representing the image of the total field of view 26 , that is, a total image.
  • Using a depth map is particularly advantageous for such methods of merging partial images, also known as stitching.
  • the control means may be configured to assemble the partial images of a group of partial images to form a total image.
  • the depth map used for stitching can be generated from the very partial images to be stitched.
  • a sequence of total images representing the total field of view can be generated.
  • Each total image can be based on a combination of partial images with the same focal position.
  • at least two, several or all total images from the sequence can be combined to obtain a total image with extended information, e.g. to create a Bokeh effect.
  • the image can also be represented in such a way that it is artificially sharp throughout, i.e. a larger number of partial areas are in focus than in any of the single frames, for example the entire image.
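One common way to obtain such an artificially sharp image from a focus stack is to pick, per pixel, the frame in which that pixel is sharpest. The sketch below uses a simple gradient-magnitude proxy for local sharpness; it is an illustration of the general technique, not the patented method itself:

```python
import numpy as np

def all_in_focus(stack):
    """Combine a focus stack (list of grayscale frames of equal shape) into
    an artificially sharp image by selecting, per pixel, the frame with the
    highest local gradient magnitude (a simple sharpness proxy)."""
    stack = np.asarray(stack, dtype=float)       # shape (n_frames, H, W)
    gy, gx = np.gradient(stack, axis=(1, 2))     # per-frame spatial gradients
    sharpness = gx**2 + gy**2
    best = np.argmax(sharpness, axis=0)          # index of sharpest frame per pixel
    merged = np.take_along_axis(stack, best[None], axis=0)[0]
    return merged, best
```

The `best` index map is itself a coarse depth cue, since each frame corresponds to a known focal position.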
  • the device 10 1 is configured to create the image of the total field of view as a mono image and to create the depth map 38 from the sequence of mono images. Although multiple scanning of the total field of view 26 is also possible, the device 10 1 can generate the depth map from one mere mono image, which can spare the need for additional pictures taken from different viewing directions, e.g. multiple pictures taken with the same device or a redundant arrangement of additional optical channels.
  • FIG. 1 b shows a schematic perspective view of a device 10 2 according to an embodiment of the second aspect.
  • the device 10 2 has, instead of the control means 34 , a control means 44 configured to direct the beam deflector to different positions 18 1 and 18 2 .
  • the beam deflector 18 has different relative positions, so that in the different positions or locations, image information of different total fields of view 26 1 and 26 2 is obtained since the beam paths of the optical channels 16 a - d are deflected in different directions influenced by the different positions 18 1 and 18 2 .
  • the device 10 2 comprises control means 44 configured to direct the beam deflector to the first position 18 1 to obtain imaging information of the first total field of view 26 1 from the image sensor 12 .
  • the control means 44 is adapted to direct the beam deflector 18 to the second position 18 2 to obtain image information of the second total field of view 26 2 from the image sensor 12 .
  • the control means 44 is adapted to insert a portion of the first image information 46 1 into the second image information 46 2 to obtain common or accumulated image information 48 .
  • the accumulated image information 48 may reproduce parts of the first total field of view 26 1 and parts of the second total field of view 26 2 , which involves steps of image manipulation or processing. This means that the accumulated image information 48 is based, in some places, on an image of the total field of view 26 1 and in other places on an image of the total field of view 26 2 .
  • the control means 44 may be adapted to provide a signal 52 containing or reproducing the accumulated image information 48 .
  • the image information 46 1 and/or 46 2 can also be output by signal 52 .
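The insertion of a portion of the first image information 46 1 into the second image information 46 2 can be illustrated with a simple mask-based composite; the boolean mask and the function name are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def accumulate(first, second, mask):
    """Form accumulated image information by taking pixels from `first`
    (image of total field of view 26_1) where `mask` is True and from
    `second` (image of total field of view 26_2) elsewhere."""
    return np.where(mask, first, second)
```

In practice the mask would be derived from, e.g., a segmented foreground in the first view; here it is supplied by the caller.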
  • FIG. 1 c shows a schematic view of a device 10 3 according to an embodiment which comprises, instead of the control means 34 of FIG. 1 a and instead of the control means 44 of FIG. 1 b , a control means 54 which combines the functionality of the control means 34 and of the control means 44 and is adapted to produce the depth map 38 on the basis of a variable focal position of the device 10 3 , and to provide the accumulated image information 48 of FIG. 1 b.
  • FIG. 2 a shows a schematic view of different focal positions 56 1 to 56 5 which can be set in a device according to the first aspect, for example the device 10 1 , and the device 10 2 .
  • the different focal positions 56 1 to 56 5 can be understood as positions or distances 58 1 to 58 5 in which objects in the captured field of view are projected on the image sensor 12 in a focused manner.
  • a number of focal positions 56 can be arbitrary and be greater than 1.
  • Distances 62 1 to 62 4 between adjacent focal positions can refer to distances in the image space, where an implementation or transfer of the explanation to distances in the object space is also possible.
  • the advantage of viewing the image space is that the properties of the multi-aperture imaging device are taken into account, especially with regard to a minimum or maximum object distance.
  • Control means 34 and/or 54 may be adapted to control the multi-aperture imaging device to have two or more focal positions 56 1 to 56 5 . In the respective focal position, single frames 64 1 and 64 2 can be captured in accordance with the number of partial fields of view 24 captured.
  • the control means can determine, by analyzing the image information in terms of which of the image parts are imaged sharply, the distance at which these sharply imaged objects are arranged with respect to the device. This information regarding the distance can be used for the depth map 38 .
  • the control means can be configured to capture a corresponding number of groups of partial images in the sequence of focal positions 56 1 to 56 5 , each partial image being associated with a partial field of view imaged. The group of partial images can therefore correspond to those partial images that represent the total field of view in the set focal position.
  • the control means may be configured to produce the depth map from a comparison of local image sharpness information in the partial images.
  • the local sharpness information can indicate in which areas of the image objects are in focus, or are sharply imaged within a previously defined tolerance range.
  • a determination of the edge blurring function and a detection of the distances over which the edges extend can be used to determine whether a corresponding image area, a corresponding object or a part thereof is sharply imaged or is blurred on the image sensor.
  • the edge-blurring or line-blurring function can be used as a quality criterion for the sharpness of image content.
  • any known optical sharpness metric, such as the Modulation Transfer Function (MTF), can be used.
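A widely used stand-in for such sharpness criteria is the variance of the Laplacian, evaluated per image tile. The sketch below assumes a grayscale image as a NumPy array and is an illustrative focus measure, not the specific edge-blur or MTF evaluation of the disclosure:

```python
import numpy as np

def local_sharpness(img, tile=16):
    """Per-tile variance of the discrete Laplacian: higher values indicate
    sharper (more in-focus) image content in that tile."""
    img = np.asarray(img, dtype=float)
    # 4-neighbour discrete Laplacian (wrap-around boundaries, fine for a sketch)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    h, w = img.shape
    h, w = h - h % tile, w - w % tile            # crop to whole tiles
    tiles = lap[:h, :w].reshape(h // tile, tile, w // tile, tile)
    return tiles.var(axis=(1, 3))                # one sharpness value per tile
```

A tile is then considered "sharply imaged" if its value exceeds a threshold chosen from the tolerance range mentioned above.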
  • the sharpness of the same objects in adjacent images of the stack, the association of the focus actuator position with the object distance via a calibrated lookup table and/or the direction of the through-focus scan can be used to obtain the depth information from adjacent images of the stack in a partly recursive manner and to avoid ambiguities. Since the set focal position is uniquely correlated with an object distance that is imaged in focus, it is thus possible to infer, from the knowledge that an object is imaged in focus, at least within the predetermined tolerance range, the distance of the corresponding image area, object or part thereof, which can serve as a basis for the depth map 38 .
  • control means may be configured to assemble the partial images of a group of partial images into a total image. This means that the depth map used for stitching can be generated from the partial images to be stitched themselves.
  • the device may be configured to control the focusing means 32 so that the sequence of focal positions 56 1 to 56 5 is distributed equidistantly in the image space between a minimum focal position and a maximum focal position within a tolerance range of ±25%, ±15% or ±5%, advantageously as close to 0% as possible.
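Equidistant spacing in the image space, and how it maps to object distances, can be illustrated with the thin-lens equation 1/f = 1/o + 1/i. This is an assumed simplified model of the optics 22, used only to make the image-space/object-space distinction concrete:

```python
def focal_positions_image_space(i_min, i_max, n):
    """n image-space focal positions, equidistant between i_min and i_max."""
    step = (i_max - i_min) / (n - 1)
    return [i_min + k * step for k in range(n)]

def object_distance(f, i):
    """Thin-lens mapping of image distance i (> f) to object distance:
    1/f = 1/o + 1/i  =>  o = 1 / (1/f - 1/i)."""
    return 1.0 / (1.0 / f - 1.0 / i)
```

Note that equidistant steps in image space yield strongly non-equidistant object distances, densely sampling the near range and sparsely sampling toward infinity, which matches the advantage of the image-space view stated above.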
  • FIG. 2 b shows a schematic representation of the use of the depth map 38 and its generation.
  • the partial images 64 1 and 64 2 can each be used to obtain partial information 38 1 to 38 5 of the depth map 38 from the respective focal position 56 1 to 56 5 since the objects sharply represented in the single frames 64 1 and 64 2 can be precisely determined with respect to their distance. Between the focal positions 56 1 and 56 5 , however, interpolation methods can also be used, so that sufficiently precise information can still be obtained for the depth map 38 even with slightly blurred objects.
  • the distance information contained in the partial information 38 1 to 38 5 may be combined by the control means to form the depth map 38 .
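Per-pixel depth estimation from such a stack, including the interpolation between focal positions mentioned above, can be sketched as follows. It assumes one sharpness map per focal position and equidistant focal distances; a parabolic fit through the three sharpness values around the peak supplies the interpolated depth. This is a generic depth-from-focus sketch, not the disclosed implementation:

```python
import numpy as np

def depth_from_focus(sharpness_stack, focal_distances):
    """Estimate per-pixel depth as the focal distance where sharpness peaks,
    refined by parabolic interpolation between neighbouring focal positions."""
    s = np.asarray(sharpness_stack, dtype=float)   # shape (n, H, W)
    d = np.asarray(focal_distances, dtype=float)   # equidistant, length n >= 3
    n = s.shape[0]
    k = np.clip(np.argmax(s, axis=0), 1, n - 2)    # peak index, neighbours in range
    sm = np.take_along_axis(s, (k - 1)[None], 0)[0]
    s0 = np.take_along_axis(s, k[None], 0)[0]
    sp = np.take_along_axis(s, (k + 1)[None], 0)[0]
    denom = sm - 2.0 * s0 + sp
    # vertex of the parabola through (−1, sm), (0, s0), (+1, sp)
    offset = np.where(np.abs(denom) > 1e-12, 0.5 * (sm - sp) / denom, 0.0)
    step = d[1] - d[0]
    return d[k] + offset * step
```

The resulting depth values correspond to the partial information 38 1 to 38 5 combined into the depth map 38.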
  • the depth map 38 can be used to combine the single frames 64 1 and 64 2 from the different focal positions 56 1 to 56 5 to form a corresponding number of total images 42 1 to 42 5 .
  • FIG. 3 a shows a schematic perspective view of a device 30 according to an embodiment.
  • the image sensor 12 , the array 14 and the beam deflector 18 can span a cuboid in space.
  • the cuboid can also be understood as a virtual cuboid and can, for example, have a minimum volume and, in particular, a minimum vertical extension along a direction parallel to a thickness direction y that is perpendicular to the line extension direction 66 .
  • the line extension direction 66 for example, runs along a z direction and perpendicularly to an x direction, which is arranged parallel to a course of the beam paths between the image sensor 12 and the array 14 .
  • the directions x, y and z may span a Cartesian coordinate system.
  • the minimum volume of the virtual cuboid, or the minimum vertical extension thereof, can be such that the virtual cuboid nevertheless comprises the image sensor 12 , the array 14 and the beam deflector 18 .
  • the minimum volume can also be understood as describing a cuboid which is spanned by the arrangement and/or operational movement of the image sensor 12 , the array 14 and/or the beam deflector 18 .
  • the line extension direction 66 can be arranged in such a way that the optical channels 16 a and 16 b are arranged next to each other, possibly parallel to each other, along the line extension direction 66 .
  • the line extension direction 66 can be fixedly arranged in space.
  • the virtual cuboid may have two sides arranged opposite each other in parallel, parallel to the line extension direction 66 of array 14 , and parallel to a portion of the beam path of the optical channels 16 a and 16 b between the image sensor 12 and the beam deflector 18 . In a simplified manner, but without a restrictive effect, these can be, for example, a top side and a bottom side of the virtual cuboid.
  • the two sides can span a first plane 68 a and a second plane 68 b . This means that both sides of the cuboid can be part of plane 68 a or 68 b , respectively.
  • Further components of the multi-aperture imaging device may be arranged entirely, but at least partially, within the area between planes 68 a and 68 b , so that an installation space requirement of the multi-aperture imaging device along the y direction, which is parallel to a surface normal of planes 68 a and/or 68 b , may be small, which is advantageous.
  • a volume of the multi-aperture imaging device may have a small or minimal installation space between planes 68 a and 68 b .
  • the installation space of the multi-aperture imaging device can be large or arbitrarily large.
  • the volume of the virtual cuboid is influenced, for example, by an arrangement of the image sensor 12 , the array 14 and the beam deflector 18 , the arrangement of these components being such that, according to the embodiments described herein, the installation space of these components along the direction perpendicular to the planes and, thus, the distance of the planes 68 a and 68 b from each other becomes small or minimal. Compared to other arrangements of the components, the volume and/or distance of other sides of the virtual cuboid can be increased.
  • the device 30 comprises an actuator 72 for generating a relative movement between the image sensor 12 , the single-line array 14 and the beam deflector 18 .
  • This may include, for example, a positioning movement of the beam deflector 18 to switch between the positions described in relation to FIG. 1 b .
  • the actuator 72 can be configured to execute the relative movement described in connection with FIG. 1 a to change the relative position between the image sensor 12 and the array 14 .
  • the actuator 72 is arranged at least partially between the planes 68 a and 68 b .
  • the actuator 72 may be adapted to move at least one of the image sensor 12 , the single-line array 14 and the beam deflector 18 , which may include rotational and/or translational movements along one or more directions.
  • Examples of this are a channel-specific change of a relative position between image sensor areas 28 of a respective optical channel 16 , of the optics 22 of the respective optical channel 16 and of the beam deflector 18 and/or of the corresponding segment or facet, respectively, and/or a channel-specific change of an optical property of the segment/facet relating to the deflection of the beam path of the respective optical channel.
  • the actuator 72 may at least partially implement an autofocus and/or optical image stabilization.
  • the actuator 72 may be part of the focusing means 32 and may be adapted to provide a relative movement between at least one optics of at least one of optical channels 16 a and 16 b and the image sensor 12 .
  • the relative movement between the optics 22 a and/or 22 b and the image sensor 12 can be controlled by the focusing means 32 in such a way that the beam deflector 18 performs a simultaneous movement.
  • the distance between the beam deflector 18 and the image sensor 12 can be reduced accordingly, so that a relative distance between the array 14 and the optics 22 a and/or 22 b , respectively, and the beam deflector 18 is substantially constant. This enables the beam deflector 18 to be equipped with small beam deflecting surfaces since a beam cone growing due to a growing distance between the array 14 and the beam deflector 18 may be compensated for by maintaining the distance from the beam deflector 18 .
  • the focusing means 32 and/or the actuator 72 are arranged in such a way that they protrude by not more than 50% from the area between the planes 68 a and 68 b .
  • the actuator 72 may have a dimension or extension 74 parallel to the thickness direction y. A proportion of a maximum of 50%, a maximum of 30% or a maximum of 10% of the dimension 74 can project beyond the plane 68 a and/or 68 b , starting from an area between planes 68 a and 68 b , and, thus, project out of the virtual cuboid. This means that at the most, the actuator 72 projects only insignificantly from the plane 68 a and/or 68 b . According to embodiments, the actuator 72 does not project beyond the planes 68 a and 68 b . The advantage of this is that an extension of the multi-aperture imaging device along the thickness direction y is not increased by the actuator 72 .
  • the actuator 72 can alternatively or additionally also generate a translational movement along one or more spatial directions.
  • the actuator 72 can comprise one or more single actuators, possibly to generate different single movements in an individually controllable manner.
  • the actuator 72 or at least a single actuator thereof may, for example, be implemented as or comprise a piezo actuator, in particular a piezoelectric bending actuator, described in more detail in connection with FIG. 4 .
  • a piezo bender allows a fast and reproducible change of position. This feature is advantageous for the capturing of focus stacks in the sense of several or many images within a short time.
  • Piezo benders as actuators designed along one dimension or direction can be advantageously employed, in particular, in the described architecture as they have a form factor that is advantageous for this, i.e. an extension especially in one direction.
  • the array 14 can include a substrate 78 to or at which optics 22 a and 22 b are attached or arranged.
  • the substrate 78 may be at least partially transparent to the optical paths of the optical channels 16 a and 16 b by means of recesses or materials suitably selected; this does not preclude manipulation of the optical channels being performed, for example by arranging filter structures or the like.
  • FIG. 3 b shows a schematic lateral sectional view of the device 30 according to an embodiment.
  • the multi-aperture imaging device of the device 30 may comprise, e.g., a plurality of actuators, e.g. more than one, more than two, or a different number >0.
  • actuators 72 1 to 72 5 may be arranged which can be used for different purposes, for example for adjusting the focal position and/or changing the position or location of the beam deflector 18 for setting the viewing direction of the multi-aperture imaging device and/or for providing optical image stabilization by rotational movement of the beam deflector 18 and/or translational movement of the array 14 .
  • the actuators 72 1 to 72 5 can be arranged in such a way that they are at least partially arranged between the two planes 68 a and 68 b , which are spanned by the sides 69 a and 69 b of the virtual cuboid 69 .
  • the sides 69 a and 69 b of the cuboid 69 can be aligned parallel to each other and parallel to the line extension direction of the array and part of the beam path of the optical channels between the image sensor 12 and the beam deflector 18 .
  • the volume of the cuboid 69 is minimal but still includes the image sensor 12 , the array 14 and the beam deflector 18 as well as their operational movements.
  • Optical channels of array 14 have optics 22 , which can be the same or different for each optical channel.
  • a volume of the multi-aperture imaging device may have a small or minimal installation space between the planes 68 a and 68 b .
  • an installation space of the multi-aperture imaging device can be large or arbitrarily large.
  • the volume of the virtual cuboid is influenced, for example, by an arrangement of the image sensor 12 , the single-line array 14 and the beam deflector, the arrangement of these components being such that, according to the embodiments described herein, the installation space of these components along the direction perpendicular to the planes and, thus, the distance of the planes 68 a and 68 b from each other becomes small or minimal.
  • the volume and/or distance of other sides of the virtual cuboid can be increased.
  • the virtual cuboid 69 is represented by dotted lines.
  • the planes 68 a and 68 b can comprise or be spanned by two sides of the virtual cuboid 69 .
  • a thickness direction y of the multi-aperture imaging device may be normal to planes 68 a and/or 68 b and/or parallel to the y direction.
  • the image sensor 12 , the array 14 and the beam deflector 18 may be arranged such that a vertical distance between the planes 68 a and 68 b along the thickness direction y, which—in a simplified manner, but without a restrictive effect—can be referred to as the height of the cuboid, is minimal, wherein a minimization of the volume, i.e. of the other dimensions of the cuboid, can be dispensed with.
  • An extension of the cuboid 69 along the direction y may be minimal and be substantially predetermined by the extension of the optical components of the imaging channels, i.e. array 14 , image sensor 12 and beam deflector 18 , along the direction y.
  • the actuators 72 1 to 72 5 can each have a dimension or extension that is parallel to the direction y. A proportion not exceeding 50%, 30% or 10% of the dimension of each actuator 72 1 to 72 5 may protrude, starting from an area between both planes 68 a and 68 b , beyond plane 68 a and/or 68 b or protrude from said area. This means that the actuators 72 1 to 72 5 project, at the most, insignificantly beyond plane 68 a and/or 68 b . According to embodiments, the actuators do not protrude beyond planes 68 a and 68 b . The advantage of this is that an extension of the multi-aperture imaging device along the thickness direction, or direction y, is not increased by the actuators.
  • an image stabilizer may include at least one of the actuators 72 1 to 72 5 . Said at least one actuator may be located within a plane 71 or between planes 68 a and 68 b.
  • actuators 72 1 to 72 5 can be located in front of, behind or next to image sensor 12 , array 14 and/or beam deflector 18 .
  • the actuators 36 and 42 may protrude from the area between planes 68 a and 68 b by a proportion of at most 50%, 30% or 10% of their extension.
  • FIG. 3 c shows a schematic lateral sectional view of the multi-aperture imaging device wherein different total fields of view 26 1 and 26 2 are detectable on the basis of different positions of the beam deflector 18 since the multi-aperture imaging device then has different viewing directions.
  • the multi-aperture imaging device may be adapted to vary a tilt of the beam deflector by an angle ⁇ so that alternately, different main sides of the beam deflector 18 are arranged facing the array 14 .
  • the multi-aperture imaging device may include an actuator adapted to tilt the beam deflector 18 about the rotation axis 76 .
  • the actuator may be adapted to move the beam deflector 18 to a first position in which the beam deflector 18 deflects the beam path 26 of the optical channels of the array 14 to the positive y direction.
  • the beam deflector 18 may have, in the first position, e.g. an angle ⁇ of >0° and ⁇ 90°, of at least 10° and at most 80° or of at least 30° and at most 50°, e.g. 45°.
  • the actuator may be adapted to deflect the beam deflector, in a second position, about the axis of rotation 76 so that the beam deflector 18 deflects the beam path of the optical channels of the array 14 towards the negative y direction as represented by the viewing direction toward the total field of view 26 2 and by the dashed representation of the beam deflector 18 .
  • the beam deflector 18 can be configured to be reflective on both sides, so that in the first position, the viewing direction points towards the total field of view 26 1 .
  • FIG. 4 a shows a schematic top view of a device 40 according to an embodiment in which the actuator 72 is formed as a piezoelectric bending actuator.
  • Actuator 72 is configured to perform a bend in the x/z plane as shown by the dashed lines.
  • the actuator 72 is connected to the array 14 via a mechanical deflector 82 , so that a lateral displacement of the array 14 along the x direction can occur when the actuator 72 is bent, so that the focal position can be changed.
  • actuator 72 can be connected to substrate 78 .
  • the actuator 72 can also be mounted on a housing that houses at least part of the optics 22 a to 22 d to move the housing. Other variants are also possible.
  • the device 40 may include further actuators 84 1 and 84 2 configured to generate movement at the array 14 and/or the beam deflector 18 , for example to place the beam deflector 18 in different positions or locations and/or for optical image stabilization by translational displacement of the array 14 along the z direction and/or by generating rotational movement of the beam deflector 18 about the axis of rotation 76 .
  • the beam deflector 18 may have several facets 86 a to 86 d which are spaced apart from one another but movable together, each optical channel being associated with one facet 86 a to 86 d .
  • the facets 86 a to 86 d can also be arranged to be directly adjacent, i.e. with little or no distance between them.
  • a flat mirror can also be arranged.
  • a distance 88 between at least one of the optics 22 a - d and the image sensor 12 can be changed from a first value 88 1 to a second value 88 2 , e.g. increased or decreased.
  • FIG. 4 b shows a schematic lateral sectional view of device 40 to illustrate the arrangement of actuator 72 between planes 68 a and 68 b described in connection with FIG. 3 a .
  • the actuator 72 , for example, is arranged completely between planes 68 a and 68 b , as is the mechanical deflection device 82 , which may use several force-transmitting elements, such as connecting webs, wires, ropes or the like, as well as mechanical bearings or deflection elements.
  • the mechanical deflector or mechanical means for transmitting the movement to the array 14 can be arranged on one side of the image sensor 12 which faces away from the array 14 , i.e. starting from the array 14 behind the image sensor 12 .
  • the mechanical means 82 can be arranged in such a way that a flux of force laterally passes the image sensor 12 .
  • the actuator 72 or another actuator may be located on a side of the beam deflector 18 which faces away from the array 14 , i.e. starting from the array 14 behind the beam deflector 18 .
  • the mechanical means 82 may be arranged such that a flux of force laterally passes the beam deflector 18 .
  • Although only one actuator 72 is shown, a larger number of actuators can also be arranged, and/or more than one side of the actuator 72 can be connected to a mechanical deflector 82 .
  • a centrally mounted or supported actuator 72 may be connected on two sides to a mechanical deflector 82 and may act, for example, on both sides of the array 14 to enable homogeneous movement.
  • FIG. 5 a shows a schematic representation of an array of partial fields of view 24 a and 24 b in a total field of view 26 which is detectable, for example, by a multi-aperture imaging device described herein, such as the multi-aperture imaging device 10 1 , 10 2 , 10 3 , 30 and/or 40 , and may correspond, for example, to the total field of view 26 1 and/or 26 2 .
  • the total field of view 26 can be projected on the image sensor area 28 b by using the optical channel 16 b .
  • optical channel 16 a can be configured to capture the partial field of view 24 a and project it on the image sensor area 28 a .
  • optical channel 16 c may be configured to detect the partial field of view 24 b and to project it on the image sensor area 28 c .
  • simultaneous capturing of the total field of view and of the partial fields of view, which, in turn, together represent the total field of view 26 can take place.
  • partial fields of view 24 a and 24 b may have the same extension or comparable extensions along at least one image direction B 1 or B 2 , such as image direction B 2 .
  • the extension of the partial fields of view 24 a and 24 b can be identical to the extension of the total field of view 26 along the image direction B 2 .
  • This means that the partial fields of view 24 a and 24 b may completely capture, or pick up, the total field of view 26 along the image direction B 2 and may only partially capture, or pick up, the total field of view along another image direction B 1 arranged perpendicular to the former and may be arranged in a mutually offset manner so that complete capture of the total field of view 26 combinatorially results also along the second direction.
  • the partial fields of view 24 a and 24 b may be disjoint from each other or at most incompletely overlap each other in an overlap area 25 , which possibly extends completely along the image direction B 2 in the total field of view 26 .
  • a group of optical channels comprising optical channels 16 a and 16 c may be configured to fully image the total field of view 26 when taken together, for example by a total picture taken in combination with partial pictures taken which, when taken together, image the total field of view.
  • the image direction B 1 for example, can be a horizontal line of an image to be provided.
  • the image directions B 1 and B 2 represent two different image directions that can be positioned anywhere in space.
  • FIG. 5 b shows a schematic representation of an arrangement of the partial fields of view 24 a and 24 b , which are arranged in a mutually offset manner along a different image direction, image direction B 2 , and overlap each other.
  • the partial fields of view 24 a and 24 b can each capture the total field of view 26 completely along image direction B 1 and incompletely along image direction B 2 .
  • the overlap area 25 is arranged completely within the total field of view 26 along the image direction B 1 .
  • FIG. 5 c shows a schematic representation of four partial fields of view 24 a to 24 d , which incompletely capture the total field of view 26 in both directions B 1 and B 2 , respectively.
  • Two adjacent partial fields of view 24 a and 24 b overlap in an overlap area 25 b .
  • Two overlapping partial fields of view 24 b and 24 c overlap in an overlap area 25 c .
  • partial fields of view 24 c and 24 d overlap in an overlap area 25 d
  • partial fields of view 24 d overlap with partial fields of view 24 a in an overlap area 25 a .
  • All four partial fields of view 24 a to 24 d can overlap in an overlap area 25 e of the total field of view 26 .
  • a multi-aperture imaging device similar to that described in relation to FIG. 1 a - c may be used to capture the total field of view 26 and the partial fields of view 24 a - d , for example; the array 14 may have five optics, four to capture the partial fields of view 24 a - d , and one optics to capture the total field of view 26 . Accordingly, the array may be configured with three optical channels with regard to FIG. 5 a - b.
  • the overlap area 25 b is captured via the total field of view 26 , the partial field of view 24 a and the partial field of view 24 b .
  • An image format of the total field of view can correspond to a redundancy-free combination of the imaged partial fields of view, for example of the partial fields of view 24 a - d in FIG. 5 c , where the overlap areas 25 a - e are only counted once in each case. In connection with FIGS. 5 a and 5 b , this applies to the redundancy-free combination of the partial fields of view 24 a and 24 b.
  • An overlap in the overlap areas 25 and/or 25 a - e may, for example, include a maximum of 50%, 35% or 20% of the respective partial images.
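As an illustrative sketch of the redundancy-free combination described above (the function and its parameters are assumptions for illustration, not part of the original disclosure), the extent of the stitched image along one image direction can be estimated by counting each pairwise overlap area only once:

```python
def stitched_width(partial_width_px, num_partials, overlap_fraction):
    """Extent of the redundancy-free combination of partial images
    arranged along one image direction: each pairwise overlap area
    between adjacent partial images is counted only once."""
    overlap_px = int(partial_width_px * overlap_fraction)
    return num_partials * partial_width_px - (num_partials - 1) * overlap_px
```

For example, two 1000-pixel-wide partial images with a 20% overlap would combine to a width of 1800 pixels rather than 2000.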
  • a reduction in the number of optical channels can be obtained, which allows cost savings and a reduction in lateral installation space requirements.
  • a form of depth information acquisition which is an alternative to stereoscopic acquisition, is possible without the need for additional sensors such as Time of Flight, Structured or Coded Light and the like.
  • time-of-flight sensors, which provide only low resolution, as well as structured-light sensors, which have a high energy requirement, can thus be avoided. Both approaches still have problems in the presence of intense ambient lighting, especially sunlight.
  • a piezo bender serves as an extremely fast focus actuator with low power consumption.
  • embodiments provide for combining the extraction of depth information from the sequence of focal positions with the extraction of depth information from disparity-based depth information. It is advantageous here to first create a disparity-based depth map and, if the latter has shortcomings, to supplement, correct or improve it with the additional depth information through the sequence of focal positions.
  • the described architecture of the multi-aperture imaging device allows the use of such piezo benders since an otherwise cubic form factor of the camera module makes utilization of long piezo benders more difficult or even impossible. With a short exposure time, this allows shooting focus stacks, i.e. numerous pictures taken quickly one after the other with slightly different focusing of the scene.
  • Embodiments provide that the entire depth of the scene is sampled in a useful manner, for example from macro, the closest possible shot, to infinity, that is, at the largest possible distance.
  • the distances can be arranged equidistantly in the object space, but advantageously in the image space. Alternatively, a different reasonable distance can be chosen.
  • a number of focal positions is at least two, at least three, at least five, at least ten, at least 20, or any other number.
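A hypothetical sketch of how such a sequence of focal positions could be sampled from macro to infinity, spaced equidistantly in image space (which, via the thin-lens relation, corresponds approximately to equidistant steps in object-side diopters, i.e. 1/distance); the function name and parameters are illustrative assumptions:

```python
def focal_positions(n, nearest_distance_m):
    """Sample n >= 2 object distances from macro (nearest_distance_m)
    to infinity. Equidistant steps in image space correspond
    approximately to equidistant steps in diopters (1/distance)."""
    max_diopters = 1.0 / nearest_distance_m
    step = max_diopters / (n - 1)
    positions = []
    for i in range(n):
        d = max_diopters - i * step          # walk from macro toward 0 dpt
        positions.append(float('inf') if d <= 0.0 else 1.0 / d)
    return positions
```

Sampling equidistantly in diopters concentrates focal positions at near distances, where depth of field is shallow, and spaces them out toward infinity, where it is large.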
  • images 42 can be presented to the user.
  • embodiments provide for combining the individual image information so that the user can be provided with an image that has combined image information.
  • an image with depth information which offers the possibility of digital refocusing.
  • the presented image can offer a so-called Bokeh effect, defocusing.
  • the image can also be presented in such a way that the entire image is artificially sharp, which means that a larger distance range than in the single frames of partial areas is in focus, for example the entire image.
  • the sharpness or blur measured in the single frames, together with further information (such as the sharpness of the same objects in adjacent images of the stack, the association of the focus actuator position with an object distance, for example while using a calibrated lookup table, and the direction of the through-focus scan, applied to said frames themselves but also, recursively, to other images in order to avoid ambiguities), may be used for reconstructing the object distance of the individual elements of the scene and for creating therefrom a depth map in image resolution.
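The depth-from-focus reconstruction outlined above can be sketched, in a much simplified form, as a per-pixel search for the sharpest frame in the focus stack; the sharpness measure (Laplacian magnitude) and the function signature are illustrative assumptions, not the claimed method:

```python
import numpy as np

def depth_from_focus(stack, distances):
    """For each pixel, pick the frame of the focus stack in which the
    local sharpness (Laplacian magnitude) is highest, and assign that
    frame's calibrated object distance to the pixel."""
    sharpness = []
    for frame in stack:
        # simple per-pixel sharpness measure: discrete Laplacian magnitude
        lap = np.abs(4.0 * frame
                     - np.roll(frame, 1, axis=0) - np.roll(frame, -1, axis=0)
                     - np.roll(frame, 1, axis=1) - np.roll(frame, -1, axis=1))
        sharpness.append(lap)
    best = np.argmax(np.stack(sharpness), axis=0)  # index of sharpest frame
    return np.asarray(distances)[best]             # lookup: index -> distance
```

In practice, the per-frame distances would come from a calibrated lookup table relating focus actuator position to object distance, as described above.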
  • duplication of the channels for a stereo image can be omitted while a depth map can still be created.
  • This depth map enables the stitching of the different partial images of the multi-aperture imaging device. For example, by halving the number of optical channels, a significant reduction of the lateral dimensions, e.g. along the line extension direction, and, thus, also a cost reduction can be achieved.
  • Image processing can provide images that are at least as good through other steps. Alternatively or additionally, it is possible to dispense with an additional arrangement of time-of-flight sensors or structured-light sensors. This advantage is maintained even if the duplication mentioned above is carried out nevertheless, which may also offer advantages.
  • the total field of view 26 can be captured at least stereoscopically, as described in DE 10 2013 222 780 A1, for example, in order to obtain depth information from the multiple, i.e. at least double, capturing of the total field of view or of the partial fields of view 24 .
  • the at least stereoscopic capturing shown, for example, in FIG. 5 d makes it possible to obtain depth information by viewing the same partial field of view 24 a or 24 b through two optical channels 16 a and 16 c , or 16 c and 16 d , which are spaced apart by a basic distance BA.
  • a number of the partial fields of view is just as freely selectable as an arrangement of same, see the differences in FIGS. 5 a - c as an example.
  • arranging the partial fields of view shown in FIG. 5 d according to FIG. 5 b is advantageous.
  • FIG. 5 d shows only a part of a multi-aperture imaging device 50 . Elements such as the beam deflector 18 or actuators are not shown.
  • In the multi-aperture imaging device 50, there are now two sources of depth information that can be used to create a depth map. These include, on the one hand, the setting of a sequence of focal positions in the multi-aperture imaging device and, on the other hand, the disparity between optical channels for detecting matching image content.
  • the disparity-based depth information can be incomplete or of low quality in places due to occlusions or masking, but in contrast to the sequence of focal positions it can be done quickly and with a low expenditure in terms of electrical energy for actuators and/or computing power (which involves both corresponding computing resources and electrical energy).
  • a preliminary depth map can be created from the disparity-based depth information and be supplemented or improved by a fully or partially created additional depth map from the sequence of focal positions.
  • a preliminary depth map does not necessarily describe a temporal relationship, since the order in which the two depth maps to be combined are created can be arbitrary.
  • the preliminary depth map can be created from the sequence of focal positions and be improved or upgraded by disparity-based depth information.
  • the control means 34 or a different instance of the multi-aperture imaging device 50 may be configured to check the quality or reliability of the depth information or depth map, for example by checking a resolution of the depth information and/or by monitoring the occurrence of occlusions or other effects.
  • additional depth information can be created from the sequence of focal positions in order to improve or correct the preliminary depth map.
  • this additional depth information can be obtained in an energy- and/or computationally efficient way by creating a sequence of focal positions only for those areas of the preliminary depth map that are to be supplemented, i.e. only in a partial area of the depth map.
  • control means specifies a local area and/or an area of the depth planes for determining the depth information on the basis of the sequence of focal positions, i.e. a range of values between minimum and maximum focal positions, on the basis of the locations in the preliminary depth map which are to be supplemented or corrected; it is also possible for a plurality of ranges to be set.
  • control means may be configured to recalculate the depth information only for those areas of the total field of view 26 where improvement, optimization or correction of the preliminary depth map is a requirement, which may also save energy and time.
  • the control means may be configured to select areas of the total field of view in the preliminary depth map on the basis of a quality criterion for which an improvement is required, and to supplement the preliminary depth map in the selected areas and not to supplement it in non-selected areas.
  • the additional depth information for supplementing the preliminary depth map may be determined only for the selected areas and not for non-selected areas.
  • the control means may be configured to determine the at least one region and/or the focal positions by performing a comparison indicating whether the quality or reliability of the depth map in the determined area corresponds to at least one threshold value. This can be achieved by evaluating a reference parameter to determine whether at least a minimum quality is achieved, but also by checking whether a negative quality criterion (e.g. a number of errors or the like) is not exceeded, i.e. whether a threshold value is undershot or at least not exceeded.
  • the aforementioned preliminary depth map can be a depth map created while using the available depth information.
  • the preliminary depth map may also be understood as a collection of depth information (without a specific map format).
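A minimal sketch of supplementing a preliminary, disparity-based depth map with focus-stack depth values only in selected areas that fail a quality criterion; the threshold, the quality measure and all names are assumptions for illustration:

```python
import numpy as np

def merge_depth_maps(disparity_depth, focus_depth, quality, threshold=0.5):
    """Keep the fast, disparity-based preliminary depth map wherever its
    per-pixel quality measure reaches the threshold; supplement it with
    values from the focus-stack depth map only in the selected
    (unreliable) areas, e.g. holes caused by occlusions."""
    merged = disparity_depth.copy()
    unreliable = quality < threshold      # selected areas to be supplemented
    merged[unreliable] = focus_depth[unreliable]
    return merged, unreliable
```

Restricting the focus-stack evaluation to the returned `unreliable` mask reflects the energy- and computation-saving strategy described above.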
  • Embodiments refer to the aspect of combining depth maps from disparity and focus stacks. Embodiments are based on the described architecture with more than 2 channels, which derives a depth map primarily from the natural disparity/parallax of the channels. Depending on the computing and/or power budget, either another depth map can be generated from focus stacks and combined with the first depth map to improve it (essentially hole filling in the event of occlusions) and improve the stitching results, or it can be generated only after obvious defects in the depth map from disparity have been identified. A different order is less advantageous since generating the focus stacks may involve additional energy consumption as well as possibly considerable loss of time, or, as a result, considerably more complicated exposure ratios of the sequential single frames.
  • Advantages that may result from the additional use of a depth map generated from focus stacks of images captured extremely quickly one after the other are: fewer holes in the depth map that are due to occlusions, possibly additional depth planes, especially for larger object distances, possibly improved lateral resolution of the depth map and, overall, due to additional information obtained, an improved signal-to-noise ratio in the depth map and, thus, fewer ambiguities or equivocalities which would otherwise lead to artifacts in the stitched images of the total field of view.
  • FIG. 6 shows a schematic perspective view of a device 60 according to an embodiment with respect to the second aspect.
  • the implementations described above readily apply also to devices 10 1 , 10 3 , 30 and/or 40 .
  • the device 60 or the multi-aperture imaging device of the device 60 can capture two mutually spaced-apart total fields of view 26 1 and 26 2 .
  • the device 60 is designed, for example, as a portable or mobile device, in particular a tablet computer or a mobile phone, in particular a smartphone (intelligent telephone).
  • One of the fields of view 26 1 and 26 2 may, for example, be arranged along a user direction of the device 60 , as is common practice, for example, in the case of selfies for photos and/or videos.
  • the other total field of view may be arranged, for example, along an opposite direction and/or a world direction of the device 60 and may be arranged, for example, along the direction along which the user looks when he/she looks at the device 60 along the user direction from the total field of view.
  • the beam deflector 18 in FIG. 1 b can be formed to be reflective on both sides and deflect the beam path of the optical channels 16 a - d in different positions with different main sides, for example, so that starting from the device 60 , the total fields of view 26 1 and 26 2 are arranged opposite one another and/or at an angle of 180°.
  • FIG. 7 a shows a schematic diagram to illustrate processing of image information 46 1 and 46 2 , which can be obtained by imaging the total fields of view 26 1 and 26 2 .
  • the control means is configured to separate, for example cut out, or isolate, a part 92 of the imaging information 46 1 of the field of view 26 1 or to copy exclusively said part 92 .
  • the control means is further adapted to combine the separated or segmented part 92 with the imaging information 46 2 , i.e. to insert part 92 into the imaging information 46 2 to obtain the accumulated image information 48 . In places, the latter exhibits the total field of view 26 2 and in places, namely where part 92 was inserted, it exhibits the image information 46 1 .
  • the obtaining of the accumulated image information 48 is not limited to the insertion of a single part 92 , but that any number of parts 92 can be segmented from image information 46 1 and one, several or all of these parts can be inserted into image information 46 2 .
  • a location or position at which the part 92 is inserted into the second imaging information 46 2 may be automatically determined by the control means, for example by projecting the part 92 through the device 60 into the second field of view 26 2 , but may alternatively or additionally also be selected by a user.
  • control means is configured to identify and segment a person in the first imaging information 46 1 , for example via pattern matching and/or edge detection, but in particular on the basis of the depth map generated by the device itself.
  • Said control means may be adapted to insert the image of the person into the second imaging information 46 2 to obtain the accumulated image information 48 .
  • part 92 can be a person, such as a user of the device 60 .
  • the device is configured to automatically identify the person and to automatically insert the image of the person, that is, part 92 , into the second imaging information 46 2 .
  • Embodiments provide that the control means uses a depth map, such as the depth map 38 , to position part 92 in the second imaging information 46 2 .
  • the depth map 38 may have a plurality or multiplicity of depth planes, for example in accordance with the number of focal positions considered or a reduced number obtained therefrom or a larger number interpolated therefrom.
  • the control means may be adapted to insert the part 92 into the predetermined depth plane of the second imaging information 46 2 to obtain the accumulated image information 48 .
  • the predetermined depth plane may correspond substantially, i.e. within a tolerance range of ±10%, ±5% or ±2%, to a distance of the first total field of view 26 1 from the device 60 or to the distance of the segmented part 92 from the device 60 , respectively. This may also be referred to as inserting the part 92 into the second imaging information 46 2 in a manner that is correct in terms of depth.
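A simplified sketch of inserting a segmented part into a second image in a depth-correct manner, where the part occludes only background pixels that are farther away than its depth plane; the array layout and all names are illustrative assumptions:

```python
import numpy as np

def insert_at_depth(background, bg_depth, part, mask, position, part_depth):
    """Insert a segmented part (with boolean mask) into the background
    image at the (row, col) position and at the given depth plane: the
    part occludes only background pixels whose depth is larger
    (i.e. which lie farther away)."""
    out = background.copy()
    depth = bg_depth.copy()
    r0, c0 = position
    h, w = part.shape[:2]
    region = (slice(r0, r0 + h), slice(c0, c0 + w))
    visible = mask & (part_depth < depth[region])  # depth-correct occlusion
    out[region][visible] = part[visible]
    depth[region][visible] = part_depth
    return out, depth
```

Background objects nearer than the predetermined depth plane thus remain in front of the inserted part, which is what makes the composition depth-correct rather than a plain overlay.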
  • FIG. 7 b shows a schematic representation of scaling of part 92 in the accumulated image information 48 .
  • a different depth plane can be selected, for which purpose various possibilities of embodiments are provided.
  • the predetermined depth plane may be influenced by or determined by the placement of part 92 in the second imaging information 46 2 .
  • the placement can be effected automatically or by a user input.
  • the control means may be configured to determine, in the second imaging information 46 2 , a distance of the area in which part 92 is to be inserted. Knowing the distance of part 92 from the device and the objects in the second imaging information 46 2 , for example while using depth maps, a virtual distance change of part 92 caused by user input may be compensated for by scaling part 92 .
  • a one-dimensional, two-dimensional or three-dimensional size 94 of the part 92 can be changed, e.g. reduced, to a size 96 when the distance of the part 92 from the first imaging information 46 1 to the second imaging information 46 2 is increased, or may be increased to a size 96 when the distance from the first imaging information 46 1 to the second imaging information 46 2 is reduced.
  • the device may be configured to scale the imaging information 46 1 to obtain scaled imaging information.
  • the scaled imaging information may be inserted into the imaging information 46 2 by the control means to obtain the accumulated image information 48 .
  • the device may be configured to determine a distance of an object, which represents part 92 and is imaged in the first imaging information 46 1 , with respect to the device 60 .
  • the device may scale the imaging information 46 1 , or the part 92 thereof, on the basis of a comparison of the determined distance with the predetermined depth plane in the second imaging information 46 2 . It is advantageous if the two items of imaging information 46 1 and 46 2 are captured within a short time of one another, for example within a time interval of not more than 30 ms, not more than 10 ms, not more than 5 ms or not more than 1 ms, for example approximately 0.1 ms. This time can be used, for example, for a changeover or repositioning of the beam deflector and can be at least partly determined by a duration of this process.
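Since the apparent size of an object scales inversely with its distance, the scaling described above can be sketched as follows (the function and its names are illustrative assumptions):

```python
def scaled_size(size_px, source_distance, target_distance):
    """Apparent size scales inversely with object distance: moving the
    segmented part from the distance at which it was captured to the
    distance of the target depth plane rescales its pixel dimensions
    by the factor source_distance / target_distance."""
    factor = source_distance / target_distance
    return tuple(round(s * factor) for s in size_px)
```

Doubling the virtual distance of the part thus halves its pixel dimensions, matching the reduction from size 94 to size 96 illustrated in FIG. 7 b.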
  • the accumulated image information 48 can be obtained as a single frame, alternatively or additionally also as a video data stream, for example as a large number of single frames.
  • a device is formed, in line with the second aspect, such that the first imaging information 46 1 comprises an image of a user, and the second imaging information 46 2 comprises a world view of the device.
  • the control means is configured to segment an image of the user from the first imaging information 46 1 and to insert it into the world view.
  • the device can be configured to insert the image of the user into the world view in the correct depth.
  • taking a selfie image or shooting a video may include a depth-based combination of quasi-simultaneous picture-taking with a front-facing camera/view and a main camera/view of a device, in particular a mobile phone.
  • the foreground of the selfie i.e. the self-portrait, can be transferred to the foreground of the picture taken by the main camera.
  • a very fast switchover between front and rear picture-taking by changing the position of the beam deflector allows the above-mentioned quasi-simultaneous capturing of the world-side and the user-side camera images with the same image sensor.
  • Although a single-channel imaging device can also be used according to the second aspect, the second aspect provides advantages especially with regard to multi-aperture imaging devices, as these can already create or use a depth map to merge the frames.
  • This depth map can also be used to determine depth information for synthesizing the accumulated imaging information 48 .
  • a procedure is made possible which can be described as follows:
  • the advantage of this is that the selfie shot can be combined with the world-side picture as background without having to turn the phone by 180°, as would otherwise be necessary in order to take a picture of oneself in front of this scene. Alternatively or additionally, this avoids having to take a picture past oneself in a rearward direction, which involves remembering that the orientation of the telephone has to be mirror-inverted in relation to the scene.
  • the depth map itself can be generated according to the second aspect, too, as is described in connection with the first aspect, so that the additional arrangement of time-of-flight sensors or structured-light sensors can be dispensed with.
  • FIG. 8 shows parts of a multi-aperture imaging device 80 which can be used in inventive devices of the first and/or second aspect(s), wherein a possible focusing means and/or actuator for implementing optical image stabilization are not represented but can easily be implemented.
  • the multi-aperture imaging device 80 of FIG. 8 comprises an array 14 of adjacently arranged optical channels 16 a - d , which is formed in several lines or advantageously in one line.
  • Each optical channel 16 a - d comprises optics 22 a - d for imaging a respective partial field of view 24 a - d of a total field of view 26 , or possibly a total field of view, as described in connection with FIG. 5 .
  • each partial field of view captured by the multi-aperture imaging device 80 is projected on a respectively associated image sensor area 28 a - d of an image sensor 12 .
  • the image sensor areas 28 a - d may each be formed from a chip comprising a corresponding pixel array; the chips may be mounted on a common substrate or board 98 , as indicated in FIG. 8 .
  • each of the image sensor areas 28 a - d may be formed from part of a common pixel array that extends continuously or with interruptions across the image sensor areas 28 a - d , with the common pixel array being formed, for example, on a single chip. For example, only the pixel values of the common pixel array in the image sensor areas 28 a - d will then be read out.
  • optical channels 16 a - d are arranged in one line next to one another in the line extension direction of the array 14 , but the number four is only exemplary and might be any other number greater than one, i.e., N optical channels with N>1 can be arranged.
  • the array 14 may also have further lines that extend along the line extension direction.
  • An array 14 of optical channels 16 a - d is understood to be a combination of the optical channels or a spatial grouping thereof.
  • Optics 22 a - d can each have a lens, but also a lens compound or lens stack, as well as a combination of imaging optics and other optical elements, including filters, apertures, reflective or diffractive elements or the like.
  • Array 14 can be designed in such a way that the optics 22 a - d are arranged, fixed or mounted on the substrate 78 in a channel-specific manner, in groups or all channels taken together. This means that a single substrate 78 , several parts thereof, or no substrate 78 may be arranged, for example if the optics 22 a - d are mounted elsewhere.
  • Optical axes or the beam paths 102 a - d of the optical channels 16 a - d may extend, according to an example, in parallel with one another between the image sensor areas 28 a - d and the optics 22 a - d .
  • the image sensor areas 28 a - d are arranged in a common plane, for example, as are the optical centers of optics 22 a - d . Both planes are parallel to each other, i.e. parallel to the common plane of the image sensor areas 28 a - d .
  • optical centers of the optics 22 a - d coincide with centers of the image sensor areas 28 a - d .
  • the optics 22 a - d , on the one hand, and the image sensor areas 28 a - d , on the other hand, are arranged at the same pitch in the line extension direction.
  • An image-side distance between the image sensor areas 28 a - d and the associated optics 22 a - d is set such that the images on the image sensor areas 28 a - d are set to a desired object distance.
  • this distance is within a range equal to or greater than the focal length of the optics 22 a - d , or within a range between the focal length and double the focal length of the optics 22 a - d , both inclusive.
  • the image-side distance along the optical axis 102 a - d between the image sensor area 28 a - d and the optics 22 a - d may also be settable, e.g. manually by a user and/or automatically via a focusing means or autofocus control.
  • the beam deflector 18 is provided to cover a larger total field of view 26 and so that the partial fields of view 24 a - d only partially overlap in space.
  • the beam deflector 18 deflects the beam paths 102 a - d , or optical axes, e.g. with a channel-specific deviation into a total-field-of-view direction 104 .
  • the total-field-of-view direction 104 extends in parallel with a plane that is perpendicular to the line extension direction of array 14 and is parallel to the course of the optical axes 102 a - d prior to or without beam deflection.
  • the total-field-of-view direction 104 of the optical axes 102 a - d is obtained by rotating the line extension direction around an angle that is >0° and ⁇ 180° and, for example, lies between 80 and 100° and, for example, can be 90°.
  • the total field of view 26 of the multi-aperture imaging device 80 which corresponds to the total coverage of the partial fields of view 24 a - d , is thus not located in the direction of an extension of the series connection of the image sensor 12 and the array 14 in the direction of the optical axes 102 a - d , but due to the beam deflection, the total field of view is located laterally to the image sensor 12 and array 14 in a direction in which the installation height of the multi-aperture imaging device 80 is measured, i.e. the lateral direction perpendicular to the line extension direction.
  • the beam deflector 18 deflects, for example, the beam path of each optical channel 16 a - d with a channel-specific deviation from the deflection leading to the direction 104 just mentioned.
  • for each channel 16 a - d , the beam deflector 18 includes, for example, an individually installed element, such as a reflective facet 86 a - d and/or a reflective surface. These are slightly inclined towards each other.
  • the mutual tilting of the facets 86 a - d is selected in such a way that when the beam is deflected by the beam deflector 18 , the partial fields of view 24 a - d are provided with a slight divergence in such a way that the partial fields of view 24 a - d overlap only partially.
  • individual deflection can also be implemented in such a way that the partial fields of view 24 a - d cover the total field of view 26 in a two-dimensional manner, i.e. they are distributed in a two-dimensional manner in the total field of view 26 .
  • the optics 22 a - d of an optical channel can be established to completely or partially generate the divergence in the beam paths 102 a - d , which makes it possible to completely or partially dispense with the inclination between individual facets 86 a - d .
  • the beam deflector can also be formed as a planar mirror.
  • the beam deflector 18 can also be formed differently than previously described.
  • the beam deflector 18 is not necessarily reflective. It can therefore also be designed differently than in the form of a facetted mirror, e.g. in the form of transparent prism wedges.
  • the mean beam deflection might be 0°, i.e. the direction 104 might, for example, be parallel to the beam paths 102 a - d prior to or without beam deflection, or, in other words, the multi-aperture imaging device 80 might still “look straight ahead” despite the beam deflector 18 .
  • Channel-specific deflection by the beam deflector 18 would again result in the partial fields of view 24 a - d overlapping each other only slightly, e.g. in pairs with an overlap of ⁇ 10% in relation to the solid angle ranges of the partial fields of view 24 a - d.
  • the beam paths 102 a - d might deviate from the described parallelism; nevertheless, the parallelism of the beam paths of the optical channels might still be sufficiently pronounced that the partial fields of view, which are covered by the individual channels 16 a -N and/or are projected on the respective image sensor areas 28 a - d , would largely overlap if no further measures, such as beam deflection, were taken. In order to cover a larger total field of view with the multi-aperture imaging device 80, the beam deflector 18 therefore provides the beam paths with an additional divergence such that the partial fields of view of the N optical channels 16 a -N overlap one another less.
  • the beam deflector 18 ensures that the total field of view has an aperture angle that is greater than 1.5 times the aperture angle of the individual partial fields of view of the optical channels 16 a -N.
  • the divergence of the optical axes 102 a - d of these channels 16 a - d might then originate from a pre-divergence of these optical axes, as is achieved by a lateral offset between the optical centers of the optics 22 a - d and the image sensor areas 28 a - d of the channels 16 a - d , or by prism structures or decentered lens sections.
  • the pre-divergence might be limited to one plane.
  • the optical axes 102 a - d might extend within a common plane prior to or without any deflection by the beam deflector 18 , but might extend within said plane in a divergent manner, while the facets 86 a - d only cause an additional divergence within the other, transverse plane, i.e. perpendicular to said common plane.
  • the overall divergence might also be accomplished by the lateral offset between the optical centers of the optics 22 a - d , on the one hand, and centers of the image sensor areas 28 a - d , on the other hand, or by prism structures or decentered lens sections.
  • the aforementioned pre-divergence which may possibly exist may be achieved, for example, by placing the optical centers of optics 22 a - d on a straight line along the line extension direction, while the centers of image sensor areas 28 a - d are arranged to deviate from the projection of the optical centers along the normal of the plane of image sensor areas 28 a - d on points on a straight line within the image sensor plane, such as at points which deviate from the points on the aforementioned straight line within the image sensor plane in a channel-specific manner along the line extension direction and/or along the direction perpendicular to both the line extension direction and the image sensor normal.
  • pre-divergence can be achieved by placing the centers of the image sensors 28 a - d on a straight line along the line extension direction, while the centers of the optics 22 a - d are arranged to deviate from the projection of the optical centers of the image sensors along the normal of the plane of the optical centers of the optics 22 a - d on points on a straight line within the optical center plane, such as at points which deviate from the points on the above-mentioned straight line in the optical center plane in a channel-specific manner along the line extension direction and/or along the direction perpendicular to both the line extension direction and the normal of the optical center plane.
  • the above-mentioned channel-specific deviation from the respective projection occurs only in the line extension direction, i.e. if the optical axes 102 a - d are located only within one common plane and are provided with a pre-divergence. Both optical centers and image sensor area centers will then lie on a straight line parallel to the line extension direction, but with different intermediate distances.
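As an illustrative geometric sketch (an assumption for illustration, not taken from the original disclosure), the channel-specific viewing direction produced by such a lateral offset between an optical center and an image sensor area center can be approximated from the offset and the focal length:

```python
import math

def channel_tilt_deg(lateral_offset_mm, focal_length_mm):
    """Viewing-direction tilt of a channel caused by a lateral offset
    between the optical center and the image sensor area center:
    approximately atan(offset / focal length), signed by the offset
    direction along the line extension direction."""
    return math.degrees(math.atan2(lateral_offset_mm, focal_length_mm))
```

Channel-specific offsets along the line extension direction thus yield the pre-divergence within one plane described above, without any tilt of the facets in that direction.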
  • a lateral offset between lenses and image sensors in a direction that is vertical and lateral to the line extension direction would lead to an increase in the installation height.
  • a purely in-plane offset in the line extension direction does not change the installation height, but it may possibly result in fewer facets, and/or the facets may only have a tilt in an angular orientation, which simplifies the architecture.
  • aspects have been described in connection with a device, it is understood that these aspects also represent a description of the corresponding method, so that a block or component of a device is also to be understood as a corresponding method step or as a feature of a method step. Similarly, aspects described in connection with or as a method step also represent a description of a corresponding block or detail or feature of a corresponding device.

Abstract

An inventive device includes a multi-aperture imaging device having an image sensor; an array of adjacently arranged optical channels, each optical channel including optics for projecting at least one partial field of view of a total field of view on an image sensor area of the image sensor, a beam deflector for deflecting a beam path of the optical channels, and focusing means for setting a focal position of the multi-aperture imaging device. The device further includes control means configured to control the focusing means and to receive image information from the image sensor; wherein the control means is configured to set a sequence of focal positions in the multi-aperture imaging device so as to detect a corresponding sequence of image information of the total field of view, and to create a depth map for the detected total field of view on the basis of the sequence of image information.

Description

    CROSS-REFERENCES TO RELATED APPLICATIONS
  • This application is a continuation of copending International Application No. PCT/EP2019/085645, filed Dec. 17, 2019, which is incorporated herein by reference in its entirety, and additionally claims priority from German Application No. DE 10 2018 222 865.5, filed Dec. 21, 2018, which is incorporated herein by reference in its entirety.
  • BACKGROUND OF THE INVENTION
  • The present invention relates to multi-aperture imaging devices, in particular to a device comprising a multi-aperture imaging device. The device is configured to use the information contained within the multi-aperture imaging device to produce a depth map and/or to accumulate image information. The present invention also relates to extracting depth information from focus stacks with an array camera and/or a self-portrait against a modified background with an array camera.
  • Multi-aperture imaging devices can image the object field by using multiple partial fields of view. There are concepts that use a beam deflection system, such as a mirror, to enable deflection of a viewing direction of the camera channels from the device plane into another direction of the overall system, for example approximately perpendicular thereto. When used in a mobile phone, for example, this perpendicular direction can point toward the user's face or toward the environment in front of him/her, and can essentially be achieved by using switchable hinged mirrors.
  • Devices for efficient image generation and/or devices for easy handling would be desirable.
  • SUMMARY
  • According to an embodiment, a device may have: a multi-aperture imaging device comprising: an image sensor; an array of adjacently arranged optical channels, each optical channel comprising optics for projecting at least a partial field of view of a total field of view on an image sensor area of the image sensor, a beam deflector for deflecting a beam path of said optical channels, focusing means for adjusting a focal position of said multi-aperture imaging device; the device further has: control means adapted to control the focusing means and to receive image information from the image sensor; wherein the control means is adapted to set a sequence of focal positions in the multi-aperture imaging device so as to detect a corresponding sequence of image information of the total field of view, and to create a depth map for the detected total field of view on the basis of the sequence of image information.
  • One finding of the present invention is that by capturing a sequence of images in a sequence of focal positions, the depth map can be created from the device's own image information, so that disparity information is not required and the use of such information can possibly be dispensed with.
  • According to an embodiment of the first aspect, a device comprises a multi-aperture imaging device. The multi-aperture imaging device comprises an image sensor, an array of adjacently arranged optical channels, each optical channel comprising optics for projecting at least a partial field of view of a total field of view on an image sensor area of the image sensor. The multi-aperture imaging device comprises beam deflection means for deflecting a beam path of the optical channels and focusing means for setting a focal position of the multi-aperture imaging device. A control means of the device is adapted to control the focusing means and to receive image information from the image sensor. The control means is configured to set a sequence of focal positions in the multi-aperture imaging device so as to detect a corresponding sequence of image information of the total field of view, and to create a depth map for the detected total field of view on the basis of the sequence of image information. The advantage of this is that the depth map can be generated from the sequence of focal positions, so that even a small number of optical channels is sufficient to obtain depth information.
  • According to an embodiment of the first aspect, the control means is configured to create the depth map from the sequence of image information, e.g. without any additional measurements or evaluations of the depth information by means of other methods. The advantage of this is that the depth map can be generated from the sequence of focal positions, so that even a single shot of the total field of view from one viewing direction can provide a sufficient amount of information for creating the depth map.
  • According to an embodiment of the first aspect, the optical channels are configured to capture the total field of view at least stereoscopically. Said control means is adapted to produce a preliminary depth map on the basis of disparity information obtained from said optical channels and to supplement said preliminary depth map on the basis of depth information based on said sequence of image information to obtain said depth map. Alternatively or in addition, the control means is adapted to produce a preliminary depth map on the basis of the sequence of image information and to supplement the preliminary depth map on the basis of depth information based on disparity information obtained from the optical channels, in order to obtain the depth map. The advantage of this is that a highly accurate and high-quality depth map can be obtained which enables very good stitching results.
  • According to an embodiment of the first aspect, the control means is configured to select areas of the total field of view in the preliminary depth map on the basis of a quality criterion for which improvement is a requirement, and to determine additional depth information to supplement the preliminary depth map for the selected areas, and not to determine same for non-selected areas. The advantage of this is that the additional expenditure in terms of electrical energy and/or computing for supplementing the depth map can be kept low by creating or determining additional depth information, while nevertheless obtaining a high-quality depth map.
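The selective supplementing described above can be illustrated with a short sketch (Python/NumPy; representing the quality criterion as a per-pixel score compared against a threshold is an assumption made for illustration, as are all names):

```python
import numpy as np

def supplement_depth_map(preliminary, quality, fallback_depth, threshold):
    """Keep the preliminary depth (e.g. disparity-based) where its quality
    meets the criterion; fill only the selected low-quality areas with
    depth from the other source (e.g. the focal-position sequence)."""
    needs_improvement = quality < threshold          # areas selected for improvement
    depth = np.where(needs_improvement, fallback_depth, preliminary)
    return depth, needs_improvement
```

For areas that are not selected, no additional depth information has to be computed, which is what keeps the extra expenditure in terms of energy and computation low.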
  • According to an embodiment of the first aspect, the control means is configured to capture a corresponding number of groups of partial images in the sequence of focal positions. Each partial image is associated with an imaged partial field of view, so that each of the groups of partial images has a common focal position. The control means is configured to perform a comparison of local image sharpness information in the partial images and to create the depth map from this. This is made possible, for example, by the fact that the focal position set by the control means for detecting a group of partial images is known: by determining which objects are imaged sharply, that is to say which objects lie in the respectively set focal position, it can be inferred that the sharply imaged image areas have been captured at a distance from the multi-aperture imaging device which corresponds to the set focal position. By using several groups of partial images and therefore several focal positions, corresponding information can be generated for different objects and, thus, for the total field of view in order to achieve the depth map. This allows large-area or even complete mapping of the total field of view with regard to depth information.
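The depth-from-focus principle just described can be sketched as follows (Python/NumPy; the absolute Laplacian as the local sharpness measure and all names are assumptions made for illustration): per pixel, the focal position at which the image appears sharpest yields the depth estimate.

```python
import numpy as np

def depth_from_focus(stack, focal_distances):
    """Per-pixel depth estimate from a focus stack.

    stack: (N, H, W) grayscale images, one per focal position.
    focal_distances: length-N object distances the focus was set to.
    """
    stack = np.asarray(stack, dtype=float)
    # Local sharpness: magnitude of the discrete Laplacian (high where
    # the image contains fine, i.e. sharply imaged, detail).
    lap = np.abs(
        -4 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)              # sharpest focal position per pixel
    return np.asarray(focal_distances)[best]   # map index to object distance
```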
  • According to an embodiment of the first aspect, the device is configured to control the focusing means such that the sequence of focal positions is distributed substantially equidistantly within the image space. This can be done by an equidistant arrangement that is as exact as possible, but also by taking into account a tolerance range of up to ±25%, ±15% or ±5%, wherein the sequence of focal positions is distributed between a minimum focal position and a maximum focal position of the sequence. Due to the equidistance in the image space, uniform precision of the depth map across different distances can be achieved.
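Assuming a thin-lens model (an assumption made for illustration; the patent does not prescribe one), focal positions equidistant in image space can be computed by sampling the image distance v linearly and converting back to object distances via 1/f = 1/u + 1/v:

```python
def focal_positions(f_mm, u_min_mm, u_max_mm, n):
    """Return n focal positions equidistant in image space (thin lens).

    f_mm: focal length; u_min_mm / u_max_mm: nearest and farthest object
    distances of the focus range; all lengths in millimetres.
    """
    v_near = 1.0 / (1.0 / f_mm - 1.0 / u_min_mm)  # image distance, nearest focus
    v_far = 1.0 / (1.0 / f_mm - 1.0 / u_max_mm)   # image distance, farthest focus
    step = (v_near - v_far) / (n - 1)
    image_d = [v_far + i * step for i in range(n)]   # equidistant image distances
    object_d = [1.0 / (1.0 / f_mm - 1.0 / v) for v in image_d]
    return image_d, object_d
```

Equidistant steps in image space correspond to non-uniform steps in object space, which is exactly what yields the uniform precision of the depth map across different distances.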
  • According to an embodiment of the first aspect, the device is configured to generate a sequence of total images reproducing the total field of view on the basis of the sequence of image information, each total image based on a combination of partial images of the same focal position. This joining of partial images can be done while using the depth map in order to obtain a high image quality in the joined total image.
  • According to an embodiment of the first aspect, the device is configured to alter, on the basis of the depth map, a total image which renders the total field of view. When the depth map is known, different image manipulations can be carried out, for example subsequent focusing and/or defocusing of one or more image areas.
  • According to an embodiment of the first aspect, a first optical channel of the array is formed to image a first partial field of view of the total field of view, a second optical channel of the array is formed to image a second partial field of view of the total field of view. A third optical channel is configured to completely image the total field of view. This enables using additional imaging functionalities, for example with regard to a zoom range and/or an increase in resolution.
  • According to an embodiment of the first aspect, the focusing means has at least one actuator for setting the focal position. The focusing means is configured to be at least partially disposed between two planes spanned by sides of a cuboid, the sides of the cuboid being aligned in parallel with each other and with a line extension direction of the array and of part of the beam path of the optical channels between the image sensor and the beam deflector. The volume of the cuboid is minimal and yet designed to include the image sensor, array, and the beam deflector. This allows the multi-aperture imaging device to be designed with a small dimension along a depth direction normal to the planes.
  • According to an embodiment of the first aspect, the multi-aperture imaging device has a thickness direction that is arranged to be normal to the two planes. The actuator has a dimension parallel to the thickness direction. A proportion of at most 50% of the actuator's dimension is arranged, starting from an area between the two planes, in such a way that it extends beyond the two planes. Arranging the actuator to an extent of at least 50% between the planes results in a thin configuration of the multi-aperture imaging device, which also allows a thin configuration of the device.
  • According to an embodiment of the first aspect, the focusing means comprises an actuator for providing a relative movement between optics of at least one of the optical channels and the image sensor. This allows easy setting of the focal position.
  • In accordance with an embodiment of the first aspect, the focusing means is adapted to perform the relative movement between the optics of one of the optical channels and the image sensor while performing a movement of the beam deflector that is simultaneous to the relative movement. This makes it possible to maintain the set optical influence by the beam deflector, for example with regard to a beam deflecting surface of the beam deflector, which is used in an unchanged size to deflect the beam path, which enables a small size of the beam deflector since it is possible to dispense with the provision of relatively large surfaces at an increased distance.
  • According to an embodiment of the first aspect, the focusing means is arranged in such a way that it protrudes by a maximum of 50% from the area between the planes of the cuboid. By arranging the entire focusing means in the area between the planes, including any mechanical components and the like, a very thin design of the multi-aperture imaging device and, thus, of the device is made possible.
  • According to an embodiment of the first aspect, at least one actuator of the focusing means is a piezoelectric bending actuator. This makes it possible to set the sequence of focal positions within a short time interval.
  • According to an embodiment of the first aspect, the focusing means comprises at least one actuator adapted to provide movement. The focusing means further comprises mechanical means for transmitting the movement to the array for setting the focal position. The actuator is arranged on a side of the image sensor which faces away from the array, and the mechanical means is arranged in such a way that a flux of force laterally passes the image sensor. Alternatively, the actuator is arranged on a side of the beam deflector that faces away from the array, and the mechanical means is arranged in such a way that a flux of force laterally passes the beam deflector. This allows the multi-aperture imaging device to be designed in such a way that the actuator is positioned, in the lateral direction, perpendicularly to the thickness direction without blocking the beam paths of the optical channels and without increasing the installation height of the device. When several actuators are used, embodiments provide for arranging all of the actuators on that side of the image sensor which faces away from the array, for arranging all of the actuators on that side of the beam deflector which faces away from the array, or for arranging a subset of the actuators on that side of the image sensor which faces away from the array, and for arranging a subset, which is disjoint therefrom, on that side of the beam deflector which faces away from the array.
  • According to an embodiment of the first aspect, a relative position of the beam deflector is switchable between a first position and a second position so that in the first position, the beam path is deflected towards a first total field of view, and in the second position, the beam path is deflected towards a second total field of view. The control means is adapted to direct the beam deflector to the first position to obtain imaging information of the first total field of view from the image sensor, wherein the control means is further adapted to direct the beam deflector to the second position to obtain imaging information of the second total field of view from the image sensor. The control means is further adapted to insert a portion of said first imaging information into said second imaging information to obtain accumulated image information which in parts represents said first total field of view and in parts represents said second total field of view. This allows easy handling of the device since complex repositioning of the multi-aperture imaging device, e.g. for taking a picture of oneself against a background, can be dispensed with. This is possible, in a particularly advantageous manner, due to the self-generated depth map.
  • A further finding of the present invention is the recognition that by combining image information of different total fields of view, so that the accumulated image information reproduces parts of the first total field of view and parts of the second total field of view, respectively, easy handling of the device can be obtained since, for example, elaborate positioning of the user and/or of the device can be dispensed with.
  • According to an embodiment of the second aspect, a device comprises a multi-aperture imaging device having an image sensor, an array of adjacently arranged optical channels, and a beam deflector. Each optical channel of the array comprises optics for projecting at least a partial field of view of a total field of view on an image sensor area of the image sensor. The beam deflector is configured to deflect a beam path of the optical channels, wherein a relative position of the beam deflector is switchable between a first position and a second position, so that in the first position, the beam path is deflected toward a first total field of view, and in the second position, the beam path is deflected toward a second total field of view. The device further comprises control means adapted to direct the beam deflector to the first position. Thus controlled, image information relating to the first total field of view projected on the image sensor may be obtained by the control means. The control means is configured to direct the beam deflector to the second position to obtain imaging information of the second total field of view from the image sensor. An order of obtaining the imaging information of the first total field of view and of obtaining the imaging information of the second total field of view may be arbitrary. Said control means is configured to insert a portion of said first imaging information into said second imaging information to obtain accumulated image information which in parts represents said first total field of view and in parts represents said second total field of view. This allows combining image contents of different total fields of view, so that time-consuming positioning of the device and/or image objects can be dispensed with.
  • According to an embodiment of the second aspect, the first total field of view is arranged along a direction corresponding to a user direction of the device or corresponding to an oppositely arranged world direction of the device. This allows combining image content in the first total field of view with a total field of view different therefrom.
  • In addition, according to an advantageous embodiment of the second aspect, the second total field of view is arranged along a direction corresponding to the other one of the user direction and the world direction, so that the two total fields of view together capture the world direction and the user direction. In the accumulated image information, contents from the world direction and contents from the user direction can thus be combined with each other.
  • According to an embodiment of the second aspect, the control means is adapted to identify and segment, i.e. to separate, a person in the first image information, or to at least copy the image information relating to the person and to insert the image of the person into the second image information so as to obtain the accumulated image information. This allows inserting the image of the person in an image environment that is actually arranged along a different direction of the device.
  • According to an advantageous configuration of this, the device is configured to automatically identify the person and to automatically insert the image of the person into the second image information. This makes it possible to obtain a picture of oneself (selfie) against an actually different background. This avoids complex positioning of the device, the person and/or the background. It also enables compensating for the fact that the background is mirror inverted.
  • According to an embodiment of the second aspect, the device is adapted to identify and/or segment the part, such as a person or at least a part thereof, while using a depth map generated by the device from the first imaging information. The depth map can, for example, be created while using the first aspect or by other means. This enables easy implementation of the embodiment.
  • According to an embodiment of the second aspect, the device is adapted to create for the second image information a depth map having a plurality of depth planes and to insert the first image information in a predetermined depth plane of the second image information so as to obtain the accumulated image information. This enables integrating the first image information into the second image information in a manner that is correct, in terms of depth, with regard to the predefined or predetermined depth plane.
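A possible realization of such a depth-correct insertion is sketched below (Python/NumPy; it assumes that a segmentation mask for the first image information and a per-pixel depth map for the second image information are already available; all names are hypothetical): the segmented first image is drawn only where no pixel of the second image lies in front of the predetermined depth plane.

```python
import numpy as np

def insert_at_depth(fg, fg_mask, bg, bg_depth, plane_depth):
    """Insert segmented foreground (e.g. a user's image) into the background
    at a given depth plane; background pixels nearer than the plane occlude it.

    fg, bg: (H, W) images; fg_mask: boolean mask of the segmented part;
    bg_depth: (H, W) per-pixel distances of bg; plane_depth: scalar distance.
    """
    out = np.array(bg, dtype=float, copy=True)
    # Foreground is visible where the mask is set AND nothing in the
    # background lies in front of the insertion plane.
    visible = fg_mask & (bg_depth >= plane_depth)
    out[visible] = np.asarray(fg, dtype=float)[visible]
    return out
```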
  • According to an advantageous embodiment of this, the predefined depth plane is, within a tolerance range of 10%, equal to a distance of the first total field of view from the device. This enables the accumulated image information to be obtained in such a way that the second total field of view is represented as if the first image information, or the portion thereof, had been arranged along the other direction of the device.
  • According to another advantageous embodiment of this, the predefined depth plane is based on a user input associated with the placement of the first image information. This makes it possible that the depth plane that is to be considered can be varied between different pictures taken and/or can be adapted to the user's selection, by means of a user's input.
  • According to an embodiment of the second aspect, the device is configured to scale the first image information so as to obtain scaled first image information, and to insert the scaled first image information into the second image information so as to obtain the accumulated image information. This makes it possible to insert the first image information into the second image information in such a way that a predefined perception is obtained in the accumulated image information, in particular with regard to the size of the first image information, which is advantageous in particular in combination with the adjustable depth plane into which the first image information is inserted, so that, in addition to the insertion which is correct in terms of depth, a representation which is correct in terms of size is also possible.
  • According to an embodiment of the second aspect, the device is configured to determine a distance of an object imaged in the first image information with respect to the device and to scale the first image information on the basis of a comparison of the determined distance with the predetermined depth plane in the second image information. This makes it possible to automatically take into account, by scaling, i.e. by adjusting the size, a distance of the first image information which is changed by the depth plane when inserting it into the second image information.
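Under a pinhole-camera assumption, apparent size is inversely proportional to distance, so the scaling described in this paragraph reduces to the ratio between the determined object distance and the predetermined depth plane. The sketch below (illustrative names; nearest-neighbour resampling chosen only for brevity) shows both the factor and its application:

```python
import numpy as np

def scale_factor(measured_distance, plane_depth):
    """Pinhole model: moving an object from its measured distance to the
    target depth plane rescales its apparent size by this ratio."""
    return measured_distance / plane_depth

def rescale_nearest(img, factor):
    """Nearest-neighbour rescaling of a (H, W) image by `factor`."""
    h, w = img.shape
    nh, nw = max(1, round(h * factor)), max(1, round(w * factor))
    rows = np.minimum((np.arange(nh) / factor).astype(int), h - 1)
    cols = np.minimum((np.arange(nw) / factor).astype(int), w - 1)
    return img[np.ix_(rows, cols)]
```

An object segmented at a distance of 0.5 m and inserted into a depth plane at 2 m would thus be reduced to a quarter of its captured size.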
  • According to an embodiment in accordance with the second aspect, the device is configured to detect the first total field of view and the second total field of view within a time interval of at least 0.1 ms to at most 30 ms. The lower limit is optional. Such fast capturing of both total fields of view makes it possible to reduce or even avoid temporal changes in the total fields of view.
  • According to an embodiment in accordance with the second aspect, the device is configured to provide the accumulated image information as a video data stream. For this purpose, the device may obtain a plurality of accumulated image information data for a plurality of sequential images of the first total field of view and/or the second total field of view and combine them into an image sequence as a video data stream.
  • Alternatively or additionally, in accordance with the second aspect, embodiments provide for the provision of the accumulated image information as a still image.
  • According to an embodiment in accordance with the second aspect, the first image information comprises the image of a user, and the second image information comprises a world view of the device. The control means is configured to segment an image of the user from the first image information, possibly on the basis of depth map information generated with the device, and to insert the image of the user into the world view. This makes it easy to obtain a selfie by using the device.
  • According to an embodiment in accordance with the second aspect, the device is configured to insert the image of the user into the world view in a manner that is correct in terms of depth. This enables the impression that the user is standing in front of the world view, without the need for time-consuming positioning.
  • According to an embodiment in accordance with the second aspect, the device is configured to capture, with different focal positions, a sequence of images of the first total field of view and/or the second total field of view, and to create from the sequence of images a depth map for the first total field of view and/or the second total field of view. This enables, in particular, combining the second image information with the first image information within a predefined depth plane and/or imaging that is correct in terms of depth; for this purpose, the advantages of the first aspect of the present invention can be exploited. This means that the first aspect can be combined with implementations of the second aspect and/or the second aspect can be combined with implementations of the first aspect. Particularly in combination, the two aspects result in advantageous designs, which will be discussed later.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
  • FIG. 1a shows a schematic perspective view of a device according to the first aspect;
  • FIG. 1b shows a schematic perspective view of a device according to an embodiment of the second aspect;
  • FIG. 1c shows a schematic perspective view of a device according to an embodiment, combining the first aspect and the second aspect;
  • FIG. 2a shows a schematic view of different focal positions according to an embodiment, in which a device can be controlled according to the first aspect;
  • FIG. 2b shows a schematic representation of the utilization of a depth map generated from different focal positions according to an embodiment as well as its generation;
  • FIG. 3a shows a schematic perspective view of a device according to an embodiment in which an image sensor, an array of optical channels, and a beam deflector span a cuboid in space;
  • FIG. 3b shows a schematic lateral sectional view of the device of FIG. 3a according to an embodiment, wherein the multi-aperture imaging device comprises a plurality of actuators;
  • FIG. 3c shows a schematic lateral sectional view of the multi-aperture imaging device of FIG. 3a and/or 3 b, in which different total fields of view can be detected on the basis of different positions of the beam deflector;
  • FIG. 4a shows a schematic top view of a device according to an embodiment in which the actuator is formed as a piezoelectric bending actuator;
  • FIG. 4b shows a schematic lateral sectional view of the device of FIG. 4a to illustrate the arrangement of the actuator between the planes of the cuboid that are described in relation to FIG. 3 a;
  • FIG. 5a-5d show schematic representations of arrangements of partial fields of view in a total field of view, according to embodiments;
  • FIG. 6 shows a schematic perspective view of a device according to an embodiment of the second aspect;
  • FIG. 7a shows a schematic diagram illustrating processing of the image information that can be obtained by imaging the total fields of view according to an embodiment;
  • FIG. 7b shows a schematic representation of scaling of a portion of image information in the accumulated image information according to an embodiment; and
  • FIG. 8 shows parts of a multi-aperture imaging device according to an embodiment, which can be used in inventive devices of the first and/or second aspect(s).
  • DETAILED DESCRIPTION OF THE INVENTION
  • Before the embodiments of the present invention will be explained in detail below with reference to the drawings, it shall be pointed out that elements, objects and/or structures in the different figures which are identical, identical in function or effect are provided with the same reference numeral, so that the description of these elements presented in different embodiments is interchangeable with each other or can be applied to each other.
  • FIG. 1a shows a schematic perspective view of a device 10 1 according to the first aspect. The device 10 1 comprises a multi-aperture imaging device comprising an image sensor 12 and an array 14 of adjacently arranged optical channels 16 a-d. The multi-aperture imaging device further comprises a beam deflector 18 for deflecting a beam path of the optical channels 16 a-d. This allows the beam paths of the optical channels 16 a-d to be deflected from a lateral course, running between the image sensor 12 and the beam deflector 18 through the optics 22 a-d of the array 14, toward a non-lateral course. The beam paths of the different optical channels 16 a-d are deflected in such a way that each optical channel 16 a-d projects a partial field of view 24 a-d of a total field of view 26 on an image sensor area 28 a-d of the image sensor 12. The partial fields of view 24 a-d can be distributed in space in a one-dimensional or two-dimensional manner or, on the basis of different focal lengths of the optics 22 a-d, in a three-dimensional manner. For better comprehensibility, the total field of view 26 will be described below in such a way that the partial fields of view 24 a-d have a two-dimensional distribution, wherein adjacent partial fields of view 24 a-d can overlap each other. The total area of the partial fields of view results in the total field of view 26.
  • The multi-aperture imaging device comprises focusing means 32 for setting a focal position of the multi-aperture imaging device. This can be done by varying a relative location or position between the image sensor 12 and the array 14, wherein the focusing means 32 can be adapted to vary a position of the image sensor 12 and/or a position of the array 14 in order to obtain a variable relative position between the image sensor 12 and the array 14, so as to set the focal position of the multi-aperture imaging device.
  • Setting the relative position can be channel-specific or apply to groups of optical channels or to all channels. For example, single optics 22 a-d may be moved, or a group of optics 22 a-d or all optics 22 a-d may be moved together. The same applies to the image sensor 12.
  • The device comprises control means 34 adapted to control the focusing means 32. In addition, the control means 34 is adapted to receive image information 36 from the image sensor 12. This could be, for example, the partial fields of view 24 a-d projected on the image sensor areas 28 a-d, or information or data corresponding to the projections. This does not preclude intermediate processing of the image information 36, for example with regard to filtering, smoothing or the like.
  • The control means 34 is configured to set a sequence of focal positions in the multi-aperture imaging device so as to detect a corresponding sequence of image information of the total field of view 26. The control means 34 is adapted to create a depth map 38 for the total field of view 26 from the sequence of image information. The depth map 38 can be provided via a corresponding signal. The control means 34 is capable of capturing different images of the same field of view 26 on the basis of the different focal positions obtained by different relative positions between the image sensor 12 and the array 14, and/or is capable of capturing differently focused partial images thereof in accordance with segmentation by the partial fields of view 24 a-d.
  • Depth maps can be used for different purposes, for example for image processing, but also for image merging. Thus, the control means 34 may be adapted to connect individual images (single frames) obtained from the image sensor areas 28 a to 28 d while using the depth map 38 to obtain image information 42 representing the image of the total field of view 26, that is, a total image. Using a depth map is particularly advantageous for such methods of merging partial images, also known as stitching.
  • While using the depth map, the control means may be configured to assemble the partial images of a group of partial images to form a total image. This means that the depth map used for stitching can be generated from the very partial images to be stitched. For example, on the basis of the sequence of image information, a sequence of total images representing the total field of view can be generated. Each total image can be based on a combination of partial images with the same focal position. Alternatively or additionally, at least two, several or all total images from the sequence can be combined to obtain a total image with extended information, e.g. to create a Bokeh effect. Alternatively or additionally, the image can also be represented in such a way that the entire image is artificially sharp, i.e. a larger number of partial areas, up to the entire image, are in focus than is the case in the single frames.
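An "artificially sharp" total image of the kind described can be sketched as a per-pixel selection from the focus stack (Python/NumPy; the absolute-Laplacian sharpness measure and all names are illustrative assumptions, not the patent's prescribed method):

```python
import numpy as np

def all_in_focus(stack):
    """Combine a focus stack into one artificially sharp image by taking,
    per pixel, the value from the frame in which that pixel is sharpest."""
    stack = np.asarray(stack, dtype=float)
    # Sharpness measure: magnitude of the discrete Laplacian per frame.
    lap = np.abs(
        -4 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)                      # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], 0)[0]  # gather selected values
```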
  • The device 10 1 is configured to create the image of the total field of view as a mono image and to create the depth map 38 from the sequence of mono images. Although multiple scanning of the total field of view 26 is also possible, the device 10 1 can generate the depth map from a mere mono image, which spares the need for additional pictures taken from different viewing directions, e.g. multiple pictures taken with the same device or a redundant arrangement of additional optical channels.
  • FIG. 1b shows a schematic perspective view of a device 10 2 according to an embodiment of the second aspect. Compared to the device 10 1, the device 10 2 has, instead of the control means 34, a control means 44 configured to direct the beam deflector to different positions 18 1 and 18 2. In the different positions 18 1 and 18 2, the beam deflector 18 has different relative positions, so that in the different positions or locations, image information of different total fields of view 26 1 and 26 2 is obtained since the beam paths of the optical channels 16 a-d are deflected in different directions depending on the positions 18 1 and 18 2. Alternatively or in addition to the control means 34, the device 10 2 comprises control means 44 configured to direct the beam deflector to the first position 18 1 to obtain imaging information of the first total field of view 26 1 from the image sensor 12. Before or after that, the control means 44 is adapted to direct the beam deflector 18 to the second position 18 2 to obtain image information of the second total field of view 26 2 from the image sensor 12. The control means 44 is adapted to insert a portion of the first image information 46 1 into the second image information 46 2 to obtain common or accumulated image information 48. The accumulated image information 48 may reproduce parts of the first total field of view 26 1 and parts of the second total field of view 26 2, which involves steps of image manipulation or processing. This means that the accumulated image information 48 is based, in some places, on an image of the total field of view 26 1 and, in other places, on an image of the total field of view 26 2.
  • The control means 44 may be adapted to provide a signal 52 containing or reproducing the accumulated image information 48. Optionally, the image information 46 1 and/or 46 2 can also be output by signal 52.
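The step of inserting a portion of the first image information into the second, yielding the accumulated image information 48, can be sketched as follows. This is a minimal illustration under stated assumptions: both total images are numpy arrays of the same size, the inserted portion is a simple rectangle, and a real device would first match scale, perspective and alignment; all names are hypothetical.

```python
import numpy as np

def insert_portion(first, second, top, left, height, width):
    """Copy a rectangular portion of the first total image into a copy of
    the second, so the result stems partly from each total field of view.
    Illustrative sketch; function and parameter names are assumptions."""
    out = np.array(second, copy=True)
    out[top:top + height, left:left + width] = \
        np.asarray(first)[top:top + height, left:left + width]
    return out
```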
  • FIG. 1c shows a schematic view of a device 10 3 according to an embodiment which comprises, instead of the control means 34 of FIG. 1a and instead of the control means 44 of FIG. 1b, a control means 54 which combines the functionality of the control means 34 and of the control means 44 and is adapted to produce the depth map 38 on the basis of a variable focal position of the device 10 3, and to provide the accumulated image information 48 of FIG. 1b.
  • FIG. 2a shows a schematic view of different focal positions 56 1 to 56 5 which can be set in a device according to the first aspect, for example the device 10 1 and/or the device 10 2. The different focal positions 56 1 to 56 5 can be understood as positions or distances 58 1 to 58 5 at which objects in the captured field of view are projected on the image sensor 12 in a focused manner. The number of focal positions 56 is arbitrary, provided it is greater than 1.
  • Distances 62 1 to 62 4 between adjacent focal positions can refer to distances in the image space, although the explanation can also be transferred to distances in the object space. However, the advantage of considering the image space is that the properties of the multi-aperture imaging device are taken into account, especially with regard to a minimum or maximum object distance. Control means 34 and/or 54 may be adapted to control the multi-aperture imaging device to adopt two or more focal positions 56 1 to 56 5. In the respective focal position, single frames 64 1 and 64 2 can be captured in accordance with the number of partial fields of view 24 captured. On the basis of the knowledge of which of the focal positions 56 1 to 56 5 has been set to obtain the respective partial image 64 1 and 64 2, the control means can determine, by analyzing which parts of the image information are sharply imaged, the distance at which these sharply imaged objects are arranged with respect to the device. This distance information can be used for the depth map 38. This means that the control means can be configured to capture a corresponding number of groups of partial images in the sequence of focal positions 56 1 to 56 5, each partial image being associated with a partial field of view imaged. The group of partial images can therefore correspond to those partial images that represent the total field of view in the set focal position.
  • The control means may be configured to produce the depth map from a comparison of local image sharpness information in the partial images. The local sharpness information can indicate in which areas of the image the objects are in focus, or are sharply imaged within a previously defined tolerance range. For example, a determination of the edge blurring function and a detection of the distances over which the edges extend can be used to determine whether a corresponding image area, a corresponding object or a part thereof is sharply imaged or is blurred on the image sensor. Furthermore, the sequential-image or line-blurring function can be used as a quality criterion for the sharpness of image content. Alternatively or additionally, any known optical sharpness metric, as well as the known Modulation Transfer Function (MTF), can be used. Alternatively or additionally, the sharpness of the same objects in adjacent images of the stack, association of the focus actuator position with the object distance via a calibrated lookup table and/or the direction of the through-focus scan can be used to obtain the depth information from adjacent images of the stack in a partly recursive manner and to avoid ambiguities. Since the set focal position is uniquely correlated with the object distance that is imaged in focus, the knowledge that an object is imaged in focus, at least within the predetermined tolerance range, allows the distance of the corresponding image area, object or part thereof to be inferred, which can serve as a basis for the depth map 38.
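The sharpness-based distance estimation described above can be sketched as a depth-from-focus routine: for each pixel, the focal position whose frame shows the highest local sharpness determines that pixel's depth. This is a minimal illustration, assuming numpy arrays, a simple Laplacian magnitude as the sharpness metric (the text's edge-blurring or MTF-based criteria would replace it in practice), and a calibrated list of object distances, one per focal position; all names are hypothetical.

```python
import numpy as np

def depth_from_focus(stack, focus_distances):
    """Estimate a depth map from a through-focus image stack: per pixel,
    pick the focal position whose frame shows the highest local sharpness
    (here a plain Laplacian magnitude) and assign that position's
    calibrated object distance. Sketch only; a real implementation would
    add smoothing and interpolation between neighboring focal positions."""
    stack = np.asarray(stack, dtype=float)
    sharp = []
    for frame in stack:
        p = np.pad(frame, 1, mode='edge')
        # 4-neighbor Laplacian magnitude as a crude sharpness measure.
        lap = np.abs(4 * frame - p[:-2, 1:-1] - p[2:, 1:-1]
                     - p[1:-1, :-2] - p[1:-1, 2:])
        sharp.append(lap)
    idx = np.argmax(np.asarray(sharp), axis=0)   # sharpest focal position per pixel
    return np.asarray(focus_distances, dtype=float)[idx]
```

With two frames, one sharp in the upper image half and one in the lower half, each half of the resulting depth map receives the distance calibrated for the corresponding focal position.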
  • The device may be configured to control the focusing means 32 so that the sequence of focal positions 56 1 to 56 5 is distributed equidistantly in the image space between a minimum focal position and a maximum focal position, within a tolerance range of ±25%, ±15% or ±5%, advantageously as close to 0% as possible. To save time when setting a focal position, it makes sense, but is not necessary, to traverse the focal positions 56 1 to 56 5 in monotonically increasing or decreasing order. In principle, however, the order of the set focal positions 56 1 to 56 5 is arbitrary.
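The equidistant distribution in the image space can be made concrete with the thin-lens equation 1/f = 1/g + 1/b (focal length f, object distance g, image distance b): equidistant focal positions in the image space correspond to equidistant lens-to-sensor distances between the settings that focus the nearest and the farthest object. The following sketch is illustrative only; function names and units are assumptions, with g_min and g_max required to exceed f.

```python
def focal_positions(f, g_min, g_max, n):
    """Return n image distances spaced equidistantly in image space between
    the positions focusing the nearest (g_min) and farthest (g_max) object
    distances, via the thin-lens equation. Units are arbitrary but must be
    consistent; illustrative sketch only."""
    def image_dist(g):
        # 1/f = 1/g + 1/b  =>  b = 1 / (1/f - 1/g)
        return 1.0 / (1.0 / f - 1.0 / g)
    b_far = image_dist(g_max)    # smallest image distance (far focus)
    b_near = image_dist(g_min)   # largest image distance (close focus)
    step = (b_near - b_far) / (n - 1)
    return [b_far + i * step for i in range(n)]
```

For f = 10, a close focus at g = 100 and far focus near infinity, the five positions span the image-space interval from about 10.0 to about 11.1 in equal steps; equal steps in object space would instead crowd the positions near the close-focus end.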
  • FIG. 2b shows a schematic representation of the use of the depth map 38 and its generation. The partial images 64 1 and 64 2 can each be used to obtain partial information 38 1 to 38 5 of the depth map 38 from the respective focal position 56 1 to 56 5 since the objects sharply represented in the single frames 64 1 and 64 2 can be precisely determined with respect to their distance. Between the focal positions 56 1 and 56 5, however, interpolation methods can also be used, so that sufficiently precise information can still be obtained for the depth map 38 even with slightly blurred objects. The distance information contained in the partial information 38 1 to 38 5 may be combined by the control means to form the depth map 38. The depth map 38 can be used to combine the single frames 64 1 and 64 2 from the different focal positions 56 1 to 56 5 to form a corresponding number of total images 42 1 to 42 5.
  • FIG. 3a shows a schematic perspective view of a device 30 according to an embodiment. The image sensor 12, the array 14 and the beam deflector 18 can span a cuboid in space.
  • The cuboid can also be understood as a virtual cuboid and can, for example, have a minimum volume and, in particular, a minimum vertical extension along a direction parallel to a thickness direction y that is perpendicular to a line extension direction 66. The line extension direction 66, for example, runs along a z direction and perpendicularly to an x direction, which is arranged parallel to a course of the beam paths between the image sensor 12 and the array 14. The directions x, y and z may span a Cartesian coordinate system. The minimum volume of the virtual cuboid, or the minimum vertical extension thereof, can be such that the virtual cuboid nevertheless comprises the image sensor 12, the array 14 and the beam deflector 18. The minimum volume can also be understood as describing a cuboid which is spanned by the arrangement and/or operational movement of the image sensor 12, the array 14 and/or the beam deflector 18. The line extension direction 66 can be arranged in such a way that the optical channels 16 a and 16 b are arranged next to each other, possibly parallel to each other, along the line extension direction 66. The line extension direction 66 can be fixedly arranged in space.
  • The virtual cuboid may have two sides arranged opposite each other in parallel, parallel to the line extension direction 66 of array 14, and parallel to a portion of the beam path of the optical channels 16 a and 16 b between the image sensor 12 and the beam deflector 18. In a simplified manner, but without a restrictive effect, these can be, for example, a top side and a bottom side of the virtual cuboid. The two sides can span a first plane 68 a and a second plane 68 b. This means that both sides of the cuboid can be part of plane 68 a or 68 b, respectively. Further components of the multi-aperture imaging device may be arranged entirely, but at least partially, within the area between planes 68 a and 68 b, so that an installation space requirement of the multi-aperture imaging device along the y direction, which is parallel to a surface normal of planes 68 a and/or 68 b, may be small, which is advantageous. A volume of the multi-aperture imaging device may have a small or minimal installation space between planes 68 a and 68 b. Along the lateral sides or extension directions of planes 68 a and/or 68 b, the installation space of the multi-aperture imaging device can be large or arbitrarily large. The volume of the virtual cuboid is influenced, for example, by an arrangement of the image sensor 12, the array 14 and the beam deflector 18, the arrangement of these components being such that, according to the embodiments described herein, the installation space of these components along the direction perpendicular to the planes and, thus, the distance of the planes 68 a and 68 b from each other becomes small or minimal. Compared to other arrangements of the components, the volume and/or distance of other sides of the virtual cuboid can be increased.
  • The device 30 comprises an actuator 72 for generating a relative movement between the image sensor 12, the single-line array 14 and the beam deflector 18. This may include, for example, a positioning movement of the beam deflector 18 to switch between the positions described in relation to FIG. 1b . Alternatively or additionally, the actuator 72 can be configured to execute the relative movement described in connection with FIG. 1a to change the relative position between the image sensor 12 and the array 14. The actuator 72 is arranged at least partially between the planes 68 a and 68 b. The actuator 72 may be adapted to move at least one of the image sensor 12, the single-line array 14 and the beam deflector 18, which may include rotational and/or translational movements along one or more directions. Examples of this are a channel-specific change of a relative position between image sensor areas 28 of a respective optical channel 16, of the optics 22 of the respective optical channel 16 and of the beam deflector 18 and/or of the corresponding segment or facet, respectively, and/or a channel-specific change of an optical property of the segment/facet relating to the deflection of the beam path of the respective optical channel. Alternatively or additionally, the actuator 72 may at least partially implement an autofocus and/or optical image stabilization.
  • The actuator 72 may be part of the focusing means 32 and may be adapted to provide a relative movement between at least one optics of at least one of optical channels 16 a and 16 b and the image sensor 12. The relative movement between the optics 22 a and/or 22 b and the image sensor 12 can be controlled by the focusing means 32 in such a way that the beam deflector 18 performs a simultaneous movement. When reducing a distance between the optics 22 a and/or 22 b and the image sensor, the distance between the beam deflector 18 and the image sensor 12 can be reduced accordingly, so that a relative distance between the array 14 and the optics 22 a and/or 22 b, respectively, and the beam deflector 18 is substantially constant. This enables the beam deflector 18 to be equipped with small beam deflecting surfaces since a beam cone growing due to a growing distance between the array 14 and the beam deflector 18 may be compensated for by maintaining the distance from the beam deflector 18.
  • The focusing means 32 and/or the actuator 72 are arranged in such a way that they protrude by not more than 50% from the area between the planes 68 a and 68 b. The actuator 72 may have a dimension or extension 74 parallel to the thickness direction y. A proportion of a maximum of 50%, a maximum of 30% or a maximum of 10% of the dimension 74 can project beyond the plane 68 a and/or 68 b, starting from an area between planes 68 a and 68 b, and, thus, project out of the virtual cuboid. This means that at the most, the actuator 72 projects only insignificantly from the plane 68 a and/or 68 b. According to embodiments, the actuator 72 does not project beyond the planes 68 a and 68 b. The advantage of this is that an extension of the multi-aperture imaging device along the thickness direction y is not increased by the actuator 72.
  • Although the beam deflector 18 is depicted to be rotatably mounted about an axis of rotation 76, the actuator 72 can alternatively or additionally also generate a translational movement along one or more spatial directions. The actuator 72 can comprise one or more single actuators, possibly to generate different single movements in an individually controllable manner. The actuator 72 or at least a single actuator thereof may, for example, be implemented as or comprise a piezo actuator, in particular a piezoelectric bending actuator, described in more detail in connection with FIG. 4. A piezo bender allows a fast and reproducible change of position. This feature is advantageous for capturing focus stacks, i.e. several or many images within a short time. Piezo benders, being actuators designed along one dimension or direction, are particularly well suited to the described architecture since they have a correspondingly advantageous form factor, i.e. an extension mainly in one direction.
  • The array 14 can include a substrate 78 to or at which optics 22 a and 22 b are attached or arranged. The substrate 78 may be at least partially transparent to the optical paths of the optical channels 16 a and 16 b by means of recesses or materials suitably selected; this does not preclude manipulation of the optical channels being performed, for example by arranging filter structures or the like.
  • Several requirements placed upon the actuator 72, such as fast adjustability for quickly setting the different focal positions 56, a large force and a small installation space, can be met by using piezoelectric actuators.
  • FIG. 3b shows a schematic lateral sectional view of the device 30 according to an embodiment. The multi-aperture imaging device of the device 30 may comprise, e.g., a plurality of actuators, e.g. more than one, more than two, or a different number >0. For example, actuators 72 1 to 72 5 may be arranged which can be used for different purposes, for example for adjusting the focal position and/or changing the position or location of the beam deflector 18 for setting the viewing direction of the multi-aperture imaging device and/or for providing optical image stabilization by rotational movement of the beam deflector 18 and/or translational movement of the array 14.
  • The actuators 72 1 to 72 5 can be arranged in such a way that they are at least partially arranged between the two planes 68 a and 68 b, which are spanned by the sides 69 a and 69 b of the virtual cuboid 69. The sides 69 a and 69 b of the cuboid 69 can be aligned parallel to each other and parallel to the line extension direction of the array and part of the beam path of the optical channels between the image sensor 12 and the beam deflector 18. The volume of the cuboid 69 is minimal but still includes the image sensor 12, the array 14 and the beam deflector 18 as well as their operational movements. Optical channels of array 14 have optics 22, which can be the same or different for each optical channel.
  • A volume of the multi-aperture imaging device may have a small or minimal installation space between the planes 68 a and 68 b. Along the lateral sides or extension directions of the planes 68 a and/or 68 b, an installation space of the multi-aperture imaging device can be large or arbitrarily large. The volume of the virtual cuboid is influenced, for example, by an arrangement of the image sensor 12, the single-line array 14 and the beam deflector, the arrangement of these components being such that, according to the embodiments described herein, the installation space of these components along the direction perpendicular to the planes and, thus, the distance of the planes 68 a and 68 b from each other becomes small or minimal. Compared to other arrangements of the components, the volume and/or distance of other sides of the virtual cuboid can be increased.
  • The virtual cuboid 69 is represented by dotted lines. The planes 68 a and 68 b can comprise or be spanned by two sides of the virtual cuboid 69. A thickness direction y of the multi-aperture imaging device may be normal to planes 68 a and/or 68 b and/or parallel to the y direction.
  • The image sensor 12, the array 14 and the beam deflector 18 may be arranged such that a vertical distance between the planes 68 a and 68 b along the thickness direction y, which—in a simplified manner, but without a restrictive effect—can be referred to as the height of the cuboid, is minimal, wherein a minimization of the volume, i.e. of the other dimensions of the cuboid, can be dispensed with. An extension of the cuboid 69 along the direction y may be minimal and be substantially predetermined by the extension of the optical components of the imaging channels, i.e. array 14, image sensor 12 and beam deflector 18, along the direction y.
  • The actuators 72 1 to 72 5 can each have a dimension or extension that is parallel to the direction y. A proportion not exceeding 50%, 30% or 10% of the dimension of each actuator 72 1 to 72 5 may protrude, starting from an area between both planes 68 a and 68 b, beyond plane 68 a and/or 68 b or protrude from said area. This means that the actuators 72 1 to 72 5 project, at the most, insignificantly beyond plane 68 a and/or 68 b. According to embodiments, the actuators do not protrude beyond planes 68 a and 68 b. The advantage of this is that an extension of the multi-aperture imaging device along the thickness direction, or direction y, is not increased by the actuators.
  • Although terms used here such as above, below, left, right, front or back are used for improved clarity, they are not intended to have any restrictive effect. It is obvious that these terms are mutually interchangeable on the basis of a rotation or tilt within space. For example, the x direction from the image sensor 12 to the beam deflector 18 can be understood as being at the front or forwards. For example, a positive y direction can be understood as being above. A region along the positive or negative z direction away from or at a distance from the image sensor 12, the array 14 and/or the beam deflector 18 can be understood as being adjacent to the respective component. In simpler terms, an image stabilizer may include at least one of the actuators 72 1 to 72 5. Said at least one actuator may be located within a plane 71 or between planes 68 a and 68 b.
  • In other words, actuators 72 1 to 72 5 can be located in front of, behind or next to the image sensor 12, the array 14 and/or the beam deflector 18. According to embodiments, the actuators protrude from the area between planes 68 a and 68 b by no more than 50%, 30% or 10% of their extent.
  • FIG. 3c shows a schematic lateral sectional view of the multi-aperture imaging device wherein different total fields of view 26 1 and 26 2 are detectable on the basis of different positions of the beam deflector 18 since the multi-aperture imaging device then has different viewing directions. The multi-aperture imaging device may be adapted to vary a tilt of the beam deflector by an angle α so that, alternately, different main sides of the beam deflector 18 are arranged facing the array 14. The multi-aperture imaging device may include an actuator adapted to tilt the beam deflector 18 about the rotation axis 76. For example, the actuator may be adapted to move the beam deflector 18 to a first position in which the beam deflector 18 deflects the beam paths of the optical channels of the array 14 toward the positive y direction. For this purpose, the beam deflector 18 may have, in the first position, e.g. an angle α of >0° and <90°, of at least 10° and at most 80° or of at least 30° and at most 50°, e.g. 45°. The actuator may be adapted to rotate the beam deflector 18 about the axis of rotation 76 to a second position in which the beam deflector 18 deflects the beam paths of the optical channels of the array 14 toward the negative y direction, as represented by the viewing direction toward the total field of view 26 2 and by the dashed representation of the beam deflector 18. For example, the beam deflector 18 can be configured to be reflective on both sides, so that in the first position, the viewing direction points toward the total field of view 26 1.
  • FIG. 4a shows a schematic top view of a device 40 according to an embodiment in which the actuator 72 is formed as a piezoelectric bending actuator. The actuator 72 is configured to perform a bend in the x/z plane as shown by the dashed lines. The actuator 72 is connected to the array 14 via a mechanical deflector 82, so that a lateral displacement of the array 14 along the x direction can occur when the actuator 72 is bent, so that the focal position can be changed. For example, the actuator 72 can be connected to the substrate 78. Alternatively, the actuator 72 can also be mounted on a housing that houses at least part of the optics 22 a to 22 d to move the housing. Other variants are also possible.
  • Optionally, the device 40 may include further actuators 84 1 and 84 2 configured to generate movement at the array 14 and/or the beam deflector 18, for example to place the beam deflector 18 in different positions or locations and/or for optical image stabilization by translational displacement of the array 14 along the z direction and/or by generating rotational movement of the beam deflector 18 about the axis of rotation 76.
  • Unlike what is described in the preceding figures, the beam deflector 18 may have several facets 86 a to 86 d that are spaced apart from one another but jointly movable, each optical channel being associated with one facet 86 a to 86 d. The facets 86 a to 86 d can also be arranged to be directly adjacent, i.e. with little or no distance between them. Alternatively, a flat mirror can also be arranged.
  • By actuating the actuator 72, a distance 88 between at least one of the optics 22 a-d and the image sensor 12 can be changed from a first value 88 1 to a second value 88 2, e.g. increased or decreased.
  • FIG. 4b shows a schematic lateral sectional view of device 40 to illustrate the arrangement of actuator 72 between planes 68 a and 68 b described in connection with FIG. 3a . The actuator 72, for example, is arranged completely between planes 68 a and 68 b, as is the mechanical deflection device 82, which uses several force-transmitting elements such as connecting webs and wires, ropes or the like and mechanical bearings or deflection elements.
  • The mechanical deflector or mechanical means for transmitting the movement to the array 14 can be arranged on one side of the image sensor 12 which faces away from the array 14, i.e. starting from the array 14 behind the image sensor 12. The mechanical means 82 can be arranged in such a way that a flux of force laterally passes the image sensor 12.
  • Alternatively or additionally, the actuator 72 or another actuator may be located on a side of the beam deflector 18 which faces away from the array 14, i.e. starting from the array 14 behind the beam deflector 18. The mechanical means 82 may be arranged such that a flux of force laterally passes the beam deflector 18.
  • Although only one actuator 72 is shown, a larger number of actuators can also be arranged, and/or more than one side of actuator 72 can be connected to a mechanical deflector 82. For example, a centrally mounted or supported actuator 72 may be connected on two sides to a mechanical deflector 82 and may act, for example, on both sides of the array 14 to enable homogeneous movement.
  • FIG. 5a shows a schematic representation of an array of partial fields of view 24 a and 24 b in a total field of view 26 which is detectable, for example, by a multi-aperture imaging device described herein, such as the multi-aperture imaging device 10 1, 10 2, 10 3, 30 and/or 40, and may correspond, for example, to the total field of view 26 1 and/or 26 2. For example, the total field of view 26 can be projected on the image sensor area 28 b by using the optical channel 16 b. For example, optical channel 16 a can be configured to capture the partial field of view 24 a and project it on the image sensor area 28 a. Another optical channel, such as optical channel 16 c, may be configured to detect the partial field of view 24 b and to project it on the image sensor area 28 c. This means that a group of optical channels can be formed to capture exactly two partial fields of view 24 a and 24 b. Thus, simultaneous capturing of the total field of view and of the partial fields of view, which, in turn, together represent the total field of view 26, can take place.
  • Although shown with different extensions for improved distinguishability, partial fields of view 24 a and 24 b may have the same extension or comparable extensions along at least one image direction B1 or B2, such as image direction B2. The extension of the partial fields of view 24 a and 24 b can be identical to the extension of the total field of view 26 along the image direction B2. This means that the partial fields of view 24 a and 24 b may completely capture, or pick up, the total field of view 26 along the image direction B2 and may only partially capture, or pick up, the total field of view along another image direction B1 arranged perpendicular to the former and may be arranged in a mutually offset manner so that complete capture of the total field of view 26 combinatorially results also along the second direction. The partial fields of view 24 a and 24 b may be disjoint from each other or at most incompletely overlap each other in an overlap area 25, which possibly extends completely along the image direction B2 in the total field of view 26. A group of optical channels comprising optical channels 16 a and 16 c may be configured to fully image the total field of view 26 when taken together, for example by a total picture taken in combination with partial pictures taken which, when taken together, image the total field of view. The image direction B1, for example, can be a horizontal line of an image to be provided. In simplified terms, the image directions B1 and B2 represent two different image directions that can be oriented arbitrarily in space.
  • FIG. 5b shows a schematic representation of an arrangement of the partial fields of view 24 a and 24 b, which are arranged in a mutually offset manner along a different image direction, image direction B2, and overlap each other. The partial fields of view 24 a and 24 b can each capture the total field of view 26 completely along image direction B1 and incompletely along image direction B2. The overlap area 25, for example, is arranged completely within the total field of view 26 along the image direction B1.
  • FIG. 5c shows a schematic representation of four partial fields of view 24 a to 24 d, which incompletely capture the total field of view 26 in both directions B1 and B2, respectively. Two adjacent partial fields of view 24 a and 24 b overlap in an overlap area 25 b. Two adjacent partial fields of view 24 b and 24 c overlap in an overlap area 25 c. Similarly, partial fields of view 24 c and 24 d overlap in an overlap area 25 d, and partial field of view 24 d overlaps with partial field of view 24 a in an overlap area 25 a. All four partial fields of view 24 a to 24 d can overlap in an overlap area 25 e of the total field of view 26.
  • A multi-aperture imaging device similar to that described in relation to FIG. 1a-c may be used to capture the total field of view 26 and the partial fields of view 24 a-d, for example; the array 14 may have five optics: four to capture the partial fields of view 24 a-d and one to capture the total field of view 26. Accordingly, with regard to FIG. 5a-b, the array may be configured with three optical channels.
  • In the overlap areas 25 a to 25 e, a large amount of image information is available. For example, the overlap area 25 b is captured via the total field of view 26, the partial field of view 24 a and the partial field of view 24 b. An image format of the total field of view can correspond to a redundancy-free combination of the imaged partial fields of view, for example of the partial fields of view 24 a-d in FIG. 5c, where the overlap areas 25 a-e are only counted once in each case. In connection with FIGS. 5a and 5b, this applies to the redundancy-free combination of the partial fields of view 24 a and 24 b.
  • An overlap in the overlap areas 25 and/or 25 a-e may, for example, include a maximum of 50%, 35% or 20% of the respective partial images.
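The redundancy-free combination and the overlap bound can be made concrete with a small sketch: along one image direction, the combined extent is the sum of the individual extents minus each pairwise overlap, so each overlap area is counted only once. This is purely illustrative; the numeric values and function names are assumptions, e.g. angular extents in degrees.

```python
def combined_extent(extents, overlaps):
    """Redundancy-free extent along one image direction for partial fields
    of view arranged in a row: each pairwise overlap is counted only once.
    Illustrative sketch; units (e.g. degrees) are an assumption."""
    return sum(extents) - sum(overlaps)

def overlap_ok(overlap, extent, max_fraction=0.2):
    """Check that an overlap does not exceed the given fraction (e.g. the
    20% bound mentioned in the text) of a partial image's extent."""
    return overlap <= max_fraction * extent
```

For two partial fields of 30° each with a 6° overlap, the redundancy-free combination spans 54°, and the 6° overlap just satisfies the 20% bound.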
  • In other words, according to the first aspect, a reduction in the number of optical channels can be obtained, which allows cost savings and a reduction in lateral installation space requirements. According to the first aspect, a form of depth information acquisition that is an alternative to stereoscopic acquisition is possible without the need for additional sensors such as time-of-flight, structured-light or coded-light sensors and the like. Time-of-flight sensors, which provide only a low resolution, as well as structured-light sensors, which have a high energy requirement, can thus be avoided. Both approaches also have problems with intense ambient lighting, especially sunlight. Embodiments provide that the corresponding device is designed without such sensors. According to an embodiment, a piezo bender serves as an extremely fast focusing actuator with low power consumption. However, embodiments provide for combining the extraction of depth information from the sequence of focal positions with the extraction of disparity-based depth information. It is advantageous here to first create a disparity-based depth map and, if the latter has shortcomings, to supplement, correct or improve it with the additional depth information obtained through the sequence of focal positions. The described architecture of the multi-aperture imaging device allows the use of such piezo benders, since an otherwise cubic form factor of the camera module makes utilization of long piezo benders more difficult or even impossible. With a short exposure time, this allows shooting focus stacks, i.e. numerous pictures taken quickly one after the other with slightly different focusing of the scene. Embodiments provide that the entire depth of the scene is sampled in a useful manner, for example from macro, i.e. the closest possible shot, to infinity, i.e. the largest possible distance. The distances can be arranged equidistantly in the object space, but advantageously in the image space.
Alternatively, a different reasonable spacing can be chosen. For example, the number of focal positions is at least two, at least three, at least five, at least ten, at least 20, or any other number.
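The equidistant arrangement in image space mentioned above may be sketched as follows, assuming a simple thin-lens model (1/f = 1/u + 1/v); this is an illustrative sketch only, and the focal length, macro distance and number of positions are assumed example values:

```python
def focal_positions(f_mm: float, u_macro_mm: float, n: int):
    """Image-side distances v, spaced equidistantly between infinity focus
    (v = f) and macro focus, together with the object distance u that each
    position brings into focus (thin-lens equation 1/f = 1/u + 1/v)."""
    v_inf = f_mm                                      # focus at infinity
    v_macro = f_mm * u_macro_mm / (u_macro_mm - f_mm)  # closest possible shot
    step = (v_macro - v_inf) / (n - 1)
    positions = []
    for i in range(n):
        v = v_inf + i * step
        # v == f corresponds to an object at infinity
        u = float('inf') if v <= f_mm else f_mm * v / (v - f_mm)
        positions.append((v, u))
    return positions

# e.g. focal length 5 mm, closest focus at 100 mm, five focal positions:
stack = focal_positions(5.0, 100.0, 5)
```

Note that equidistant steps in image space correspond to non-equidistant, progressively coarser steps in object space towards infinity, which matches the depth-of-field behavior of real lenses.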
  • Several images 42 can be presented to the user. Alternatively or additionally, embodiments provide for combining the individual items of image information so that the user can be provided with an image that has combined image information, for example an image with depth information, which offers the possibility of digital refocusing. The presented image can offer a so-called bokeh effect, i.e. a deliberate defocusing. Alternatively, the image can also be presented in such a way that the entire image is artificially sharp, which means that a larger distance range than in the single frames of partial areas is in focus, for example the entire image. With a low aperture value of the lenses used, the sharpness, or blur, measured in the single frames, together with further information—such as the sharpness of the same objects in adjacent images of the stack, the association of the focus actuator position with an object distance, for example while using a calibrated lookup table, and the direction of the through-focus scan—may be used, with regard to said frames themselves but also, in a recursive manner, from other images in order to avoid ambiguities, for reconstructing the object distance of the individual elements of the scene and for creating therefrom a depth map in image resolution.
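A minimal sketch of such a depth reconstruction from a focus stack is given below; it is illustrative only, uses a simple squared-Laplacian sharpness measure as a stand-in for the measures discussed above, and assumes that a calibrated lookup table already associates each focal position with an object distance. All names are hypothetical:

```python
import numpy as np

def depth_from_focus(stack: np.ndarray, distances_mm: list) -> np.ndarray:
    """stack: (n, H, W) grayscale frames, one per focal position.
    distances_mm: object distance associated with each focal position,
    e.g. from a calibrated lookup table of focus-actuator positions.
    Returns an (H, W) depth map holding, per pixel, the distance of the
    focal position in which that pixel was sharpest."""
    n, h, w = stack.shape
    sharp = np.zeros((n, h, w))
    for i in range(n):
        frame = stack[i]
        # discrete Laplacian as a simple per-pixel sharpness measure
        lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
               + np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
        sharp[i] = lap ** 2
    best = sharp.argmax(axis=0)        # index of the sharpest frame per pixel
    return np.asarray(distances_mm)[best]
```

In a practical implementation, the sharpness volume would additionally be smoothed and interpolated between focal positions to resolve the ambiguities mentioned above.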
  • According to the first aspect one achieves that duplication of the channels for a stereo image can be omitted while a depth map can still be created. This depth map enables the stitching of the different partial images of the multi-aperture imaging device. For example, by halving the number of optical channels, a significant reduction of the lateral dimensions, e.g. along the line extension direction, and, thus, also a cost reduction can be achieved. Through further image-processing steps, images of at least the same quality can still be provided. Alternatively or additionally, it is possible to dispense with an additional arrangement of time-of-flight sensors or structured-light sensors. This advantage is maintained even if the duplication mentioned above is carried out nevertheless, which may also offer advantages.
  • For example, the total field of view 26 can be captured at least stereoscopically, as described in DE 10 2013 222 780 A1, for example, in order to obtain depth information from the multiple, i.e. at least double, capturing of the total field of view or of the partial fields of view 24. The at least stereoscopic capturing shown, for example, in FIG. 5d , makes it possible to obtain depth information by viewing the same partial field of view 24 a or 24 b through two optical channels 16 a and 16 c, or 16 c and 16 d, which are spaced apart by a base distance BA. The number of partial fields of view is just as freely selectable as their arrangement, see the differences in FIGS. 5a-c as an example. With regard to the avoidance of occlusions, arranging the partial fields of view shown in FIG. 5d according to FIG. 5b is advantageous.
  • FIG. 5d shows only a part of a multi-aperture imaging device 50. Elements such as the beam deflector 18 or actuators are not shown. For the multi-aperture imaging device 50, there are now two information sources for depth information that can be used to create a depth map: on the one hand, setting a sequence of focal positions in the multi-aperture imaging device and, on the other hand, the disparity between optical channels which capture matching image content.
  • This can offer advantages in that each source of information combines advantages with disadvantages. For example, the disparity-based depth information can be incomplete or of low quality in places due to occlusions or masking, but in contrast to the sequence of focal positions it can be obtained quickly and with a low expenditure in terms of electrical energy for actuators and/or computing power (which involves both corresponding computing resources and electrical energy).
  • Therefore, embodiments provide for combining the depth information from both information sources. For example, a preliminary depth map can be created from the disparity-based depth information and be supplemented or improved by a fully or partially created additional depth map from the sequence of focal positions. In this context, a preliminary depth map does not necessarily describe a temporal relationship, since the order in which the two depth maps to be combined are created can be arbitrary. According to less advantageous embodiments, the preliminary depth map can be created from the sequence of focal positions and be improved or upgraded by disparity-based depth information.
  • The control means 34 or a different instance of the multi-aperture imaging device 50 may be configured to check the quality or reliability of the depth information or depth map, for example by checking a resolution of the depth information and/or by monitoring the occurrence of occlusions or other effects. For affected areas of the depth map, additional depth information can be created from the sequence of focal positions in order to improve or correct the preliminary depth map. For example, this additional depth information can be obtained in an energy- and/or computationally efficient way by creating a sequence of focal positions only for those areas of the preliminary depth map that are to be supplemented, i.e. only in a partial area of the depth map. By improving the depth map and, thus, the depth information, stitching results can be obtained in high quality.
  • Embodiments provide that the control means specifies a local area and/or an area of the depth planes for determining the depth information on the basis of the sequence of focal positions, i.e. a range of values between minimum and maximum focal positions, on the basis of the locations in the preliminary depth map which are to be supplemented or corrected; it is also possible for a plurality of ranges to be set. This means that at least some of the possible focal positions, or those focal positions which are set for creating the depth map exclusively from the sequence of image information or focal positions, can be omitted, which can save computing effort, time and electrical energy. Alternatively or in addition to limiting the number and/or location of focal positions, the control means may be configured to recalculate the depth information only for those areas of the total field of view 26 where improvement, optimization or correction of the preliminary depth map is required, which may also save energy and time.
  • The control means may be configured to select areas of the total field of view in the preliminary depth map on the basis of a quality criterion for which an improvement is required, and to supplement the preliminary depth map in the selected areas and not to supplement it in non-selected areas. For example, the additional depth information for supplementing the preliminary depth map may be determined only for the selected areas and not for non-selected areas. For example, the control means may be configured to determine the at least one region and/or focal positions by performing a comparison indicating whether the quality or reliability of the depth map in the determined area corresponds to at least one threshold value. This can be achieved by evaluating a reference parameter to determine whether at least a minimum quality is achieved, but also by checking whether a negative quality criterion (e.g. the number of errors or the like) is not exceeded, i.e. whether the value falls below, or at least does not exceed, a threshold value.
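The selective supplementing described above may be sketched as follows; this is an illustrative sketch only, in which missing (NaN) entries stand in for the quality criterion (e.g. occlusion holes), and the callback name and its behavior are assumptions, standing in for the through-focus scan restricted to the selected areas:

```python
import numpy as np

def refine_depth_map(disparity_depth, focus_depth_for):
    """Supplement a preliminary, disparity-based depth map only in the
    areas failing the quality criterion (here: invalid/NaN entries).
    focus_depth_for(mask) is assumed to return focus-stack depth values
    for exactly the masked pixels, so the costly through-focus scan is
    performed only for the areas to be supplemented."""
    refined = disparity_depth.copy()
    holes = np.isnan(refined)            # quality criterion: missing depth
    if holes.any():                      # skip the focus stack entirely
        refined[holes] = focus_depth_for(holes)  # if the map is already good
    return refined
```

The preliminary map is left untouched; only the hole pixels are filled, which reflects the energy- and computation-saving restriction to a partial area of the depth map.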
  • The aforementioned preliminary depth map can be a depth map created while using the available depth information. Alternatively, the preliminary depth map may also be understood as a collection of depth information (without a specific map format).
  • In other words, in addition to generating the depth map merely by focus stacks without redundant field of view capturing (to generate the parallax for the depth map), systems with a minimum number of channels (ideally only two) can be used. This concept can be modified to the effect that supporting such an architecture (e.g. 2× field of view at the top, 2× field of view at the bottom, as shown in FIG. 5d , or otherwise) with a depth map generated in a manner other than only via disparity can contribute to improving same, which can ultimately also contribute to improving the stitching results. Here, preference is given to embodiments according to the aspect of focus stacks for multimodal support while avoiding other mechanisms such as depth map via time of flight (ToF) or coded light.
  • Embodiments refer to the aspect of combining depth maps from disparity and focus stacks. Embodiments are based on the described architecture with more than two channels, which derives a depth map primarily from the natural disparity/parallax of the channels. Depending on the computing and/or power budget, either another depth map can be generated from focus stacks and combined with the first depth map to improve it (essentially hole filling in the event of occlusions) and to improve the stitching results, or it can be generated only after obvious defects in the depth map from disparity have been identified. A different order is less advantageous since generating the focus stacks may involve additional energy consumption as well as possibly considerable loss of time or, as a result, considerably more complicated exposure ratios of the sequential single frames. Advantages that may result from the additional use of a depth map generated from focus stacks of images captured extremely quickly one after the other are: fewer holes in the depth map that are due to occlusions, possibly additional depth planes, especially for larger object distances, possibly improved lateral resolution of the depth map and, overall, due to the additional information obtained, an improved signal-to-noise ratio in the depth map and, thus, fewer ambiguities which would otherwise lead to artifacts in the stitched images of the total field of view.
  • FIG. 6 shows a schematic perspective view of a device 60 according to an embodiment with respect to the second aspect. The implementations described above readily apply also to devices 10 1, 10 3, 30 and/or 40. By directing the beam deflector to different positions, the device 60 or the multi-aperture imaging device of the device 60 can capture two mutually spaced-apart entire fields of view 26 1 and 26 2.
  • The device 60 is designed, for example, as a portable or mobile device, in particular a tablet computer or a mobile phone, in particular a smartphone (intelligent telephone).
  • One of the fields of view 26 1 and 26 2 may, for example, be arranged along a user direction of the device 60, as is common practice, for example, in the case of selfies for photos and/or videos.
  • The other total field of view may be arranged, for example, along an opposite direction and/or a world direction of the device 60 and may, for example, be arranged along the direction in which the user looks when he/she looks at the device 60 along the user direction from within the total field of view. For example, the beam deflector 18 in FIG. 1b can be formed to be reflective on both sides and can deflect the beam paths of the optical channels 16 a-d in different positions with different main sides, for example, so that, starting from the device 60, the total fields of view 26 1 and 26 2 are arranged opposite one another and/or at an angle of 180°.
  • FIG. 7a shows a schematic diagram to illustrate processing of image information 46 1 and 46 2, which can be obtained by imaging the total fields of view 26 1 and 26 2. The control means is configured to separate, for example cut out, or isolate, a part 92 of the imaging information 46 1 of the field of view 26 1 or to copy exclusively said part 92. The control means is further adapted to combine the separated or segmented part 92 with the imaging information 46 2, i.e. to insert part 92 into the imaging information 46 2 to obtain the accumulated image information 48. In places, the latter exhibits the total field of view 26 2 and in places, namely where part 92 was inserted, it exhibits the image information 46 1. It should be noted that the obtaining of the accumulated image information 48 is not limited to the insertion of a single part 92, but that any number of parts 92 can be segmented from image information 46 1 and one, several or all of these parts can be inserted into image information 46 2.
  • A location or position at which the part 92 is inserted into the second imaging information 46 2 may be automatically determined by the control means, for example by projecting the part 92 through the device 60 into the second field of view 26 2, but may alternatively or additionally also be selected by a user.
  • According to an embodiment, the control means is configured to identify and segment a person in the first imaging information 46 1, for example via pattern matching and/or edge detection, but in particular on the basis of the depth map generated by the device itself. Said control means may be adapted to insert the image of the person into the second imaging information 46 2 to obtain the accumulated image information 48. This means that part 92 can be a person, such as a user of the device 60. Embodiments provide that the device is configured to automatically identify the person and to automatically insert the image of the person, that is, part 92, into the second imaging information 46 2. This makes it possible to automatically create a self-portrait, or selfie, in front of or in the second total field of view 26 2 without having to position the device 60 in a time-consuming manner and/or without having to position the user in a time-consuming manner.
  • Embodiments provide that the control means uses a depth map, such as the depth map 38, to position part 92 in the second imaging information 46 2. The depth map 38 may have a plurality or multiplicity of depth planes, for example in accordance with the number of focal positions considered or a reduced number obtained therefrom or a larger number interpolated therefrom. The control means may be adapted to insert the part 92 into the predetermined depth plane of the second imaging information 46 2 to obtain the accumulated image information 48. The predetermined depth plane may correspond substantially, i.e. within a tolerance range of ±10%, ±5% or ±2%, to a distance of the first total field of view 26 1 from the device 60 or to the distance of the segmented part 92 from the device 60, respectively. This may also be referred to as inserting the part 92 into the second imaging information 46 2 in a manner that is correct in terms of depth.
  • FIG. 7b shows a schematic representation of scaling of part 92 in the accumulated image information 48. Alternatively, a different depth plane can be selected, for which purpose various possibilities of embodiments are provided. For example, the predetermined depth plane may be influenced by or determined by the placement of part 92 in the second imaging information 46 2. The placement can be effected automatically or by a user input. For example, if the user selects a particular location or place within the second imaging information 46 2 for inserting the part 92, the control means may be configured to determine, in the second imaging information 46 2, a distance of the area in which part 92 is to be inserted. Knowing the distance of part 92 from the device and the objects in the second imaging information 46 2, for example while using depth maps, a virtual distance change of part 92 caused by user input may be compensated for by scaling part 92.
  • Thus, a one-dimensional, two-dimensional or three-dimensional size 94 of the part 92 can be changed, e.g. reduced, to a size 96 when the distance of the part 92 from the first imaging information 46 1 to the second imaging information 46 2 is increased, or it may be increased to a size 96 when said distance is reduced. Independently thereof, but also in combination with placing part 92 in the second imaging information 46 2 on the basis of an associated user input, the device may be configured to scale the imaging information 46 1 to obtain scaled imaging information. The scaled imaging information may be inserted into the imaging information 46 2 by the control means to obtain the accumulated image information 48. The device may be configured to determine a distance of an object, which represents part 92 and is imaged in the first imaging information 46 1, with respect to the device 60. The device may scale the imaging information 46 1, or the part 92 thereof, on the basis of a comparison of the determined distance with the predetermined depth plane in the second imaging information 46 2. It is advantageous if the two items of imaging information 46 1 and 46 2 are captured within a short time interval of one another, advantageously not more than 30 ms, not more than 10 ms, not more than 5 ms or not more than 1 ms, for example approximately 0.1 ms. This time can be used, for example, for a changeover or repositioning of the beam deflector and can be at least partly determined by a duration of this process.
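The compensating scaling described above follows from the apparent size of an object being inversely proportional to its distance. A minimal sketch, with assumed example values:

```python
def depth_correct_scale(size_px: float, d_source_mm: float,
                        d_target_mm: float) -> float:
    """Scaled size of an image part captured at distance d_source and
    inserted into a depth plane at distance d_target: apparent size is
    inversely proportional to distance."""
    return size_px * d_source_mm / d_target_mm

# a 400-pixel part imaged at 0.5 m, inserted into a 2 m depth plane,
# shrinks by a factor of four:
assert depth_correct_scale(400.0, 500.0, 2000.0) == 100.0
```

Moving the part to a nearer depth plane enlarges it by the inverse factor.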
  • The accumulated image information 48 can be obtained as a single frame, alternatively or additionally also as a video data stream, for example as a large number of single frames.
  • According to an embodiment, a device is formed, in line with the second aspect, such that the first imaging information 46 1 comprises an image of a user, and that the second imaging information 46 2 comprises a world view of the device. The control means is configured to segment an image of the user from the first imaging information 46 1 and to insert it into the world view. For example, the device can be configured to insert the image of the user into the world view in the correct depth.
  • In other words, in the context of the second aspect, taking a selfie image or shooting a video may include a depth-based combination of quasi-simultaneous picture-taking with a front-facing camera/view and a main camera/view of a device, in particular a mobile phone. The foreground of the selfie, i.e. the self-portrait, can be transferred to the foreground of the picture taken by the main camera. A very fast switchover between front and rear picture-taking by changing the position of the beam deflector allows the above-mentioned quasi-simultaneous capturing of the world-side and the user-side camera images with the same image sensor. Although a single-channel imaging device can also be used according to the second aspect, the second aspect provides advantages especially with regard to multi-aperture imaging devices, as they can already create or use a depth map to merge the frames.
  • This depth map can also be used to determine depth information for synthesizing the accumulated imaging information 48. A procedure is made possible which can be described as follows:
    • 1. Use the depth map of the selfie so as to segment the foreground, i.e. the person(s) taking (a) picture(s) of themselves, from the background;
    • 2. Use the depth map of the world-side picture to identify a foreground and a background from it, that is, to separate them according to depth information; and
    • 3. Insert the foreground, i.e. the person(s) taking (a) picture(s) of themselves, from the selfie into the picture of the world-side shot, especially into its foreground.
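The three steps above can be sketched as follows; this is an illustrative sketch only, which assumes that the user-side and world-side frames are registered to the same pixel grid, that both depth maps are given in the same units, and that the foreground threshold is an assumed parameter:

```python
import numpy as np

def insert_selfie(selfie: np.ndarray, selfie_depth: np.ndarray,
                  world: np.ndarray, world_depth: np.ndarray,
                  fg_threshold_mm: float) -> np.ndarray:
    """Steps 1-3: segment the selfie foreground using its depth map and
    insert it into the world-side picture, in front of those world
    pixels that are farther away than the inserted person."""
    result = world.copy()
    fg = selfie_depth < fg_threshold_mm     # step 1: person = near pixels
    behind = world_depth > selfie_depth     # steps 2/3: occlusion-correct
    mask = fg & behind
    result[mask] = selfie[mask]             # (grayscale frames, for brevity)
    return result
```

Pixels of the world-side scene that are nearer than the person remain in front of the inserted foreground, which corresponds to the depth-correct insertion described above.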
  • The advantage of this is that the selfie shot can be combined with the world-side picture as background without having to turn the phone by 180°, as would otherwise be necessary in order to take a picture of oneself in front of this scene. Alternatively or additionally, this avoids taking a picture past oneself in a rearward direction, which involves remembering that the orientation of the telephone has to be mirror-inverted in relation to the scene. The depth map itself can be generated according to the second aspect, too, as is described in connection with the first aspect, so that the additional arrangement of time-of-flight sensors or structured-light sensors can be dispensed with.
  • In the following, reference will be made to some advantageous implementations of the multi-aperture imaging device to explain advantages of the invention.
  • FIG. 8 shows parts of a multi-aperture imaging device 80 which can be used in inventive devices of the first and/or second aspect(s), wherein a possible focusing means and/or actuator for implementing optical image stabilization are not represented but can easily be implemented.
  • The multi-aperture imaging device 80 of FIG. 8 comprises an array 14 of adjacently arranged optical channels 16 a-d, which is formed in several lines or, advantageously, in one line. Each optical channel 16 a-d comprises optics 22 a-d for imaging a respective partial field of view 24 a-d of a total field of view 26, or possibly a total field of view, as described in connection with FIG. 5. The field of view imaged by the multi-aperture imaging device 80 is projected onto a respectively associated image sensor area 28 a-d of an image sensor 12.
  • The image sensor areas 28 a-d, for example, may each be formed from a chip comprising a corresponding pixel array; the chips may be mounted on a common substrate or board 98, as indicated in FIG. 8. Alternatively, it would of course also be possible for each of the image sensor areas 28 a-d to be formed from part of a common pixel array that extends continuously or with interruptions across the image sensor areas 28 a-d, with the common pixel array being formed, for example, on a single chip. For example, only the pixel values of the common pixel array in the image sensor areas 28 a-d will then be read out. Different mixtures of these alternatives are of course also possible, such as the presence of a chip for two or more channels and a further chip for yet other channels or the like. In the case of several chips of the image sensor 12, they can be mounted on one or more boards, e.g. altogether or in groups or the like.
  • In the example of FIG. 8, four optical channels 16 a-d are arranged in one line next to one another in the line extension direction of the array 14, but the number four is only exemplary and might be any other number greater than one, i.e., N optical channels with N>1 can be arranged. In addition, the array 14 may also have further lines that extend along the line extension direction. An array 14 of optical channels 16 a-d is understood to be a combination of the optical channels or a spatial grouping thereof. Optics 22 a-d can each have a lens, but also a lens compound or lens stack, as well as a combination of imaging optics and other optical elements, including filters, apertures, reflective or diffractive elements or the like. Array 14 can be designed in such a way that the optics 22 a-d are arranged, fixed or mounted on the substrate 78 in a channel-specific manner, in groups or all channels taken together. This means that a single substrate 78, several parts thereof, or no substrate 78 may be arranged, for example if the optics 22 a-d are mounted elsewhere.
  • Optical axes or the beam paths 102 a-d of the optical channels 16 a-d may extend, according to an example, in parallel with one another between the image sensor areas 28 a-d and the optics 22 a-d. To this end, the image sensor areas 28 a-d are arranged in a common plane, for example, as are the optical centers of optics 22 a-d. Both planes are parallel to each other, i.e. parallel to the common plane of the image sensor areas 28 a-d. In addition, in a projection perpendicular to the plane of the image sensor areas 28 a-d, optical centers of the optics 22 a-d coincide with centers of the image sensor areas 28 a-d. In other words, in these parallel planes, the optics 22 a-d, on the one hand, and the image sensor areas 28 a-d, on the other hand, are arranged in the line extension direction at the same pitch.
  • An image-side distance between image sensor areas 28 a-d and the associated optics 22 a-d is set such that the images on the image sensor areas 28 a-d are set to a desired object distance. For example, the distance lies within a range equal to or greater than the focal length of optics 22 a-d, or within a range between the focal length and double the focal length of optics 22 a-d, both inclusive. The image-side distance along the optical axis 102 a-d between the image sensor area 28 a-d and the optics 22 a-d may also be settable, e.g. manually by a user and/or automatically via a focusing means or autofocus control.
  • Without any additional measures, the partial fields of view 24 a-d of the optical channels 16 a-d would essentially overlap completely due to the parallelism of the beam paths, or optical axes, 102 a-d. The beam deflector 18 is provided so as to cover a larger total field of view 26 and so that the partial fields of view 24 a-d only partially overlap in space. The beam deflector 18 deflects the beam paths 102 a-d, or optical axes, e.g. with a channel-specific deviation, into a total-field-of-view direction 104. For example, the total-field-of-view direction 104 extends in parallel with a plane that is perpendicular to the line extension direction of array 14 and is parallel to the course of the optical axes 102 a-d prior to or without beam deflection. For example, the total-field-of-view direction 104 of the optical axes 102 a-d is obtained by rotating the line extension direction by an angle that is >0° and <180° and, for example, lies between 80° and 100° and can be, for example, 90°. The total field of view 26 of the multi-aperture imaging device 80, which corresponds to the total coverage of the partial fields of view 24 a-d, is thus not located in the direction of an extension of the series connection of the image sensor 12 and the array 14 in the direction of the optical axes 102 a-d; rather, due to the beam deflection, the total field of view is located laterally to the image sensor 12 and the array 14 in a direction in which the installation height of the multi-aperture imaging device 80 is measured, i.e. the lateral direction perpendicular to the line extension direction.
  • In addition, however, the beam deflector 18 deflects, for example, each beam path, or the beam path of each optical channel 16 a-d, with a channel-specific deviation from the just-mentioned deflection leading to the direction 104. For each channel 16 a-d, the beam deflector 18 includes, for example, an individually installed element, such as a reflective facet 86 a-d and/or a reflective surface. These are slightly inclined towards one another. The mutual tilting of the facets 86 a-d is selected in such a way that, when the beam is deflected by the beam deflector 18, the partial fields of view 24 a-d are provided with a slight divergence such that the partial fields of view 24 a-d overlap only partially. As shown exemplarily in FIG. 8, individual deflection can also be implemented in such a way that the partial fields of view 24 a-d cover the total field of view 26 in a two-dimensional manner, i.e. they are distributed in a two-dimensional manner in the total field of view 26.
  • According to another example, the optics 22 a-d of an optical channel can be established to completely or partially generate the divergence in the beam paths 102 a-d, which makes it possible to completely or partially dispense with the inclination between individual facets 86 a-d. If, for example, the divergence is provided completely by optics 22 a-d, the beam deflector can also be formed as a planar mirror.
  • It should be noted that many of the details regarding the multi-aperture imaging device 80 which have been described so far have been chosen as examples only. This was the case, for example, with the above-mentioned number of optical channels. The beam deflector 18 can also be formed differently than previously described. For example, the beam deflector 18 is not necessarily reflective. It can therefore also be designed differently than in the form of a facetted mirror, e.g. in the form of transparent prism wedges. In this case, for example, the mean beam deflection might be 0°, i.e. the direction 104 might, for example, be parallel to the beam paths 102 a-d prior to or without beam deflection, or in other words, the multi-aperture imaging device 80 might still “look straight ahead” despite the beam deflector 18. Channel-specific deflection by the beam deflector 18 would again result in the partial fields of view 24 a-d overlapping each other only slightly, e.g. in pairs with an overlap of <10% in relation to the solid angle ranges of the partial fields of view 24 a-d.
  • Also, the beam paths 102 a-d, or the optical axes, might deviate from the described parallelism; nevertheless, the parallelism of the beam paths of the optical channels might still be sufficiently pronounced that the partial fields of view which are covered by the individual channels 16 a-N and/or are projected on the respective image sensor areas 28 a-d would largely overlap if no further measures such as beam deflection were taken, so that, in order to cover a larger total field of view by means of the multi-aperture imaging device 80, the beam deflector 18 provides the beam paths with an additional divergence such that the partial fields of view of the N optical channels 16 a-N overlap one another less. The beam deflector 18, for example, ensures that the total field of view has an aperture angle that is greater than 1.5 times the aperture angle of the individual partial fields of view of the optical channels 16 a-N. With a kind of pre-divergence of the beam paths 102 a-d, it would also be possible, for example, that not all facet inclinations differ but that some groups of channels, for example, have facets with the same inclination. The latter can then be formed in one piece or such that they continuously merge into one another, quasi as one facet which is associated with this group of channels that are adjacent in the line extension direction.
  • The divergence of the optical axes 102 a-d of these channels 16 a-d might then originate from a pre-divergence of these optical axes 102 a-d, as achieved by a lateral offset between the optical centers of the optics 22 a-d and the image sensor areas 28 a-d of the channels 16 a-d, or by prism structures or decentered lens sections. For example, the pre-divergence might be limited to one plane: the optical axes 102 a-d might extend within a common plane prior to or without any deflection by the beam deflector 18, but extend within said plane in a divergent manner, with the facets 86 a-d only causing an additional divergence within the other, transverse plane, i.e. they are all inclined in parallel with the line extension direction and are mutually inclined only with regard to the aforementioned common plane of the optical axes 102 a-d; in turn, several facets 86 a-d might exhibit the same inclination or be commonly associated with a group of channels whose optical axes differ in pairs, for example, already in the aforementioned common plane of the optical axes prior to or without any beam deflection.
  • If the beam deflector 18 is omitted or if the beam deflector 18 is designed as a planar mirror or the like, the overall divergence might also be accomplished by the lateral offset between the optical centers of the optics 22 a-d, on the one hand, and centers of the image sensor areas 28 a-d, on the other hand, or by prism structures or decentered lens sections.
  • The aforementioned pre-divergence, where present, may be achieved, for example, by placing the optical centers of the optics 22 a-d on a straight line along the line extension direction, while the centers of the image sensor areas 28 a-d are arranged to deviate from the projection of the optical centers along the normal of the plane of the image sensor areas 28 a-d onto points on a straight line within the image sensor plane, e.g. at points which deviate from the points on the aforementioned straight line within the image sensor plane in a channel-specific manner along the line extension direction and/or along the direction perpendicular to both the line extension direction and the image sensor normal. Alternatively, pre-divergence can be achieved by placing the centers of the image sensor areas 28 a-d on a straight line along the line extension direction, while the centers of the optics 22 a-d are arranged to deviate from the projection of the centers of the image sensor areas along the normal of the plane of the optical centers of the optics 22 a-d onto points on a straight line within the optical center plane, e.g. at points which deviate from the points on the above-mentioned straight line in the optical center plane in a channel-specific manner along the line extension direction and/or along the direction perpendicular to both the line extension direction and the normal of the optical center plane.
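The channel-specific viewing direction produced by such a lateral offset can be estimated with the chief-ray relation tilt = atan(offset / focal length); the offsets and focal length below are hypothetical example values:

```python
import math

def channel_tilt_deg(offset_mm, focal_length_mm):
    """Tilt of a channel's viewing direction caused by laterally offsetting
    the center of the image sensor area from the optical center of the
    channel's optics."""
    return math.degrees(math.atan2(offset_mm, focal_length_mm))

# Hypothetical 4-channel line, f = 4 mm, sensor-area centers offset along
# the line extension direction in a channel-specific manner:
for dx in (-0.6, -0.2, 0.2, 0.6):
    print(f"offset {dx:+.1f} mm -> tilt {channel_tilt_deg(dx, 4.0):+.2f} deg")
```

Symmetric offsets about the line's center yield symmetric pre-divergence in the common plane, which the facets then only have to supplement in the transverse plane.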
  • It is advantageous if the above-mentioned channel-specific deviation from the respective projection occurs only in the line extension direction, i.e. if the optical axes 102 a-d are located only within one common plane and are provided with a pre-divergence. Both optical centers and image sensor area centers will then lie on a straight line parallel to the line extension direction, but with different intermediate distances. A lateral offset between lenses and image sensors in a direction that is vertical and lateral to the line extension direction would, on the other hand, lead to an increase in the installation height. A purely in-plane offset in the line extension direction does not change the installation height, but it may result in fewer facets, and/or the facets may be tilted in only one angular orientation, which simplifies the architecture.
  • Although some aspects have been described in connection with a device, it is understood that these aspects also represent a description of the corresponding method, so that a block or component of a device is also to be understood as a corresponding method step or as a feature of a method step. Similarly, aspects described in connection with or as a method step also represent a description of a corresponding block or detail or feature of a corresponding device.
  • While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.

Claims (18)

1. Device comprising:
a multi-aperture imaging device comprising:
an image sensor;
an array of adjacently arranged optical channels, each optical channel comprising optics for projecting at least a partial field of view of a total field of view on an image sensor area of the image sensor,
a beam deflector for deflecting a beam path of said optical channels,
focusing means for adjusting a focal position of said multi-aperture imaging device;
the device further comprising:
control means adapted to control the focusing means and to receive image information from the image sensor; wherein the control means is adapted to set a sequence of focal positions in the multi-aperture imaging device so as to detect a corresponding sequence of image information of the total field of view, and to create a depth map for the detected total field of view on the basis of the sequence of image information.
2. Device as claimed in claim 1, wherein the control means is adapted to create the depth map from the sequence of image information.
3. Device as claimed in claim 1, wherein the optical channels are formed to detect the total field of view in an at least stereoscopic manner;
wherein said control means is adapted to produce a preliminary depth map on the basis of disparity information acquired from said optical channels; and to supplement said preliminary depth map on the basis of depth information based on said sequence of image information in order to acquire said depth map; or
wherein said control means is adapted to create a preliminary depth map on the basis of the sequence of image information; and to supplement the preliminary depth map on the basis of depth information based on disparity information acquired from the optical channels in order to acquire the depth map.
4. Device as claimed in claim 3, wherein the control means is adapted to select areas of the total field of view in the preliminary depth map on the basis of a quality criterion for which an improvement is required, and to determine additional depth information for supplementing the preliminary depth map for the selected areas and not to determine them for non-selected areas.
5. Device as claimed in claim 1, wherein the control means is adapted to
acquire, in the sequence of focal positions, a corresponding number of groups of partial images, each partial image being associated with an imaged partial field of view; to
produce, from a comparison of local image sharpness information in said partial images, depth maps and to produce therefrom said depth map; and
assemble, while using the depth map, the partial images of a group of partial images into a total image.
6. Device as claimed in claim 1, adapted to control the focusing means so that the sequence of focal positions is equidistantly distributed within a tolerance range of 25% in the image space between a minimum focal position and a maximum focal position.
7. Device as claimed in claim 1, adapted to generate a sequence of total images representing the total field of view on the basis of the sequence of image information, wherein each total image is based on a combination of partial images of the same focal position.
8. Device as claimed in claim 1, adapted to change a total image, which represents the total field of view, on the basis of the depth map by subsequent focusing and/or defocusing of one or more image areas.
9. Device as claimed in claim 1, adapted to produce an image of the total field of view as a mono image and to produce the depth map from the sequence of mono images.
10. Device as claimed in claim 1, wherein a first optical channel of the array is formed to map a first partial field of view of the total field of view, wherein a second optical channel of the array is configured to image a second partial field of view of the total field of view, and wherein a third optical channel is configured to completely image the total field of view.
11. Device as claimed in claim 1, wherein the focusing means comprises at least one actuator for adjusting the focal position, the focusing means being configured to be at least partially disposed between two planes spanned by sides of a cuboid, the sides of the cuboid being aligned in parallel with each other and to a line extension direction of the array and a part of the beam path of the optical channels, between the image sensor and the beam deflector, and whose volume is minimal and yet comprises the image sensor, the array and the beam deflector.
12. Device as claimed in claim 11, wherein the multi-aperture imaging device comprises a thickness direction configured to be normal to the two planes, the actuator comprises a dimension parallel to the thickness direction, and a proportion of at most 50% of the dimension projects beyond the two planes from a region located between the two planes.
13. Device as claimed in claim 11, wherein the focusing means comprises an actuator for providing relative movement between optics of at least one of the optical channels and the image sensor.
14. Device as claimed in claim 13, wherein the focusing means is adapted to perform the relative movement between the optics of one of the optical channels and the image sensor while performing a movement of the beam deflector that is simultaneous to the relative movement.
15. Device as claimed in claim 11, wherein the focusing means is configured to project by at most 50% from the area located between the planes.
16. Device as claimed in claim 11, wherein the at least one actuator of the focusing means comprises a piezoelectric bending actuator.
17. Device as claimed in claim 11, wherein the focusing means comprises at least one actuator adapted to provide movement and comprising mechanical means for transmitting the movement to the array for adjusting the focal position;
the actuator being arranged on a side of the image sensor which faces away from the array and, starting from the array, behind the image sensor, and the mechanical means being arranged such that a flux of force laterally passes the image sensor; or
the actuator being arranged on a side of the beam deflector which faces away from the array and, starting from the array, behind the beam deflector, and the mechanical means being arranged such that a flux of force laterally passes the beam deflector.
18. Device as claimed in claim 1, wherein a relative location of the beam deflector is switchable between a first position and a second position, so that in the first position the beam path is deflected towards a first total field of view, and in the second position the beam path is deflected towards a second total field of view;
wherein the control means is adapted to control the beam deflector to move to the first position to acquire imaging information of the first total field of view from the image sensor; to control the beam deflector to move to the second position to acquire imaging information of the second total field of view from the image sensor; and to insert a portion of said first imaging information into said second imaging information so as to acquire accumulated image information that in parts represents said first total field of view and in parts represents said second total field of view.
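The depth-from-focus scheme recited in claims 1 and 5 — comparing local image sharpness across the sequence of focal positions and keeping, per pixel, the focal position that scores sharpest — can be sketched as follows (a minimal NumPy illustration, not the patent's implementation; the Laplacian sharpness measure is an assumed choice):

```python
import numpy as np

def sharpness(img):
    """Local sharpness: squared response of a 4-neighbour Laplacian stencil."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap ** 2

def depth_from_focus(stack):
    """stack: (n_focus, H, W) images captured at the sequence of focal
    positions. Returns the per-pixel index of the focal position at which
    the scene point appears sharpest (a relative depth map)."""
    scores = np.stack([sharpness(img.astype(float)) for img in stack])
    return np.argmax(scores, axis=0)

# Synthetic stack: a point target is in focus only at focal position 1
stack = np.zeros((3, 8, 8))
stack[1, 4, 4] = 1.0
print(depth_from_focus(stack)[4, 4])   # 1
```

The resulting index map, together with the known focal positions, can be converted to depth via the lens equation; claim 6's near-equidistant spacing in image space makes the indices map roughly linearly to image-side focus positions.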
US17/352,744 2018-12-21 2021-06-21 Device comprising a multi-aperture imaging device for generating a depth map Active US11924395B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102018222865.5 2018-12-21
DE102018222865.5A DE102018222865A1 (en) 2018-12-21 2018-12-21 Device with a multi-aperture imaging device for generating a depth map
PCT/EP2019/085645 WO2020127262A1 (en) 2018-12-21 2019-12-17 Device having a multi-aperture imaging device for generating a depth map

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/085645 Continuation WO2020127262A1 (en) 2018-12-21 2019-12-17 Device having a multi-aperture imaging device for generating a depth map

Publications (2)

Publication Number Publication Date
US20210314548A1 true US20210314548A1 (en) 2021-10-07
US11924395B2 US11924395B2 (en) 2024-03-05

Family

ID=69024262

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/352,744 Active US11924395B2 (en) 2018-12-21 2021-06-21 Device comprising a multi-aperture imaging device for generating a depth map

Country Status (6)

Country Link
US (1) US11924395B2 (en)
EP (2) EP3900317B1 (en)
CN (1) CN113366821B (en)
DE (1) DE102018222865A1 (en)
TW (1) TWI745805B (en)
WO (1) WO2020127262A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI818282B (en) * 2021-07-12 2023-10-11 明基電通股份有限公司 Projector device, projector system and method for calibrating projected image

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130278788A1 (en) * 2012-04-23 2013-10-24 Csr Technology Inc. Method for determining the extent of a foreground object in an image
US20130308036A1 (en) * 2012-05-02 2013-11-21 Aptina Imaging Corporation Image focus adjustment using stacked-chip image sensors
US20140240515A1 (en) * 2013-02-28 2014-08-28 Arnold & Richter Cine Technik Gmbh & Co. Betriebs Kg Motion picture camera and method for taking a sequence of moving images
US20140267849A1 (en) * 2011-11-04 2014-09-18 Imec Spectral camera with integrated filters and multiple adjacent image copies projected onto sensor array
US20150077760A1 (en) * 2013-09-03 2015-03-19 Universitat Stuttgart Robust One-Shot Interferometer and OCT Method for Material Measurement and Tumor Cell Recognition
US20150138423A1 (en) * 2013-10-18 2015-05-21 The Lightco Inc. Methods and apparatus relating to a camera including multiple optical chains
US20150163478A1 (en) * 2013-12-06 2015-06-11 Google Inc. Selecting Camera Pairs for Stereoscopic Imaging
US20150288894A1 (en) * 2011-11-04 2015-10-08 Imec Spectral camera with mirrors for projecting multiple adjacent image copies onto sensor array
US20150323760A1 (en) * 2014-05-07 2015-11-12 Canon Kabushiki Kaisha Focus adjustment apparatus, control method for focus adjustment apparatus, and storage medium
US20160069743A1 (en) * 2014-06-18 2016-03-10 Innopix, Inc. Spectral imaging system for remote and noninvasive detection of target substances using spectral filter arrays and image capture arrays
US20160182813A1 (en) * 2014-12-19 2016-06-23 Thomson Licensing Method and apparatus for generating an adapted slice image from a focal stack
US20170069097A1 (en) * 2015-09-04 2017-03-09 Apple Inc. Depth Map Calculation in a Stereo Camera System
US20170099439A1 (en) * 2015-10-05 2017-04-06 Light Labs Inc. Methods and apparatus for compensating for motion and/or changing light conditions during image capture
US20180241920A1 (en) * 2015-10-21 2018-08-23 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Device comprising a multi-aperture imaging device, method for producing same and method for capturing a total field of view
US20180324334A1 (en) * 2016-01-13 2018-11-08 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Multi-aperture imaging devices, methods for producing same and imaging system
US20190104242A1 (en) * 2016-01-13 2019-04-04 Fraunhofer-Gesellschft Zur Foerderung Der Angewandten Forschung E.V. Multi-aperture imaging device, imaging system and method for capturing an object area
US20210366968A1 (en) * 2018-03-15 2021-11-25 Photonic Sensors & Algorithms, S.L. Plenoptic camera for mobile devices

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012123296A (en) * 2010-12-10 2012-06-28 Sanyo Electric Co Ltd Electronic device
JP6076970B2 (en) 2011-06-21 2017-02-08 フロント、ストリート、インベストメント、マネジメント、インコーポレイテッド、アズ、マネジャー、フォー、フロント、ストリート、ダイバーシファイド、インカム、クラスFront Street Investment Management Inc., As Manager For Front Street Diversified Income Class Method and apparatus for generating three-dimensional image information
CN103827730B (en) * 2011-06-21 2017-08-04 管理前街不同收入阶层的前街投资管理有限公司 Method and apparatus for generating three-dimensional image information
CA2919985A1 (en) * 2013-07-31 2015-02-05 California Institute Of Technology Aperture scanning fourier ptychographic imaging
DE102013222780B3 (en) 2013-11-08 2015-04-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. MULTIAPERTURVÄRICHTUNG AND METHOD FOR DETECTING AN OBJECT AREA
DE102014212104A1 (en) * 2014-06-24 2015-12-24 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. DEVICE AND METHOD FOR THE RELATIVE POSITIONING OF A MULTI-PAPER UROPTIK WITH SEVERAL OPTICAL CHANNELS RELATIVE TO AN IMAGE SENSOR
KR101575964B1 (en) * 2014-07-01 2015-12-11 재단법인 다차원 스마트 아이티 융합시스템 연구단 Sensor array included in dual aperture camera
DE102014213371B3 (en) 2014-07-09 2015-08-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. DEVICE AND METHOD FOR DETECTING AN OBJECT AREA
US10394038B2 (en) * 2014-08-29 2019-08-27 Ioculi, Inc. Image diversion to capture images on a portable electronic device
DE102015215836B4 (en) 2015-08-19 2017-05-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multiaperture imaging device with a reflective facet beam deflection device
DE102015215840B4 (en) * 2015-08-19 2017-03-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. A multi-aperture imaging apparatus, imaging system, and method of providing a multi-aperture imaging apparatus
DE102015215841B4 (en) 2015-08-19 2017-06-01 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus with a multi-channel imaging device and method of making the same
DE102015215845B4 (en) * 2015-08-19 2017-06-22 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-aperture imaging device with channel-specific adjustability
DE102015215837A1 (en) 2015-08-19 2017-02-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-aperture imaging apparatus, method of making same and imaging system
DE102015215844B4 (en) 2015-08-19 2017-05-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. A multi-aperture imaging apparatus, a portable apparatus, and a method of manufacturing a multi-aperture imaging apparatus
DE102015215833A1 (en) 2015-08-19 2017-02-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-aperture imaging device with optical substrate
DE102015216140A1 (en) 2015-08-24 2017-03-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. 3D Multiaperturabbildungsvorrichtung
CN105093523B (en) * 2015-09-11 2017-10-31 哈尔滨工业大学 Multiple dimensioned multiple aperture optical imaging system
JP6792188B2 (en) * 2015-12-25 2020-11-25 株式会社リコー Optical scanning device and image display device
DE102016204148A1 (en) * 2016-03-14 2017-09-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-aperture imaging apparatus, imaging system and method for detecting an object area
DE102016208210A1 (en) * 2016-05-12 2017-11-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. 3D MULTI-PAPER PICTURE DEVICES, MULTI-PAPER IMAGING DEVICE, METHOD FOR PROVIDING AN OUTPUT SIGNAL OF A 3D MULTI-PAPER IMAGING DEVICE AND METHOD FOR DETECTING A TOTAL FACE
US10097777B2 (en) * 2016-06-03 2018-10-09 Recognition Robotics, Inc. Depth map from multi-focal plane images
US11042984B2 (en) * 2016-11-10 2021-06-22 Movea Systems and methods for providing image depth information
CN106775258A (en) * 2017-01-04 2017-05-31 虹软(杭州)多媒体信息技术有限公司 The method and apparatus that virtual reality is interacted are realized using gesture control
DE102017206442B4 (en) 2017-04-13 2021-01-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Device for imaging partial fields of view, multi-aperture imaging device and method for providing the same

Also Published As

Publication number Publication date
EP3900317B1 (en) 2024-01-17
CN113366821B (en) 2024-03-08
US11924395B2 (en) 2024-03-05
TW202037964A (en) 2020-10-16
DE102018222865A1 (en) 2020-06-25
WO2020127262A1 (en) 2020-06-25
EP4325875A2 (en) 2024-02-21
EP3900317A1 (en) 2021-10-27
TWI745805B (en) 2021-11-11
CN113366821A (en) 2021-09-07

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V., GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WIPPERMANN, FRANK;DUPARRE, JACQUES;REEL/FRAME:057616/0990

Effective date: 20210705

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE