EP2106657A2 - Anordnung und verfahren zur aufnahme und wiedergabe von bildern einer szene und/oder eines objektes - Google Patents
Anordnung und verfahren zur aufnahme und wiedergabe von bildern einer szene und/oder eines objektesInfo
- Publication number
- EP2106657A2 (application EP07817759A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- images
- depth
- image
- tuple
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/133—Equalising the characteristics of different image components, e.g. their average brightness or colour balance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/25—Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/388—Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
- H04N13/395—Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/194—Transmission of image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/282—Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/286—Image signal generators having separate monoscopic and stereoscopic modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- the invention relates to an arrangement and a method for recording and reproducing images of a scene and / or an object. They are particularly suitable for spatially perceptible reproduction of the recorded images. Furthermore, the invention relates to a method for transmitting spatially perceivable images.
- a depth camera can be used: in this case a color camera is combined with a depth sensor which registers, in a time-cyclic manner, the cyclopean depth information of the scene to be recorded.
- apart from a depth sensor being relatively expensive, it is disadvantageous that such sensors often do not work very accurately and/or that no reasonable compromise between accuracy and speed is achieved.
- a general extrapolation becomes necessary; in particular in the outer views, artifacts cannot be ruled out, and occlusion artifacts in general cannot be concealed.
- the object of the invention is to provide a new way of creating, with the least possible effort, recordings of real scenes and/or objects in order to reproduce them three-dimensionally, spatially perceptible, in two or more views. A further object of the invention is to provide a suitable method for transmitting spatially perceivable images.
- the object is achieved by an arrangement for recording images of a scene and / or an object and for their spatially perceptible reproduction, which comprises the following components:
- at least one main camera of a first camera type for taking pictures,
- at least two satellite cameras of a second camera type for taking pictures, the camera types differing in at least one parameter,
- an image conversion device in which, among other processes, a depth or disparity detection is carried out, wherein only images taken by cameras of the same camera type (preferably those taken by the at least two satellite cameras) are used for the depth or disparity determination, but not the remaining images, and
- a 3D image display device connected to the image conversion device for the aid-free, spatially perceptible reproduction of the provided image data, wherein the 3D image display device reproduces at least two views of the scene and/or of the object.
- the latter 3D display device can also reproduce 3, 4, 5, 6, 7, 8, 9 or even more views simultaneously or in a time average.
- the so-called "multi-view" 3D image display devices with 4 or more displayed views benefit in particular from the advantages of the invention, namely that with relatively few (e.g. three or four) cameras, more views can be provided in total than the number of cameras.
- main and satellite cameras generally differ, though not necessarily in quality.
- the main camera is usually a so-called high-quality camera, whereas satellite cameras of lower quality (e.g. industrial cameras) can be used, which therefore usually - but not necessarily - also have a lower resolution, among other parameters.
- the second type of camera has a lower resolution than the first type of camera.
- the two camera types can also differ (at least) by the built-in image capture chip.
- the advantage of the invention lies essentially in the fact that, in contrast to the classic use of a stereo camera system consisting of two essentially identical high-resolution cameras, a three-camera system is used, preferably consisting of a central high-quality camera and two additional low-resolution cameras located to the left and right of the main camera.
- a stereo camera system consisting here of essentially two identical high-resolution cameras
- a three-camera system, preferably consisting of a central high-quality camera and two additional low-resolution cameras located to the left and right of the main camera, is used.
- the main camera is arranged between the satellite cameras.
- the main camera is thus preferably arranged spatially between the satellite cameras.
- the cameras can be varied within the usual limits with respect to distance and orientation (parallel alignment or alignment toward a convergence point).
- additional satellite cameras can be advantageous, since misinterpretations can be further reduced, particularly in the subsequent preparation of the image data.
- it may therefore be advantageous that exactly one main camera and two satellite cameras are present (variant "1 + 2").
- all cameras can be aligned either in parallel or toward a common point. It is also possible that not all of them are aligned toward one point (convergence angle).
- the optical axes of the cameras can also lie both in one and in different planes, wherein the objective centers should be arranged in a line or in the triangle (preferably isosceles or equilateral).
- the lens center points of the cameras may each have unequal distances from one another (in which case the lens centers would form an irregular triangle). It is also possible that all (at least three) cameras, i.e. all existing main and satellite cameras, differ from each other in at least one parameter, for example the resolution. A synchronization of the cameras with respect to zoom, aperture, focus, etc. is advantageous.
- the cameras can be fixed or movable to each other, wherein an automatic adjustment of the base distance of the cameras as well as the convergence angle of the cameras is feasible.
- adapter systems can be used which make it easier to attach the satellite cameras in particular to the main camera.
- This allows ordinary cameras to be retrofitted as a 3D camera.
- additional optical elements, for example one or more partially transparent mirrors, may be present.
- two satellite cameras may each be rotated 90 degrees relative to the main camera, so that the camera bodies of all three cameras are arranged such that the lens centers are horizontally closer together than if all three cameras were arranged directly next to each other.
- otherwise, the extension of the camera bodies would force a certain, larger distance between the lens centers.
- a semi-transparent mirror at an angle of about 45 degrees to the central beam from the lenses of the satellite cameras would then be in reflection position, while the same mirror, at an angle of about 45 degrees to the central beam from the lens of the main camera, would be in transmission position.
- the objective center points of the main camera and of at least two satellite cameras preferably form an isosceles triangle, for example in variant "1 + 2".
- the objective center points of the three satellite cameras can form a triangle, preferably an isosceles triangle. The objective center of the main camera should then be located within said triangle, the triangle being considered to include its legs.
- a satellite camera and the main camera can be arranged optically relative to each other such that both record an image on substantially the same optical axis; for this purpose, preferably at least one partially transmissive mirror is arranged between the two cameras.
- the two further satellite cameras are preferably arranged on a straight line or in a triangle with the satellite camera associated with the main camera
- the image conversion device produces at least three views of the recorded scene or object, wherein, in addition to the detected depth or disparity values, the image taken by the at least one main camera and at least two further images from the satellite cameras (but not necessarily from all existing cameras) are used to create said at least three views.
- One of the at least three created views can still correspond to one of the input pictures.
- the image conversion device may even use only the picture taken by the at least one main camera and the associated depth information.
- the main camera(s) and all satellite cameras are preferably synchronized with frequency accuracy to a maximum tolerance of 100 images per 24 hours.
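To put this tolerance in perspective, here is a small illustrative computation; the 25 fps frame rate is an assumption made for the example and is not part of the text:

```python
# Illustration of the synchronization tolerance stated above: at most
# 100 images of drift per 24 hours, expressed as a relative frequency
# deviation.  The 25 fps frame rate is an assumption made here.
FRAME_RATE_HZ = 25
frames_per_day = FRAME_RATE_HZ * 24 * 3600      # 2,160,000 frames per day
max_drift_frames = 100                          # tolerance from the text

relative_deviation = max_drift_frames / frames_per_day
print(f"{relative_deviation:.2e}")  # about 4.63e-05 relative frequency accuracy
```

At 25 fps this corresponds to a required relative frequency accuracy of roughly 5 parts in 100,000, which is well within reach of a common genlock setup.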
- it may also be useful to design the satellite cameras as black-and-white cameras and preferably to automatically assign a color value to the images taken by them.
- the object is also achieved by a method for recording and reproducing images of a scene and / or an object, which comprises the following steps:
- the representation of the combined 3D image on the 3D display is performed.
- for this purpose, the images with one and the same resolution are used, namely the resolution that has the lowest total number of pixels compared to all other existing resolutions.
- the depth detection and subsequent generation of further views from the n-tuple of images and the depth or disparity detection values can be carried out, for example, by constructing a stack structure and projecting the stack structure onto a desired view.
- the construction of a stack structure can also be replaced by other applicable depth or disparity detection algorithms, in which case the detected depth or disparity values are used to create the desired views.
- a stack structure may generally correspond to a layered structure of graphical elements in different (virtual) planes.
- this resolution may correspond to the highest resolution of the cameras, but is preferably equal to that of the low-resolution camera(s).
- the rectification, i.e. a geometric equalization of the camera images (compensation for possible lens distortions, camera distortions, zoom differences, etc.), is performed.
- the sizing can also be done as part of the rectification process.
- a color adjustment is performed, for example according to the teaching of "Joshi, N.: Color Calibration for Arrays of Inexpensive Image Sensors, Technical Report CSTR 2004-02, Stanford University, 2004" and A. Ilie and G. Welch: "Ensuring color consistency across multiple cameras", ICCV 2005.
- the stack structure is built up for the depth detection, using as input only the images of the n-tuple with one and the same resolution. The images are compared with each other line by line (the line matching may also be skewed, which is beneficial if the cameras are not arranged horizontally to each other). If superimposed pixels have the same color values, the value is stored; if the superimposed pixels have different color values, no value is stored. The lines are then shifted against each other in opposite directions in defined steps (e.g. by ¼ or ½ pixel), and after each step the result of the comparison is stored again.
- the result is a three-dimensional stack structure with the coordinates X, Y and Z, where X and Y correspond to the pixel coordinates of the input image, while Z represents the degree of displacement of the views relative to each other.
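To make the construction concrete, here is a toy sketch for a single pair of scanlines, using whole-pixel shifts and exact value matches for brevity (the text uses ¼- or ½-pixel steps and color values); `build_stack_lines` is a name chosen here, not from the patent:

```python
import numpy as np

def build_stack_lines(line_a, line_b, max_level=7):
    """Toy sketch of the stack construction for one pair of scanlines:
    the lines are shifted against each other in opposite directions,
    plane by plane, and wherever the superimposed pixels carry the same
    value, that value is stored at (x, z); where they differ, nothing
    is stored.  Whole-pixel shifts only, for brevity."""
    stack = {}  # (x, z) -> stored value
    for z in range(-max_level, max_level + 1):
        shifted_a = np.roll(line_a, z)    # shift one line one way ...
        shifted_b = np.roll(line_b, -z)   # ... and the other the opposite way
        match = shifted_a == shifted_b
        for x in np.nonzero(match)[0]:
            stack[(int(x), z)] = int(shifted_a[x])
    return stack
```

For two identical lines with distinct values, matches occur only in plane 0; real images produce matches in several planes, and the subsequent optimization step resolves those ambiguities.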
- the depth is determined for at least three original images of the n-tuple, preferably in the form of depth maps. Preferably, at least two depth maps are created whose resolutions differ pairwise.
- a reconstruction is preferably carried out by inverse projection of the images of the n-tuple into the stack space by means of the depth maps, so that the stack structure is reconstructed; from there, different views can then be generated again by projection.
- Other methods of generating the views from the given image data are possible.
- alternatively, the original images of the n-tuple, each with the associated depth, can be transferred to the 3D image display device, and the reconstruction according to claim 17 is performed only then.
- the images of the n-tuple are created, for example, by means of a 3D camera system, e.g. a multiple-camera system consisting of several individual cameras.
- the images may be created by computer.
- a depth map is preferably created for each image, so that the steps of rectification, color adjustment and depth or disparity detection can be dispensed with.
- at least two of the three depth maps also have a different resolution from one another in pairs.
- the higher resolution image corresponds spatially to a perspective view that lies between the perspective views of the other two images.
- the 3D image display device used in each case may preferably display 2, 3, 4, 5, 6, 7, 8, 9 or even more views simultaneously or in the time average.
- with the so-called "multi-view" type 3D display devices having 4 or more views, the particular advantages of the invention come to fruition, namely that with relatively few (e.g. three) originally created images, more views can be provided for the spatial representation than the number of originally created images.
- this may also include a temporal (not just a spatial) combination of views.
- n images (e.g. original images or views)
- the disparity can be used in each case instead of the depth.
- a projection in principle also includes a pure shift.
- the images produced are transmitted to the image converting device.
- alternatively, all views of each image generated by the image conversion device can be transmitted to the 3D image display device.
- in the transmission channel, at least three images of the n-tuple are transmitted together with the respective depth information (preferably in the form of depth maps).
- particularly preferably, the fourth image is transmitted as well.
- for example, one high-resolution image and two lower-resolution images are transmitted along with the depth information.
- At least two of the determined depth maps can have a mutually different resolution.
- the depth information is in each case determined only from images of the n-tuple with one and the same resolution.
- the depth for at least one image with higher resolution is generated from the determined depth information.
- the depth information which has been determined on the basis of images of the n-tuple with the lowest available resolution can also be transformed into a higher resolution by means of edge detection in the at least one higher-resolution image.
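One simple way to realize such an edge-guided resolution increase is a joint upsampling in which each high-resolution pixel adopts the depth of the nearby low-resolution sample whose image color matches best, so depth discontinuities snap to edges in the higher-resolution image. This is a generic sketch under a factor-2 assumption, not the patent's specific procedure; `edge_guided_upsample` is a name chosen here:

```python
import numpy as np

def edge_guided_upsample(depth_lo, image_hi):
    """Upsample a low-resolution depth map by a factor of 2, letting each
    high-resolution pixel pick the depth of whichever nearby low-resolution
    sample is closest in colour (single-channel image for simplicity)."""
    h, w = depth_lo.shape
    H, W = image_hi.shape
    assert (H, W) == (2 * h, 2 * w), "factor-2 upsampling only in this sketch"
    depth_hi = np.empty((H, W), dtype=depth_lo.dtype)
    for y in range(H):
        for x in range(W):
            cy, cx = y // 2, x // 2
            best_cost, best_depth = None, None
            for ly in (max(cy - 1, 0), cy, min(cy + 1, h - 1)):
                for lx in (max(cx - 1, 0), cx, min(cx + 1, w - 1)):
                    # colour difference between this high-res pixel and the
                    # image sample at the low-res candidate's position
                    cost = abs(int(image_hi[y, x]) - int(image_hi[2 * ly, 2 * lx]))
                    if best_cost is None or cost < best_cost:
                        best_cost, best_depth = cost, depth_lo[ly, lx]
            depth_hi[y, x] = best_depth
    return depth_hi
```

On a test image with a sharp vertical edge, the upsampled depth boundary follows the image edge at full resolution instead of the blocky low-resolution grid.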
- the transmission channel may be, for example, a digital television signal, the Internet or a DVD (HD, SD, Blu-ray, etc.).
- MPEG-4 can be used advantageously.
- at least two of the three depth maps also have a different resolution from one another in pairs.
- the higher resolution image corresponds spatially to a perspective view that lies between the perspective views of the other two images.
- the 3D image display device used in each case may preferably display 2, 3, 4, 5, 6, 7, 8, 9 or even more views simultaneously or in the time average.
- with the so-called "multi-view" type 3D display devices with 4 or more displayed views, the particular advantages of the invention come to fruition, namely that with relatively few (e.g. three) originally created images, still more views can be provided than the number of originally created images.
- the reconstruction of different views from the transferred n-tuple of images together with the respective depth information, wherein at least two images of the n-tuple have pairwise different resolutions, can proceed for example as follows: in a three-dimensional coordinate system, the color information of each image, as viewed from a suitable direction, is arranged at the depth positions designated by the respective depth information associated with that image.
- this creates a three-dimensional colored volume of volume pixels, which can be reproduced from different perspectives or directions by a virtual camera or by parallel projections. In this way, advantageously, more than three views can be regenerated from the transmitted information. Other reconstruction algorithms for the views or images are possible. Regardless, the transmitted information can be reconstructed very universally, e.g. as (perspective) views, slices or volume pixels. Such image formats are of great advantage for special 3D presentation methods, such as volumetric 3D displays.
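The placement of color values at their depth positions and the subsequent re-projection can be illustrated with a minimal depth-image-based rendering sketch. This is an illustrative simplification, not the patent's exact algorithm; `render_view`, the `gain` parameter and the "larger value = nearer" depth convention are assumptions made here:

```python
import numpy as np

def render_view(color, depth, gain):
    """Place every pixel of one transmitted colour image at the depth given
    by its (grayscale) depth value and project it into a neighbouring view
    by a purely horizontal shift proportional to that depth (the text notes
    that a projection may reduce to a pure shift).  Larger depth values are
    taken to mean nearer; nearer pixels are drawn last so they win
    occlusions.  Positions where nothing lands remain 0 (holes)."""
    H, W = color.shape
    out = np.zeros_like(color)
    for d in np.unique(depth):               # far (small d) to near (large d)
        ys, xs = np.nonzero(depth == d)
        nx = xs + int(round(gain * d))       # shift proportional to depth
        ok = (nx >= 0) & (nx < W)
        out[ys[ok], nx[ok]] = color[ys[ok], xs[ok]]
    return out
```

Varying `gain` selects the virtual viewpoint; the holes left behind are the occlusion regions that a multi-camera n-tuple helps to fill.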
- additional meta-information for example in a so-called alpha channel, can also be transmitted.
- This may be additional information about the images, such as geometric relationships of the n> 2 images (such as relative angles to one another, camera parameters), transparency information or outline information.
- the object of the invention can also be achieved by a method for transmitting 3D information for the purpose of subsequent aid-free, spatially perceptible reproduction on the basis of at least three different views: starting from at least one n-tuple of images, with n > 2, which characterize different viewing angles of an object or a scene, the depth is determined for at least three images, and thereafter at least three images of the n-tuple are transmitted in a transmission channel together with the respective depth information (preferably in the form of depth maps).
- FIG. 1 shows a schematic view of the inventive arrangement with a main camera and three satellite cameras
- FIG. 2 a variant with a main camera and two satellite cameras
- FIG. 3 schematic representation of the stepwise displacement of two lines relative to one another and generation of the Z coordinates
- FIG. 4 Optimization scheme by elimination of ambiguities with respect to FIG. 3
- FIG. 5 Optimization scheme by reduction of the elements to a unique one
- FIG. 6 schematic representation of the stepwise displacement of three lines relative to one another and generation of the Z coordinates
- FIG. 7 Optimization scheme by elimination of ambiguities with respect to FIG. 6,
- FIG. 8 Optimization scheme by reduction of the elements to a unique one
- FIG. 9 schematic representation of a projection of a view from the stack structure
- Fig. 10 shows a schematic illustration for a picture combination of four pictures, suitable for tool-free spatial reproduction (prior art)
- Fig. 11 a schematic representation of the transmission method according to the invention
- an arrangement according to the invention consists essentially of a 3D camera system 1, an image conversion device 2 and a 3D image display device 3. According to FIG. 1, the 3D camera system 1 contains three satellite cameras 14, 15 and 16 and a main camera 13; the image conversion device 2 contains a rectification unit 21, a color adjustment unit 22, a unit for constructing the stack structure 23, a unit for optimizing the stack structure 24 and a unit for projecting the stack structure onto the desired view 25; and the 3D image display device 3 contains an image combination unit 31 and a 3D display 32, the 3D display 32 reproducing at least two views of a scene/object or other objects for spatial representation.
- the 3D display 32 may also operate based on 3, 4, 5, 6, 7, 8, 9 or even more views.
- for example, a 3D display 32 of the "Spatial View 19" type is suitable, which simultaneously represents 5 different views.
- the 3D camera system 1 includes a main camera 13, a first satellite camera 14, and a second satellite camera 15.
- the image conversion device 2 includes a rectification unit 21, a color adjustment unit 22, a unit for constructing the stack structure 23, a unit for optimizing the stack structure 24, a unit for projecting the stack structure onto the desired view 25 and a unit for determining the depth 26; the 3D image display device 3 includes, as shown in FIG. 2, a unit for reconstructing the stack structure 30, an image combination unit 31 and a 3D display 32.
- the 3D camera system 1 consists of a main camera 13 and two satellite cameras 14, 15; the main camera 13 is a so-called high-quality camera with high resolution, whereas the two satellite cameras 14, 15 are equipped with a lower resolving power.
- the camera positions relative to one another are, as usual, variable within known limits in terms of distance and orientation in order to be able to record stereoscopic images.
- in the rectification unit 21, an equalization of the camera images takes place where necessary, i.e. lens distortions, camera distortions, zoom differences, etc. are compensated.
- the rectification unit 21 is adjoined by the color adjustment unit 22.
- the color / brightness values of the recorded images are adjusted to a uniform level.
- the thus corrected image data are now supplied to the unit for constructing the stack structure 23.
- the input images are compared with each other line by line, but only those of the satellite cameras (14, 15 according to FIG. 2 or 14, 15, 16 according to FIG. 1).
- the comparison according to FIG. 3 is based on the comparison of only two lines in each case.
- first, two lines each with the same Y coordinate are superimposed, which corresponds to plane 0 according to FIG. 3.
- the comparison is carried out pixel by pixel and the result is stored as a Z coordinate according to the current comparison plane: superimposed pixels having the same color value retain it, whereas no color value is stored in the event of inequality.
- subsequently, the lines are each shifted by ½ pixel and assigned to plane 1, i.e. a next comparison takes place in plane 1, the result of which is stored in plane 1 (Z coordinate).
- the comparisons are generally carried out up to plane 7 and then from plane -1 to plane -7, and the results are respectively stored as Z coordinates in the corresponding plane.
- the number of levels corresponds to the maximum occurring depth information and may vary depending on the image content.
- in the three-dimensional structure thus constructed with the X, Y and Z coordinates, the degree of displacement of the views relative to one another is stored for each pixel via the associated Z coordinate.
- FIG. 6 corresponds to the embodiment of FIG. 1, except that three lines are compared here accordingly.
- the generated stack structure which is also distinguished by the fact that the input images are no longer present individually, is fed to the following unit for optimizing the stack structure 24.
- ambiguous mappings of picture elements are determined, with the aim of erasing such errors due to improbable combinations, so that a corrected data set according to FIG. 4 or also FIG. 7 is generated.
- from the remaining elements, a height profile line that is as flat or continuous as possible is created in order to achieve an unambiguous mapping of the color values into a discrete depth plane (Z coordinate).
- the results are shown in Fig. 5 and Fig. 8, respectively.
- each view to be generated is generated via the angle of the plane, as shown in FIG. 9.
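A toy sketch of this projection step, assuming the stack is a sparse mapping from (x, z) to a stored color value and the "angle of the plane" is expressed as a per-plane horizontal slope; `project_view` is a name chosen here, not from the patent:

```python
def project_view(stack, width, slope):
    """Project the stack structure onto one output line: an element stored
    at (x, z) lands at column x + slope*z, so the chosen slope selects the
    view.  Elements with larger z are assumed nearer and are drawn last,
    so they win collisions.  Columns nothing lands on stay None."""
    line = [None] * width
    for (x, z), colour in sorted(stack.items(), key=lambda kv: kv[0][1]):
        xx = x + int(round(slope * z))
        if 0 <= xx < width:
            line[xx] = colour
    return line
```

With slope 0 the stored central view is reproduced; varying the slope generates the neighbouring views to be shown on the 3D display.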
- all created views are available at the output of the image conversion device 2 and can thus be transferred to the subsequent 3D image display device 3 for stereoscopic playback, the combination of the different views according to the predetermined assignment rule of the 3D display 32 being carried out by means of the included image combination unit 31.
- here, the unit for optimizing the stack structure 24 is followed by the unit for determining the depth 26 (dashed line). By determining the depths of the images, a particularly efficient data transfer format is created: only three images and three depth images are transmitted, preferably in MPEG-4 format.
- on the input side of the 3D image display device 3, a unit for reconstructing the stack structure 30 is present, followed by the image combination unit 31 and a 3D display 32. The received images and depths can be converted again particularly efficiently by inverse projection into the stack structure in the unit for reconstructing the stack structure 30, so that the stack structure can be provided to the subsequent unit for projecting the stack structure onto the desired view 25.
- FIG. 10 shows, for better understanding, a schematic representation from the prior art (JP 08-331605) of an image combination of four images or views, suitable for aid-free spatial reproduction on a 3D display, for example on the basis of appropriate lenticular or barrier technology.
- the four images or views in the image combination unit 31 have been interwoven according to the image combination structure suitable for the 3D display 32.
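As an illustration of such an image combination step, here is the simplest column-cyclic interleaving. This only sketches the principle: actual displays each have their own assignment rule, often at subpixel level, and this is not the specific pattern of JP 08-331605:

```python
import numpy as np

def interleave_columns(views):
    """Interweave n equally sized views column-wise into one frame:
    column x of the output is taken from view (x mod n), the simplest
    assignment rule a lenticular/barrier display might use."""
    n = len(views)
    out = np.empty_like(views[0])
    for k in range(n):
        out[:, k::n] = views[k][:, k::n]   # every n-th column from view k
    return out
```

For four views, neighbouring columns thus cycle through views 0, 1, 2, 3, so the lens or barrier raster can send each view toward a different eye position.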
- FIG. 11 shows a schematic representation of the transmission method according to the invention. In this case, a total of 3 color images and 3 depth images (or in each case moving-image streams) are transmitted in an MPEG-4 data stream.
- one of the color image streams has a resolution of 1920 × 1080 pixels, while the other two have a resolution of 1280 × 720 (or 1024 × 768) pixels.
- the corresponding depth images are each transmitted in half horizontal and half vertical resolution, i.e. 960 × 540 pixels or 640 × 360 (or 512 × 384) pixels.
- in the simplest case, the depth images consist of grayscale images, e.g. with 256 or 1024 possible gray values per pixel, each gray value corresponding to a depth value.
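A quick arithmetic check of the example stream follows; all resolutions are taken from the text above:

```python
# Depth maps at half horizontal and half vertical resolution carry exactly
# one quarter of the pixels of their colour images.
colour_sizes = [(1920, 1080), (1280, 720), (1280, 720)]
depth_sizes = [(w // 2, h // 2) for (w, h) in colour_sizes]

assert depth_sizes == [(960, 540), (640, 360), (640, 360)]

colour_px = sum(w * h for w, h in colour_sizes)
depth_px = sum(w * h for w, h in depth_sizes)
print(depth_px / colour_px)  # 0.25: the depth data add 25% on top of the colour pixels
```

So the three depth streams cost only a quarter of the color pixel budget, which is what makes this transfer format efficient compared with sending additional full-resolution views.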
- in a further variant, the super-resolution color image would, for example, have 4096 × 4096 pixels and the other color images 2048 × 2048 or 1024 × 1024 pixels.
- the associated depth images are each transmitted in half horizontal and half vertical resolution. This variant would be advantageous if the same data set is to be used once for particularly high-resolution stereo presentations (e.g. in the 3D cinema with left/right images) and another time for less high-resolution 3D reproduction on 3D displays, then with at least two displayed views.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Stereoscopic And Panoramic Photography (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102006055641A DE102006055641B4 (de) | 2006-11-22 | 2006-11-22 | Anordnung und Verfahren zur Aufnahme und Wiedergabe von Bildern einer Szene und/oder eines Objektes |
PCT/DE2007/001965 WO2008061490A2 (de) | 2006-11-22 | 2007-10-29 | Anordnung und verfahren zur aufnahme und wiedergabe von bildern einer szene und/oder eines objektes |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2106657A2 true EP2106657A2 (de) | 2009-10-07 |
Family
ID=38596678
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP14168468.8A Ceased EP2800350A3 (de) | 2006-11-22 | 2007-04-27 | Anordnung und Verfahren zur Aufnahme und Wiedergabe von Bildern einer Szene und/oder eines Objektes |
EP07722343A Ceased EP2095625A1 (de) | 2006-11-22 | 2007-04-27 | Anordnung und verfahren zur aufnahme und wiedergabe von bildern einer szene und/oder eines objektes |
EP07817759A Withdrawn EP2106657A2 (de) | 2006-11-22 | 2007-10-29 | Anordnung und verfahren zur aufnahme und wiedergabe von bildern einer szene und/oder eines objektes |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP14168468.8A Ceased EP2800350A3 (de) | 2006-11-22 | 2007-04-27 | Anordnung und Verfahren zur Aufnahme und Wiedergabe von Bildern einer Szene und/oder eines Objektes |
EP07722343A Ceased EP2095625A1 (de) | 2006-11-22 | 2007-04-27 | Anordnung und verfahren zur aufnahme und wiedergabe von bildern einer szene und/oder eines objektes |
Country Status (5)
Country | Link |
---|---|
US (2) | US8330796B2 (de) |
EP (3) | EP2800350A3 (de) |
DE (1) | DE102006055641B4 (de) |
TW (2) | TWI347774B (de) |
WO (2) | WO2008064617A1 (de) |
Families Citing this family (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101387366B1 (ko) * | 2007-06-27 | 2014-04-21 | 삼성전자주식회사 | 다시점 입체 영상 표시 장치 및 영상 표시 방법 |
KR20080114169A (ko) * | 2007-06-27 | 2008-12-31 | 삼성전자주식회사 | 3차원 영상 표시방법 및 이를 적용한 영상기기 |
KR20090055803A (ko) * | 2007-11-29 | 2009-06-03 | 광주과학기술원 | 다시점 깊이맵 생성 방법 및 장치, 다시점 영상에서의변이값 생성 방법 |
KR101420684B1 (ko) * | 2008-02-13 | 2014-07-21 | 삼성전자주식회사 | 컬러 영상과 깊이 영상을 매칭하는 방법 및 장치 |
DE102008023501B4 (de) | 2008-05-09 | 2011-04-14 | Visumotion Gmbh | Verfahren und Anordnung zur synchronen Aufzeichnung von mindestens zwei Videodatenströmen unterschiedlicher Formate |
US8866920B2 (en) | 2008-05-20 | 2014-10-21 | Pelican Imaging Corporation | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
US8184143B2 (en) * | 2008-06-27 | 2012-05-22 | Sony Mobile Communications Ab | Simulated reflective display |
KR20110074823A (ko) | 2008-09-30 | 2011-07-04 | 파나소닉 주식회사 | 3d 영상에 관한 기록매체, 재생장치, 시스템 lsi, 재생방법, 안경 및 표시장치 |
US20100135640A1 (en) * | 2008-12-03 | 2010-06-03 | Dell Products L.P. | System and Method for Storing and Displaying 3-D Video Content |
WO2010087751A1 (en) * | 2009-01-27 | 2010-08-05 | Telefonaktiebolaget Lm Ericsson (Publ) | Depth and video co-processing |
US8520020B2 (en) * | 2009-12-14 | 2013-08-27 | Canon Kabushiki Kaisha | Stereoscopic color management |
KR101214536B1 (ko) * | 2010-01-12 | 2013-01-10 | 삼성전자주식회사 | 뎁스 정보를 이용한 아웃 포커스 수행 방법 및 이를 적용한 카메라 |
US8687044B2 (en) | 2010-02-02 | 2014-04-01 | Microsoft Corporation | Depth camera compatibility |
KR101647408B1 (ko) * | 2010-02-03 | 2016-08-10 | 삼성전자주식회사 | 영상 처리 장치 및 방법 |
DE102010020925B4 (de) | 2010-05-10 | 2014-02-27 | Faro Technologies, Inc. | Verfahren zum optischen Abtasten und Vermessen einer Umgebung |
US20120050495A1 (en) * | 2010-08-27 | 2012-03-01 | Xuemin Chen | Method and system for multi-view 3d video rendering |
KR20120020627A (ko) * | 2010-08-30 | 2012-03-08 | 삼성전자주식회사 | 3d 영상 포맷을 이용한 영상 처리 장치 및 방법 |
JP2012124704A (ja) * | 2010-12-08 | 2012-06-28 | Sony Corp | 撮像装置および撮像方法 |
KR101852811B1 (ko) * | 2011-01-05 | 2018-04-27 | 엘지전자 주식회사 | 영상표시 장치 및 그 제어방법 |
JP5058354B1 (ja) * | 2011-04-19 | 2012-10-24 | 株式会社東芝 | 電子機器、表示制御方法及びプログラム |
EP2761534B1 (de) | 2011-09-28 | 2020-11-18 | FotoNation Limited | Systeme zur kodierung von lichtfeldbilddateien |
US9161010B2 (en) | 2011-12-01 | 2015-10-13 | Sony Corporation | System and method for generating robust depth maps utilizing a multi-resolution procedure |
EP3869797B1 (de) | 2012-08-21 | 2023-07-19 | Adeia Imaging LLC | Verfahren zur tiefenerkennung in mit array-kameras aufgenommenen bildern |
EP2901671A4 (de) * | 2012-09-28 | 2016-08-24 | Pelican Imaging Corp | Erzeugung von bildern aus lichtfeldern mithilfe virtueller blickpunkte |
US10067231B2 (en) | 2012-10-05 | 2018-09-04 | Faro Technologies, Inc. | Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner |
DE102012109481A1 (de) | 2012-10-05 | 2014-04-10 | Faro Technologies, Inc. | Vorrichtung zum optischen Abtasten und Vermessen einer Umgebung |
US9386298B2 (en) * | 2012-11-08 | 2016-07-05 | Leap Motion, Inc. | Three-dimensional image sensors |
JP6285958B2 (ja) | 2013-01-15 | 2018-02-28 | モービルアイ ビジョン テクノロジーズ リミテッド | ローリングシャッターを伴うステレオ支援 |
US10484661B2 (en) * | 2013-08-06 | 2019-11-19 | Sony Interactive Entertainment Inc. | Three-dimensional image generating device, three-dimensional image generating method, program, and information storage medium |
CN104567758B (zh) * | 2013-10-29 | 2017-11-17 | 同方威视技术股份有限公司 | 立体成像系统及其方法 |
US9756316B2 (en) * | 2013-11-04 | 2017-09-05 | Massachusetts Institute Of Technology | Joint view expansion and filtering for automultiscopic 3D displays |
US9967538B2 (en) | 2013-11-04 | 2018-05-08 | Massachussetts Institute Of Technology | Reducing view transitions artifacts in automultiscopic displays |
TWI530909B (zh) | 2013-12-31 | 2016-04-21 | 財團法人工業技術研究院 | 影像合成系統及方法 |
CN104144335B (zh) * | 2014-07-09 | 2017-02-01 | 歌尔科技有限公司 | 一种头戴式可视设备和视频系统 |
US9693040B2 (en) * | 2014-09-10 | 2017-06-27 | Faro Technologies, Inc. | Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device |
DE102014013678B3 (de) | 2014-09-10 | 2015-12-03 | Faro Technologies, Inc. | Verfahren zum optischen Abtasten und Vermessen einer Umgebung mit einem Handscanner und Steuerung durch Gesten |
DE102014013677B4 (de) | 2014-09-10 | 2017-06-22 | Faro Technologies, Inc. | Verfahren zum optischen Abtasten und Vermessen einer Umgebung mit einem Handscanner und unterteiltem Display |
US9671221B2 (en) | 2014-09-10 | 2017-06-06 | Faro Technologies, Inc. | Portable device for optically measuring three-dimensional coordinates |
US9602811B2 (en) | 2014-09-10 | 2017-03-21 | Faro Technologies, Inc. | Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device |
US20160173869A1 (en) * | 2014-12-15 | 2016-06-16 | Nokia Corporation | Multi-Camera System Consisting Of Variably Calibrated Cameras |
US9621819B2 (en) | 2015-07-09 | 2017-04-11 | Chunghwa Picture Tubes, Ltd. | Electronic device and multimedia control method thereof |
TWI613904B (zh) * | 2016-06-17 | 2018-02-01 | 聚晶半導體股份有限公司 | 立體影像產生方法及使用此方法的電子裝置 |
US10469821B2 (en) * | 2016-06-17 | 2019-11-05 | Altek Semiconductor Corp. | Stereo image generating method and electronic apparatus utilizing the method |
JP6636963B2 (ja) * | 2017-01-13 | 2020-01-29 | 株式会社東芝 | 画像処理装置及び画像処理方法 |
DE102017120956A1 (de) | 2017-09-11 | 2019-03-14 | Thomas Emde | Einrichtung zur audio/visuellen Aufzeichnung und Wiedergabe von Bildern/Filmen |
US20200014909A1 (en) | 2018-07-03 | 2020-01-09 | Faro Technologies, Inc. | Handheld three dimensional scanner with autofocus or autoaperture |
DE102019109147A1 (de) * | 2019-04-08 | 2020-10-08 | Carl Zeiss Jena Gmbh | Bildaufnahmesystem und verfahren zum aufnehmen von bildern |
US11039113B2 (en) * | 2019-09-30 | 2021-06-15 | Snap Inc. | Multi-dimensional rendering |
AU2021211677B2 (en) * | 2020-01-21 | 2024-01-04 | Proprio, Inc. | Methods and systems for augmenting depth data from a depth sensor, such as with data from a multiview camera system |
US11503266B2 (en) * | 2020-03-06 | 2022-11-15 | Samsung Electronics Co., Ltd. | Super-resolution depth map generation for multi-camera or other environments |
CN112995432B (zh) * | 2021-02-05 | 2022-08-05 | 杭州叙简科技股份有限公司 | 一种基于5g双记录仪的深度图像识别方法 |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08201941A (ja) | 1995-01-12 | 1996-08-09 | Texas Instr Inc <Ti> | 3次元画像形成方法 |
EP0759255B1 (de) * | 1995-03-08 | 2001-05-02 | Koninklijke Philips Electronics N.V. | Dreidimensionales bildanzeigesystem |
JP3096613B2 (ja) | 1995-05-30 | 2000-10-10 | 三洋電機株式会社 | 立体表示装置 |
US6055012A (en) * | 1995-12-29 | 2000-04-25 | Lucent Technologies Inc. | Digital multi-view video compression with complexity and compatibility constraints |
US6023291A (en) * | 1996-10-16 | 2000-02-08 | Space Systems/Loral, Inc. | Satellite camera attitude determination and image navigation by means of earth edge and landmark measurement |
US6052124A (en) * | 1997-02-03 | 2000-04-18 | Yissum Research Development Company | System and method for directly estimating three-dimensional structure of objects in a scene and camera motion from three two-dimensional views of the scene |
US6271876B1 (en) * | 1997-05-06 | 2001-08-07 | Eastman Kodak Company | Using two different capture media to make stereo images of a scene |
US6269175B1 (en) * | 1998-08-28 | 2001-07-31 | Sarnoff Corporation | Method and apparatus for enhancing regions of aligned images using flow estimation |
EP1418766A3 (de) * | 1998-08-28 | 2010-03-24 | Imax Corporation | Method and apparatus for image processing
GB2343320B (en) | 1998-10-31 | 2003-03-26 | Ibm | Camera system for three dimentional images and video |
JP2000321050A (ja) | 1999-05-14 | 2000-11-24 | Minolta Co Ltd | 3次元データ取得装置および3次元データ取得方法 |
KR20010072074A (ko) * | 1999-05-27 | 2001-07-31 | 요트.게.아. 롤페즈 | 비디오 신호의 인코딩 |
US6198505B1 (en) * | 1999-07-19 | 2001-03-06 | Lockheed Martin Corp. | High resolution, high speed digital camera |
WO2001056265A2 (de) * | 2000-01-25 | 2001-08-02 | 4D-Vision Gmbh | Verfahren und anordnung zur räumlichen darstellung |
KR100375708B1 (ko) * | 2000-10-28 | 2003-03-15 | 전자부품연구원 | 3차원 입체영상을 위한 다시점 비디오 시스템 및영상제조방법 |
US6573912B1 (en) * | 2000-11-07 | 2003-06-03 | Zaxel Systems, Inc. | Internet system for virtual telepresence |
US20040104935A1 (en) * | 2001-01-26 | 2004-06-03 | Todd Williamson | Virtual reality immersion system |
US7224382B2 (en) * | 2002-04-12 | 2007-05-29 | Image Masters, Inc. | Immersive imaging system |
CA2386560A1 (en) * | 2002-05-15 | 2003-11-15 | Idelix Software Inc. | Controlling optical hardware and dynamic data viewing systems with detail-in-context viewing tools |
JP4229398B2 (ja) | 2003-03-28 | 2009-02-25 | 財団法人北九州産業学術推進機構 | 3次元モデリング・プログラム、3次元モデリング制御プログラム、3次元モデリング・データ伝送プログラム、記録媒体および3次元モデリング方法 |
US7532225B2 (en) * | 2003-09-18 | 2009-05-12 | Kabushiki Kaisha Toshiba | Three-dimensional image display device |
US7525541B2 (en) * | 2004-04-05 | 2009-04-28 | Actuality Systems, Inc. | Data processing for three-dimensional displays |
WO2005124687A1 (ja) | 2004-06-16 | 2005-12-29 | The University Of Tokyo | 光学式モーションキャプチャシステムにおけるマーカトラッキング方法、光学式モーションキャプチャ方法及びシステム |
US7468745B2 (en) * | 2004-12-17 | 2008-12-23 | Mitsubishi Electric Research Laboratories, Inc. | Multiview video decomposition and encoding |
DE102004061998A1 (de) * | 2004-12-23 | 2006-07-06 | Robert Bosch Gmbh | Stereokamera für ein Kraftfahrzeug |
JP4488996B2 (ja) * | 2005-09-29 | 2010-06-23 | 株式会社東芝 | 多視点画像作成装置、多視点画像作成方法および多視点画像作成プログラム |
- 2006-11-22 DE DE102006055641A patent/DE102006055641B4/de not_active Expired - Fee Related
- 2007-04-27 EP EP14168468.8A patent/EP2800350A3/de not_active Ceased
- 2007-04-27 WO PCT/DE2007/000786 patent/WO2008064617A1/de active Application Filing
- 2007-04-27 EP EP07722343A patent/EP2095625A1/de not_active Ceased
- 2007-04-27 US US11/920,290 patent/US8330796B2/en not_active Expired - Fee Related
- 2007-08-07 TW TW096128967A patent/TWI347774B/zh not_active IP Right Cessation
- 2007-10-29 WO PCT/DE2007/001965 patent/WO2008061490A2/de active Application Filing
- 2007-10-29 US US11/988,897 patent/US20100134599A1/en not_active Abandoned
- 2007-10-29 EP EP07817759A patent/EP2106657A2/de not_active Withdrawn
- 2007-11-20 TW TW096143841A patent/TW200841704A/zh unknown
Non-Patent Citations (1)
Title |
---|
See references of WO2008061490A2 * |
Also Published As
Publication number | Publication date |
---|---|
US8330796B2 (en) | 2012-12-11 |
EP2800350A3 (de) | 2014-11-12 |
WO2008061490A2 (de) | 2008-05-29 |
EP2095625A1 (de) | 2009-09-02 |
TWI347774B (en) | 2011-08-21 |
US20090315982A1 (en) | 2009-12-24 |
EP2800350A2 (de) | 2014-11-05 |
DE102006055641A1 (de) | 2008-05-29 |
DE102006055641B4 (de) | 2013-01-31 |
US20100134599A1 (en) | 2010-06-03 |
WO2008061490A3 (de) | 2008-08-28 |
TW200824427A (en) | 2008-06-01 |
WO2008064617A1 (de) | 2008-06-05 |
TW200841704A (en) | 2008-10-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2106657A2 (de) | Anordnung und verfahren zur aufnahme und wiedergabe von bildern einer szene und/oder eines objektes | |
DE69432692T2 (de) | Bildverarbeitungsvorrichtung und -verfahren. | |
DE69416671T2 (de) | Autostereoskopisches videogerät | |
AT394459B (de) | Verfahren zum gewinnen von bildern zur verwendung beim darstellen eines dreidimensionalen scheinbildes und bildaufzeichnungstraeger, auf dem erste und zweite gruppen von serien derartiger bilder gespeichert sind | |
EP1451775A1 (de) | Erzeugung einer stereo-bildfolge aus einer 2d-bildfolge | |
EP3427474B1 (de) | Bildverarbeitungsverfahren, bildverarbeitungsmittel und bildverarbeitungsvorrichtung zur erzeugung von abbildungen eines teils eines dreidimensionalen raums | |
DE69601577T2 (de) | Verfahren und einrichtung zur aufnahme stereoskopischer bilder | |
DE10340089A1 (de) | Sweet-Spot-Beamsplitter zur Bildtrennung | |
WO2009039800A1 (de) | Verfahren zur ausrichtung eines parallaxenbarriereschirms auf einem bildschirm | |
EP0907902B1 (de) | Verfahren zur dreidimensionalen bilddarstellung auf einer grossbildprojektionsfläche mittels eines laser-projektors | |
WO2017216263A1 (de) | Bilderfassungsvorrichtung, bilderfassungssystem, bildprojektionsvorrichtung, bildübertragungssystem, verfahren zum erfassen eines 360°-objektbereichs und verfahren zum projizieren eines bildes | |
DE102008024732A1 (de) | Medizinisch optisches Beobachtungsgerät und Verfahren zum Erstellen einer stereoskopischen Zwischenperspektive in einem derartigen Gerät | |
EP3900318A1 (de) | Vorrichtung mit einer multiaperturabbildungsvorrichtung zum akkumulieren von bildinformation | |
AT518256B1 (de) | Erzeugung eines für eine stereoskopische wiedergabe vorgesehenen panoramabilds und eine solche wiedergabe | |
WO2018133996A1 (de) | Verfahren zum kombinieren einer vielzahl von kamerabildern | |
WO2010022805A1 (de) | Anzeigesystem | |
DE102012108685B4 (de) | Multi-Konverter für digitale, hochauflösende, stereoskopische Videosignale | |
DE19528661A1 (de) | Kombi-Bildteiler | |
EP1941745A1 (de) | Verfahren und vorrichtung zur stereoskopischen aufnahme von objekten für eine dreidimensionale visualisierung | |
EP3244369B1 (de) | Verfahren zur wiedergabe von bildsequenzen und bildverarbeitungseinheit sowie computerprogramm hierzu | |
DE10001005A1 (de) | Stereoskopische Videoprojektion | |
WO1996031797A1 (de) | Verfahren und vorrichtung zur generierung von raumbildern | |
DE69803862T2 (de) | Autostereoskopisches bildanzeigegerät mit änderung des formats und dies enthaltendes system | |
EP0645937A2 (de) | Bildwiedergabesystem, Bildaufnahmegerät dafür, Verfahren zur Erzeugung einer Bilddarstellung und deren Verwendung | |
WO1993011644A1 (de) | Verfahren und vorrichtung zur wiedergabe dreidimensionaler bilder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed |
Effective date: 20090622 |
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR |
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: BILLERT, RONNY Inventor name: FERENC, TORMA Inventor name: REUSS, DAVID Inventor name: MEICHSNER, JENS Inventor name: FUESSEL, DANIEL Inventor name: SCHMIDT, ALEXANDER |
|
DAX | Request for extension of the european patent (deleted) |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
18D | Application deemed to be withdrawn |
Effective date: 20130501 |