WO2018158494A1 - Method and apparatus for a multi-camera unit - Google Patents

Method and apparatus for a multi-camera unit

Info

Publication number
WO2018158494A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera unit
captured
camera
scene
content
Prior art date
Application number
PCT/FI2018/050047
Other languages
English (en)
Inventor
Payman Aflaki Beni
Kimmo Roimela
Emre Baris Aksu
Original Assignee
Nokia Technologies Oy
Priority date
Filing date
Publication date
Application filed by Nokia Technologies Oy
Publication of WO2018158494A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/174 Segmentation; Edge detection involving the use of two or more images
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B37/00 Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B37/04 Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00 Stereoscopic photography
    • G03B35/08 Stereoscopic photography by simultaneous recording
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/168 Segmentation; Edge detection involving transform domain methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639 Details of the system layout
    • G08B13/19641 Multiple cameras having overlapping views on a single scene
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Definitions

  • The present invention relates to a method for a multi-camera unit, an apparatus for a multi-camera unit, and a computer program for a multi-camera unit.
  • Such multi-camera captured scenes can be reconstructed in three dimensions (3D) if the camera location and pose information is known.
  • Such a reconstruction's quality and coverage may depend on the distribution of the cameras and their capture capabilities.
  • A multi-camera unit comprises two or more cameras capable of capturing images and/or video.
  • The cameras may be positioned in different ways with respect to each other.
  • The cameras may be located at a short distance from each other and may face the same direction, so that a two-camera unit can provide a stereo view of the environment.
  • The multi-camera unit may comprise more than two cameras arranged in an omnidirectional manner. Hence, the viewing angle of such a multi-camera unit may be as wide as 360°. In other words, the multi-camera unit may be able to view practically all directions around itself.
  • Each camera of the multi-camera unit may produce images and/or video
  • The plurality of visual information captured by the different cameras may be combined to form an output image and/or video.
  • An image processor may use so-called extrinsic parameters of the multi-camera unit, such as orientation and relative position of the cameras, and possibly intrinsic parameters of the cameras, to control image warping operations which may be needed to provide a combined image in which details captured by different cameras are properly aligned.
  • Two or more cameras may capture at least partly the same areas of the environment, wherein the combined image should be formed so that the same areas from images of different cameras are located at the same location.
  • Various embodiments provide a method and apparatus for a multi-camera unit.
  • Areas in images captured by a first multi-camera unit which are blocked by a second multi-camera unit are modified on the basis of images captured by the blocking, second multi-camera unit.
  • a method comprising:
  • an apparatus comprising at least one processor and at least one memory including computer program code; the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform at least the following:
  • receive at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;
  • the replacing comprises utilizing information of the mutual location and orientation of the cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of the at least two multi-camera units.
  • an apparatus comprising at least one processor and at least one memory including computer program code; the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform at least the following:
  • the replacing comprises utilizing information of the mutual location and orientation of the cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of the at least two multi-camera units.
  • a computer readable storage medium stored with code thereon for use by an apparatus, which when executed by a processor, causes the apparatus to perform:
  • receive at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;
  • the replacing comprises utilizing information of the mutual location and orientation of the cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of the at least two multi-camera units.
  • Figure 1a shows an example of a multi-camera unit as a simplified block diagram, in accordance with an embodiment
  • Figure 1b shows a perspective view of a multi-camera unit, in accordance with an embodiment
  • Figure 2a illustrates an example in which a first multi-camera unit captures a scene where a part of a view of the first multi-camera unit is blocked by a second multi- camera unit, in accordance with an embodiment
  • Figure 2b illustrates an example in which a first multi-camera unit captures a scene where a part of a view of the first multi-camera unit is blocked by at least a second multi-camera unit and in which also a third multi-camera unit is viewing the same scene, in accordance with an embodiment
  • Figure 2c illustrates an example in which a first multi-camera unit captures a scene where a part of a view of the first multi-camera unit is blocked by at least a second multi-camera unit and in which also a third camera unit is viewing the same scene, in accordance with an embodiment
  • Figure 3a illustrates an example of an image captured by the first multi-camera unit of the setup of Figure 2a in which the second multi-camera unit is visible, in accordance with an embodiment
  • Figure 3b illustrates the image of Figure 3a modified so that the area where the second multi-camera unit was visible is replaced with image information based on at least an image captured by the second multi-camera unit, in accordance with an embodiment
  • Figure 4a illustrates another example of an image captured by the first multi-camera unit of the setup of Figure 2a in which the second multi-camera unit is visible, in accordance with an embodiment
  • Figure 4b illustrates the image of Figure 4a modified so that the area where the second multi-camera unit was visible is partly replaced with image information based on an image captured by the second multi-camera unit and partly with image information based on images captured by both the first multi-camera unit and the second multi-camera unit, in accordance with an embodiment
  • Figures 5a-5c show yet another example of eliminating a second multi-camera unit from an image captured by the first multi-camera unit, in accordance with an embodiment
  • Figure 6 shows a flowchart of a method of correcting captured images, in accordance with an embodiment
  • Figure 7 shows a flowchart of a method of a three-dimensional reconstruction from multiple views of a first multi-camera unit, in accordance with another embodiment
  • Figure 8 shows a simplified block diagram of a system comprising a plurality of multi-camera units, in accordance with an embodiment
  • Figure 9 shows a schematic block diagram of an exemplary apparatus or electronic device
  • Figure 10 shows an apparatus according to an example embodiment
  • Figure 11 shows an example of an arrangement for wireless communication comprising a plurality of apparatuses, networks and network elements.
  • Figure 1a illustrates an example of a multi-camera unit 100, which comprises two or more cameras 102.
  • In this example the number of cameras 102 is eight, but the number may also be smaller or greater than eight.
  • Each camera 102 is located at a different location in the multi-camera unit and may have a different orientation with respect to other cameras 102.
  • The cameras 102 may have an omnidirectional constellation so that the multi-camera unit 100 has a 360° viewing angle in 3D space.
  • Such a multi-camera unit 100 may be able to view every direction of a scene, so that each spot of the scene around the multi-camera unit 100 can be viewed by at least one camera 102.
  • any two cameras 102 of the multi-camera unit 100 may be regarded as a pair of cameras 102.
  • a multi-camera unit of two cameras has only one pair of cameras
  • a multi-camera unit of three cameras has three pairs of cameras
  • a multi-camera unit of four cameras has six pairs of cameras, etc.
  • A multi-camera unit 100 comprising N cameras 102, where N is an integer greater than one, has N(N-1)/2 pairs of cameras 102. Accordingly, images captured by the cameras 102 at a certain time may be considered as N(N-1)/2 pairs of captured images.
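The pair count stated above can be checked with a short sketch; this is illustrative plain Python, not code from the patent:

```python
from itertools import combinations

def camera_pairs(n):
    """Return all unordered pairs of camera indices for an n-camera unit."""
    return list(combinations(range(n), 2))

# N cameras yield N*(N-1)//2 pairs, matching the text above.
assert len(camera_pairs(2)) == 1          # two cameras: one pair
assert len(camera_pairs(3)) == 3          # three cameras: three pairs
assert len(camera_pairs(4)) == 6          # four cameras: six pairs
assert len(camera_pairs(8)) == 8 * 7 // 2  # the eight-camera example: 28 pairs
```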
  • The multi-camera unit 100 of Figure 1a may also comprise a processor 104 for controlling the operations of the multi-camera unit 100, a memory 106 for storing data and computer code to be executed by the processor 104, and a transceiver 108 for communicating with, for example, a communication network and/or other devices in a wireless and/or wired manner.
  • The multi-camera unit 100 may further comprise a user interface (UI) 110 for displaying information to the user, for generating audible signals and/or for receiving user input.
  • the multi-camera unit 100 need not comprise each feature mentioned above, or may comprise other features as well.
  • The multi-camera unit 100 of Figure 1a may also comprise devices 128 to capture ranging information, i.e. the depth of the scene.
  • Such sensors enable the device to calculate the respective depth information of the scene content from the multi-camera unit.
  • Such information can be used to create a depth map and may be used in the subsequent processes of this application.
  • a depth map image may be considered to represent the values related to the distance of the surfaces of the scene objects from a reference location, for example a view point of an observer.
  • a depth map image is an image that may include per-pixel depth information or any similar information.
  • each sample in a depth map image represents the distance of the respective texture sample or samples from the plane on which the camera lies. In other words, if the z axis is along the shooting axis of the cameras (and hence orthogonal to the plane on which the cameras lie), a sample in a depth map image represents the value on the z axis.
  • When depth map images are generated containing a depth value for each pixel in the image, they can be depicted as gray-level images or as images containing only the luma component.
  • chroma components of the depth map images may be set to a pre-defined value, such as a value indicating no chromaticity, e.g. 128 in typical 8-bit chroma sample arrays, where a zero chromaticity level is arranged into the middle of the value range.
  • chroma components of depth map images may be used to contain other picture data, such as any type of monochrome auxiliary pictures, such as alpha planes.
  • N denotes the number of bits representing the depth map values, and Znear and Zfar are the respective distances of the closest and farthest objects in the scene to the camera (mostly available from the content provider).
  • semantics of depth map values may for example include the following:
  • Each luma sample value in a coded depth view component represents an inverse of real-world distance (Z) value, i.e. 1/Z, normalized in the dynamic range of the luma samples, such as to the range of 0 to 255, inclusive, for 8-bit luma representation.
  • The normalization may be done in a manner where the quantization of 1/Z is uniform in terms of disparity.
  • Each luma sample value in a coded depth view component represents an inverse of real-world distance (Z) value, i.e.
  • Each luma sample value in a coded depth view component represents a real- world distance (Z) value normalized in the dynamic range of the luma samples, such as to the range of 0 to 255, inclusive, for 8-bit luma representation.
  • Each luma sample value in a coded depth view component represents a disparity or parallax value from the present depth view to another indicated or derived depth view or view position.
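As a sketch of the first semantics above (inverse-depth normalization), the function below maps a real-world distance Z to an N-bit luma value so that quantization is uniform in 1/Z; the function and its argument names are illustrative, not taken from the patent text:

```python
def depth_to_luma(z, z_near, z_far, n_bits=8):
    """Map a real-world distance Z to an N-bit inverse-depth luma sample.

    1/Z is normalized to the range [0, 2**n_bits - 1], so the closest
    objects (Z = z_near) map to the top of the range and the farthest
    (Z = z_far) to zero; quantization is uniform in terms of disparity.
    """
    max_val = (1 << n_bits) - 1
    inv = 1.0 / z
    inv_near = 1.0 / z_near   # closest object -> largest 1/Z
    inv_far = 1.0 / z_far     # farthest object -> smallest 1/Z
    return round(max_val * (inv - inv_far) / (inv_near - inv_far))

# For 8-bit luma, the closest object maps to 255 and the farthest to 0.
assert depth_to_luma(1.0, z_near=1.0, z_far=100.0) == 255
assert depth_to_luma(100.0, z_near=1.0, z_far=100.0) == 0
```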
  • Figure 1a also illustrates some operational elements which may be implemented, for example, as computer code in the software of the processor, in hardware, or both.
  • An occlusion determination element 114 may determine which areas of a panorama image are blocked (occluded) by other multi-camera unit(s); a 2D to 3D converting element 116 may convert 2D images to 3D images and vice versa; and an image reconstruction element 118 may reconstruct images so that occluded areas are reconstructed using image information of the blocking multi-camera unit 100.
  • the multi-camera units 100 comprise a location determination unit 124 and an orientation determination unit 126, wherein these units may provide the location and orientation information to the system.
  • The location determination unit 124 and the orientation determination unit 126 may also be implemented as one unit. The operation of the elements will be described later in more detail. It should be noted that there may also be other operational elements in the multi-camera unit 100 than those depicted in Figure 1a, and/or some of the above mentioned elements may be implemented in some other part of a system than the multi-camera unit 100.
  • Figure 1b shows, as a perspective view, an example of an apparatus comprising the multi-camera unit 100.
  • Seven cameras 102a-102g can be seen, but the multi-camera unit 100 may comprise even more cameras which are not visible from this perspective.
  • Figure lb also shows two microphones 112a, 112b, but the apparatus may also comprise one or more than two microphones.
  • the multi-camera unit 100 may be controlled by another device (not shown), wherein the multi-camera unit 100 and the other device may communicate with each other and a user may use a user interface of the other device for entering commands, parameters, etc. and the user may be provided information from the multi-camera unit 100 via the user interface of the other device.
  • A camera space, or camera coordinates, stands for the coordinate system of an individual camera 102, whereas a world space, or world coordinates, stands for the coordinate system of the multi-camera unit 100 as a whole.
  • An optical flow may be used to describe how objects, surfaces, and edges in a visual scene move or transform when an observing point moves from the location of one camera to the location of another camera. In fact, there need not be any actual movement; it may be determined virtually how the view of the scene might change when the viewing point is moved from one camera to another.
  • a parallax can be regarded as a displacement or difference in the apparent position of an object when it is viewed along two different lines of sight. The parallax may be measured by the angle or semi-angle of inclination between those two lines.
  • Intrinsic parameters 120 may comprise, for example, focal length, image sensor format, and principal point.
  • Extrinsic parameters 122 denote the coordinate system transformations from 3D world space to 3D camera space. Equivalently, the extrinsic parameters may be used to define the position of the camera center and the camera's heading in world space.
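The world-to-camera transformation described above can be sketched as follows. The rotation R, translation t, focal length f, and principal point (cx, cy) here are hypothetical illustration values under a simple pinhole model, not parameters from the patent:

```python
def world_to_camera(p_world, R, t):
    """Apply extrinsics: p_cam = R @ p_world + t, with R a 3x3 row-major matrix."""
    return [sum(R[i][j] * p_world[j] for j in range(3)) + t[i] for i in range(3)]

def project(p_cam, f, cx, cy):
    """Pinhole projection (intrinsics) of a camera-space point to pixel coordinates."""
    x, y, z = p_cam
    return (f * x / z + cx, f * y / z + cy)

# Identity orientation; camera origin 2 units behind the world origin along z.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 2.0]
p = world_to_camera([1.0, 0.0, 2.0], R, t)
assert p == [1.0, 0.0, 4.0]
assert project(p, f=100.0, cx=320.0, cy=240.0) == (345.0, 240.0)
```

Given such per-camera extrinsics and intrinsics, an image processor can predict where a world point lands in each camera's image, which is what the warping and alignment steps above rely on.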
  • Figure 8 is a simplified block diagram of a system 800 comprising a plurality of multi-camera units 130, 140, 150.
  • Different multi-camera units are referred to with different reference numerals for clarity, although each multi-camera unit 130, 140, 150 may have similar elements to the multi-camera unit 100 of Figure 1a.
  • The individual cameras of each multi-camera unit 130, 140, 150 will be referred to by different reference numerals 132, 132a-132g, 142, 142a-142g, 152, 152a-152g, although each camera may be similar to the cameras 102a-102g of the multi-camera unit 100 of Figure 1a.
  • The reference numerals 132, 142, 152 will be used when referring to any camera of the multi-camera unit 130, the multi-camera unit 140, or the multi-camera unit 150, respectively.
  • The reference numerals 132a-132g, 142a-142g, 152a-152g will be used when referring to a particular camera of the multi-camera unit 130, the multi-camera unit 140, or the multi-camera unit 150, respectively.
  • Although Figure 8 only depicts three multi-camera units 130, 140, 150, the system may have two multi-camera units 130, 140 or more than three multi-camera units. It is assumed that the system 800 has information about the location and orientation of each of the multi-camera units 130, 140, 150 of the system. The location and orientation information may have been stored into a camera database 810.
  • This information may have been entered manually or the system 800 may comprise elements which can determine the location and orientation of each of the multi-camera units 130, 140, 150 of the system. If the location and/or the orientation of any of the multi-camera units 130, 140, 150 changes, the changed location and/or orientation information may be updated in the camera database 810.
  • the system 800 may be controlled by a controller 802, which may be a server or another appropriate element capable of communicating with the multi-camera units 130, 140, 150 and the camera database 810.
  • The location and/or the orientation of the multi-camera units 130, 140, 150 may not be stored in the database 810 but only in each individual multi-camera unit 130, 140, 150. Hence, the location and/or the orientation of the multi-camera units 130, 140, 150 may be requested from the multi-camera units 130, 140, 150 when needed.
  • the first multi-camera unit 130 may request that information from the second multi-camera unit 140. If some information regarding the second multi-camera unit 140 is still needed, the first multi-camera unit 130 may request the missing information from the controller 802, for example.
  • FIG. 2a illustrates an example in which a first multi-camera unit 130 captures a scene where a part of a view of the first multi-camera unit 130 is blocked by a second multi-camera unit 140.
  • One or more of the cameras 132 of the first multi-camera unit 130 have a view of the scene 204 in which the second multi-camera unit 140 is present.
  • Each multi-camera unit 130, 140 knows the intrinsic/extrinsic parameters of the cameras 132, 142 mounted in it (block 602 in Figure 6). These cameras 132, 142 will also be called internal cameras 132, 142 in this specification. The mutual location and orientation of the internal cameras 132, 142 of the same multi-camera unit 100 will normally remain the same. These parameters and the orientation information of the multi-camera unit 100 may be used to determine the views of the individual cameras 132a-132g, 142a-142g of the multi-camera unit 130, 140 (block 604).
  • The first multi-camera unit 130 may obtain information of other multi-camera units of the system from the camera database 810. Also the second multi-camera unit 140 and possible other multi-camera units 100 may obtain information of other multi-camera units of the system from the camera database 810.
  • The first multi-camera unit 130 may obtain information of the location of the other multi-camera units 140, 150, e.g. from the camera database 810 (block 606), and use this information to determine in which directions, with respect to the first multi-camera unit 130, there are other multi-camera units 100. On the basis of the location information, the first multi-camera unit 130 may then determine the locations in the views of the cameras 132 of the first multi-camera unit 130 which are blocked by another multi-camera unit 100. Such areas may also be called occluded areas.
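One simple way to test whether another multi-camera unit lies within a camera's view, given the location information discussed above, is an angle test between the camera's viewing direction and the direction to the other unit. This is an illustrative sketch, not the patent's method; the half field-of-view threshold is a hypothetical parameter:

```python
import math

def is_in_view(cam_pos, cam_dir, other_pos, half_fov_deg):
    """True if `other_pos` falls inside the cone defined by the camera's
    viewing direction `cam_dir` (assumed unit length) and half-FOV angle."""
    dx = [o - c for o, c in zip(other_pos, cam_pos)]
    norm = math.sqrt(sum(d * d for d in dx))
    if norm == 0.0:
        return False  # same position: no meaningful direction
    cos_angle = sum(d * v for d, v in zip(dx, cam_dir)) / norm
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= half_fov_deg

# A second unit straight ahead of a forward-looking camera: possible occlusion.
assert is_in_view((0, 0, 0), (0, 0, 1), (0, 0, 5), half_fov_deg=45)
# A second unit behind the camera: not in this camera's view.
assert not is_in_view((0, 0, 0), (0, 0, 1), (0, 0, -5), half_fov_deg=45)
```

Running such a test per internal camera against the known positions of the other multi-camera units yields the set of cameras whose views may contain occluded areas.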
  • The location of at least one of the multi-camera units may change during the capturing process. Such changes may be communicated between the multi-camera units in order to keep track of the location and direction information of the other multi-camera units and to have this information available at all times.
  • The first multi-camera unit 130 may determine the scene (view) a first camera 132a of the first multi-camera unit 130 sees and compare it to the location of the second multi-camera unit 140 (block 608). If the comparison reveals that the second multi-camera unit 140 is within the scene of the first camera 132a (i.e. a picture of the second multi-camera unit 140 will appear in an image captured by the first camera 132a), the first multi-camera unit 130 may decide to perform reconstruction of the occluded area in the view of the first camera 132a (blocks 610, 612, 614 in Figure 6).
  • The first multi-camera unit 130 may perform a similar determination for the other cameras 132b-132g of the first multi-camera unit 130, and if the second multi-camera unit 140 or another multi-camera unit 100 is within a view of any of the other cameras 132b-132g, reconstruction of the occluded areas of the views may be performed.
  • The reconstruction of the occluded areas of the views also means that the blocking multi-camera unit 140 will not be visible in the reconstructed image; i.e., the picture of the blocking multi-camera unit 140 will be removed from the image.
  • Similar analyses may be performed by other multi-camera units 100 of the system 800 (block 616).
  • Figure 2a illustrates an example where an occluded area 208 (depicted as a
  • the first multi-camera unit 130 may also use intrinsic camera parameters of the first multi-camera unit 130 to determine the individual cameras 132 which are viewing at least partly towards the second multi-camera unit 140 and have at least partly blocked view. This information may be used in the reconstruction process where the occluded area is constructed from image information from the blocking multi-camera unit 100, which is the second multi-camera unit 140 in the example of Figure 2a. In the following example embodiment the second multi-camera unit 140 will be used as an example of the blocking multi-camera unit 100.
  • Removal of the blocking effect of a multi-camera 140 may be performed in
  • the first multi-camera unit 130 may capture one or more images by cameras 132 of the first multi-camera unit 130. These images may be in a two-dimensional (2D) format.
  • the occlusion determination element 114 may determine locations in the images in which another multi-camera unit 140, 150 is visible. Then, the occlusion determination element 114 may determine which multi-camera unit is in the image. Such determination is performed based on the awareness of each multi-camera device regarding the location of other multi-camera devices.
  • the first multi-camera unit 130 may then try to obtain images from at least the second multi-camera unit 140 so that the images are captured by those cameras 142 of the second multi-camera unit 140 which are viewing the occluded scene 206.
  • When the images are received by the first multi-camera unit 130, this image information may be used to reconstruct the occluded area(s).
  • The first multi-camera unit 130 may decide to use image information captured from more than one other multi-camera unit 140, 150 to render the occluded area 206.
  • An example of this is depicted in Figure 2b.
  • The first multi-camera unit 130 may decide to use image information captured by and received from the second multi-camera unit 140 and the third, two-dimensional image capturing camera 160 to render the occluded area 206.
  • An example of this is depicted in Figure 2c.
  • such a two-dimensional image capturing camera 160 may be used instead of or in addition to one or more of the possible third, fourth etc. multi-camera units.
  • Figure 3a illustrates an example of an image captured by the first multi-camera unit 130 of the setup of Figure 2a; and Figure 3b illustrates the image of Figure 3a modified so that the area where the second multi-camera unit 140 was visible is replaced with image information from the second multi-camera unit 140, in accordance with an embodiment.
  • the cross-hatched area 304 in Figure 3b illustrates the reconstructed area.
  • the replacement may utilize the images captured by at least the second multi-camera unit 140 directly to replace the occluded parts of the image from the first multi-camera unit 130.
  • the images from the second multi-camera unit 140 are upsampled prior to filling the occluded parts of the images from the first multi-camera unit 130.
  • the image reconstruction element 118 may use, in the reconstruction process, pixels both from the original image captured by the first multi-camera unit 130 and from the images captured by the second multi-camera unit 140 corresponding to the occluded area.
  • the content of the second multi-camera unit 140 is closer to the captured scene 204 compared to the first multi-camera unit 130 and hence covers fewer pixels compared to the view captured from the first multi-camera unit 130.
  • the gap 406 in between the smaller area 408 and the non-occluded area 410 may be filled by interpolating the pixels between the smaller area 408 and the non-occluded area 410. This is illustrated in Figures 4a and 4b.
  • Figure 4a illustrates an image captured by a camera 132 of the first multi-camera unit 130
  • Figure 4b illustrates the image of Figure 4a modified so that the area where the second multi-camera unit 140 is visible is partly replaced with image information based on an image captured by one or more cameras 142 of the second multi-camera unit 140 and partly with image information based on images captured by both the first multi-camera unit 130 and the second multi-camera unit 140, in accordance with an embodiment.
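The gap filling of Figures 4a and 4b can be illustrated in one dimension: the pixels of the gap 406 are linearly interpolated between the last pixel of the non-occluded area 410 and the first pixel of the smaller replaced area 408. The helper below is a hypothetical sketch of that interpolation, not the patent's algorithm:

```python
import numpy as np

def fill_gap_1d(row: np.ndarray, gap: slice) -> np.ndarray:
    """Linearly interpolate the pixels of `gap` between its two boundary
    pixels: the last valid pixel before the gap and the first after it."""
    out = row.astype(float).copy()
    left, right = gap.start - 1, gap.stop      # boundary pixel indices
    n = gap.stop - gap.start
    t = np.arange(1, n + 1) / (n + 1)          # interpolation weights
    out[gap] = (1 - t) * out[left] + t * out[right]
    return out

# A scan line: non-occluded area (50), a 3-pixel gap (0), replaced area (90).
row = np.array([50, 50, 0, 0, 0, 90, 90], dtype=float)
filled = fill_gap_1d(row, slice(2, 5))
# The gap positions become 60, 70, 80.
```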
  • the information from the second multi-camera unit 140 may be achieved based on the information of more than one camera 142 of the second multi-camera unit 140.
  • the content from more than one camera may be stitched to create a best presentation from the viewing direction of camera 132 of the first multi-camera unit 130.
  • the image reconstruction element 118 uses, in the reconstruction process, pixels of the images captured by the second multi-camera unit 140 corresponding to the occluded area, so that a zooming operation is performed: a zoomed-out version of the second multi-camera unit's 140 content will be presented in the first multi-camera unit's 130 view to compensate for the physical distance between the multi-camera units 130, 140.
  • the zooming out may be relative to the distance/orientation difference between the first and second multi-camera units 130, 140.
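A minimal sketch of such a distance-dependent zoom-out (an assumption about one plausible geometry, not the patent's formula): apparent size falls off roughly with distance, so content from the closer second unit 140 is scaled by the ratio of the two units' distances to the scene before being pasted into the first unit's 130 view.

```python
import numpy as np

def zoom_factor(dist_first: float, dist_second: float) -> float:
    # Apparent size scales ~1/distance, so content captured by the closer
    # (second) unit is scaled down by dist_second / dist_first (< 1).
    return dist_second / dist_first

def zoom_out(patch: np.ndarray, factor: float) -> np.ndarray:
    # Nearest-neighbour down-scaling of a patch (no external dependencies).
    h, w = patch.shape[:2]
    nh, nw = max(1, round(h * factor)), max(1, round(w * factor))
    ys = np.minimum((np.arange(nh) / factor).astype(int), h - 1)
    xs = np.minimum((np.arange(nw) / factor).astype(int), w - 1)
    return patch[np.ix_(ys, xs)]

f = zoom_factor(dist_first=8.0, dist_second=4.0)  # second unit twice as close
patch = np.arange(16).reshape(4, 4)
small = zoom_out(patch, f)                         # 4x4 -> 2x2
```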
  • the second multi-camera unit 140 is blocking views of more than one camera 132 of the first multi-camera unit 130. Similar operations may be made for images from each such camera 132 of the first multi-camera unit 130.
  • more than one multi-camera unit 100 is blocking a view of a camera 132 of the first multi-camera unit 130.
  • location and/or orientation information and images from each such multi-camera units 100 may be obtained and utilized in the reconstruction process.
  • the blocked part of each internal camera on a first multi-camera unit 130 may be covered by the captured content from one or more internal cameras of the second multi-camera unit 140.
  • the reconstruction operation can be performed by only using two-dimensional features, or by first transforming two-dimensional images to a three-dimensional scene, making corrections and changes in the three-dimensional reconstruction and then back- projecting to two-dimensional representation for each occluded camera 132.
  • the reconstruction operation can also be performed taking into account the ranging information of the scene. Such depth information may be utilized to synthesize a view from the available images of the second multi-camera unit 140 to be well aligned with the viewing direction of the camera 132 from the first multi-camera unit 130.
  • the occluded area of views from camera 132 of the first multi-camera unit 130 may also be reconstructed not only from the cameras of the first and second multi-camera units 130, 140, but also from the content captured by cameras of a third multi-camera unit 150.
  • a rendering algorithm may be used to render the required view to replace the occluded area based on the image information available from the other multi-camera units.
  • the depth information may also be used to render the views between the images from different multi-camera units taking into account any depth image based rendering (DIBR) algorithms.
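A DIBR-style warp of a single pixel can be written down directly from the pinhole camera model: back-project the pixel with its depth into 3-D using the source intrinsics, transform it with the relative pose between the two units, and re-project it into the target camera. The intrinsics and poses below are made-up values for illustration, not parameters from the patent:

```python
import numpy as np

def dibr_warp_pixel(u, v, depth, K_src, K_dst, R, t):
    """Depth-image-based rendering for one pixel: back-project (u, v) with
    its depth into 3-D, move it into the target camera frame with (R, t),
    and re-project with the target intrinsics. Returns target coordinates."""
    ray = np.linalg.inv(K_src) @ np.array([u, v, 1.0])
    X = depth * ray                  # 3-D point in the source camera frame
    Xt = R @ X + t                   # same point in the target camera frame
    p = K_dst @ Xt
    return p[0] / p[2], p[1] / p[2]

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])

# Identity pose: the pixel must map onto itself.
u2, v2 = dibr_warp_pixel(100, 80, 5.0, K, K, np.eye(3), np.zeros(3))

# A 0.5 m sideways baseline shifts the pixel horizontally by f*b/z = 50 px.
u3, v3 = dibr_warp_pixel(100, 80, 5.0, K, K, np.eye(3),
                         np.array([-0.5, 0, 0]))
```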
  • the rendering is not limited to utilizing a limited number of cameras from a limited number of multi-camera units. Such rendering may utilize the information of one or more cameras from the same multi-camera unit, or several cameras from more than one multi-camera unit. The selection of cameras and multi-camera units depends on the viewing direction of camera 132 of the first multi-camera unit, the location and direction of the occluding multi-camera unit 140, and the location and direction of cameras from other available multi-camera units.
  • a volumetric three-dimensional scene representation generated by multiple multi-camera units 100 may be used to identify the occluding multi-camera units 100 and possible connected peripherals (tripods, cables, etc.).
  • the information from the blocking multi-camera unit 100 can be utilized to fill in the occluded volume.
  • the blocking multi-camera unit 100 related voxels may then be erased from the three- dimensional scene volumetric model.
  • a final back projection operation to the occluded camera 102 of the multi-camera unit 100 provides the viewport without the occluding multi-camera unit 100.
  • the blocking multi-camera unit 100 is removed from the scene after its lens views are utilized to fill in the occluded regions of the blocked multi-camera unit 100.
  • the following algorithm may be run to find the blocking multi-camera unit 140 and remove it from the scene. The individual multi-camera units' characteristics and the locations of the multi-camera units 130, 140 may be known (block 702).
  • a three-dimensional reconstruction from the multiple multi-camera units 130, 140 is generated (block 704 in Figure 7), resulting in three-dimensional geometry comprising modeling primitives such as points, polygons, or voxels.
  • in the following, voxels are used as an example, but any of the other three-dimensional primitives can also be used.
  • the content required to perform the three-dimensional reconstruction process is not limited to the images captured by the first multi-camera unit 130 and the second multi-camera unit 140.
  • the above described reconstruction process may also be performed based on more than two multi-camera units.
  • the content captured from a third multi-camera unit 150 or other available multi-camera units may be used in the said reconstruction process in addition to or instead of the content captured by the second multi-camera unit 140.
  • content captured by both the second multi-camera unit 140 and the third multi-camera unit 150 may be combined to fill the occluded area of the first multi-camera unit 130.
  • the selection of multi-camera units to be used in the said reconstruction process depends on the location and orientation of the camera 132 in the first multi-camera unit 130 (defining the viewing direction of camera 132 of the first multi-camera unit 130) and also the location and orientation of any other cameras in the available multi-camera units.
  • the location of the blocking multi-camera unit 140 may be determined (block 706) by making use of the camera pose and initial camera registration data. It is assumed here that each multi-camera unit 130, 140, 150 knows its substantially exact position in space, and this position is communicated between all available multi-camera units 130, 140, 150.
  • When the blocking multi-camera unit 140 has been determined, it is selected (block 708) and voxel elements connected with it may be extended (block 710) until a ground plane (illustrated with 506 in Figure 5c) or a connected non-camera related peripheral is reached (block 712).
  • An example of a camera-related peripheral is a tripod 502 on the ground plane 506 on which the blocking multi-camera unit 140 may be positioned.
  • Another example of the camera-related peripheral is a rod on another concrete surface (not shown) to which the blocking multi-camera unit 140 may be attached.
  • non-camera related peripheral means an object which does not belong to the multi-camera unit's setup but rather may be a part of the scene to be imaged.
  • the selected volume may be deleted from the three-dimensional scene (block 714).
  • the regions removed from the non-peripheral areas of the scene may be inpainted (block 716).
  • the volume may be projected back to the blocked multi-camera unit's camera views (block 718).
  • a panoramic video frame may be created where the picture of the blocking multi-camera unit 140 is removed (block 720) and the created panoramic video (block 722) may be stored and/or provided to further processing.
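Blocks 708-714 above (select the blocking unit's voxels, extend them down to the ground plane, erase them from the volume) can be sketched as a flood fill on a boolean occupancy grid. This is a toy reconstruction of the idea, not the patent's implementation; the grid, the seed voxel, and the ground-plane convention are assumptions:

```python
import numpy as np
from collections import deque

def remove_blocking_unit(occupied: np.ndarray, seed, ground_z=0):
    """Flood-fill the voxels 6-connected to the seed voxel of the blocking
    unit, extending until the ground plane (z == ground_z) is reached, then
    erase them from the volume. `occupied` is a boolean occupancy grid."""
    out = occupied.copy()
    q, selected = deque([seed]), {seed}
    while q:
        x, y, z = q.popleft()
        for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            n = (x + dx, y + dy, z + dz)
            if (n not in selected
                    and all(0 <= c < s for c, s in zip(n, out.shape))
                    and out[n] and n[2] > ground_z):   # stop at ground plane
                selected.add(n)
                q.append(n)
    for v in selected:                                  # erase (block 714)
        out[v] = False
    return out

# 5x5x5 scene: a vertical "camera unit on a tripod" column at (2, 2, 1..4)
# standing on a ground plane at z == 0.
grid = np.zeros((5, 5, 5), dtype=bool)
grid[:, :, 0] = True          # ground plane voxels
grid[2, 2, 1:5] = True        # blocking unit + tripod voxels
cleaned = remove_blocking_unit(grid, seed=(2, 2, 4))
```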
  • the three-dimensional reconstruction step can be performed by applying multi- view geometry and photogrammetry techniques, for example.
  • the inpainting may be performed in a two-dimensional scene instead of the above-mentioned three-dimensional scene. In this option, the inpainting may be performed on the two-dimensional back-projection.
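As a two-dimensional counterpart, the inpainting of a removed region can be sketched with a simple diffusion fill: masked pixels are repeatedly replaced with the mean of their four neighbours until the hole is smooth. This is a generic inpainting sketch under stated assumptions, not the specific algorithm used by the patent:

```python
import numpy as np

def inpaint_mean(img: np.ndarray, mask: np.ndarray, iters: int = 50):
    """Tiny diffusion inpainting: iteratively replace each masked pixel
    with the mean of its 4 neighbours."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()            # crude initialisation
    for _ in range(iters):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
              + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = nb[mask]                 # only masked pixels change
    return out

# A flat 100-valued image with one missing pixel in the middle.
img = np.full((5, 5), 100.0)
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
img[2, 2] = 0.0
filled = inpaint_mean(img, mask)
```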
  • Images processed by the system 800 and/or the multi-camera units 100 may be still images, a stream of still images, images of a video, etc.
  • Figure 9 shows a schematic block diagram of an exemplary apparatus or electronic device 50 depicted in Figure 10, which may incorporate a transmitter according to an embodiment of the invention.
  • the electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system.
  • the apparatus 50 may comprise a housing 30 for incorporating and protecting the device.
  • the apparatus 50 further may comprise a display 32 in the form of a liquid crystal display.
  • the display may be any suitable display technology suitable to display an image or video.
  • the apparatus 50 may further comprise a keypad 34.
  • any suitable data or user interface mechanism may be employed.
  • the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display.
  • the apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input.
  • the apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection.
  • the apparatus 50 may also comprise a battery 40 (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator).
  • the term battery discussed in connection with the embodiments may also be one of these mobile energy devices.
  • the apparatus 50 may comprise a combination of different kinds of energy devices, for example a rechargeable battery and a solar cell.
  • the apparatus may further comprise an infrared port 41 for short range line of sight communication to other devices.
  • the apparatus 50 may further comprise any suitable short range communication solution such as for example a Bluetooth wireless connection or a USB/FireWire wired connection.
  • the apparatus 50 may comprise a controller 56 or processor for controlling the apparatus 50.
  • the controller 56 may be connected to memory 58 which in embodiments of the invention may store both data and instructions for implementation on the controller 56.
  • the controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of audio and/or video data or assisting in coding and decoding carried out by the controller 56.
  • the apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a universal integrated circuit card (UICC) reader and a universal integrated circuit card, for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.
  • the apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals for example for communication with a cellular communications network, a wireless communications system or a wireless local area network.
  • the apparatus 50 may further comprise an antenna 60 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).
  • the apparatus 50 comprises a camera 42 capable of recording or detecting images.
  • the system 10 comprises multiple communication devices which can communicate through one or more networks.
  • the system 10 may comprise any combination of wired and/or wireless networks including, but not limited to, a wireless cellular telephone network (such as a global systems for mobile communications (GSM), universal mobile telecommunications system (UMTS), long term evolution (LTE) or code division multiple access (CDMA) network), a wireless local area network (WLAN) such as defined by any of the IEEE 802.x standards, a Bluetooth personal area network, an Ethernet local area network, a token ring local area network, a wide area network, and the Internet.
  • Connectivity to the internet 28 may include, but is not limited to, long range wireless connections, short range wireless connections, and various wired connections including, but not limited to, telephone lines, cable lines, power lines, and similar communication pathways.
  • the example communication devices shown in the system 10 may include, but are not limited to, an electronic device or apparatus 50, a combination of a personal digital assistant (PDA) and a mobile telephone 14, a PDA 16, an integrated messaging device (IMD) 18, a desktop computer 20, a notebook computer 22, a tablet computer.
  • the apparatus 50 may be stationary or mobile when carried by an individual who is moving.
  • the apparatus 50 may also be located in a mode of transport including, but not limited to, a car, a truck, a taxi, a bus, a train, a boat, an airplane, a bicycle, a motorcycle or any similar suitable mode of transport.
  • Some or further apparatus may send and receive calls and messages and communicate with service providers through a wireless connection 25 to a base station 24.
  • the base station 24 may be connected to a network server 26 that allows communication between the mobile telephone network 11 and the internet 28.
  • the system may include additional communication devices and communication devices of various types.
  • the communication devices may communicate using various transmission technologies including, but not limited to, code division multiple access (CDMA), global systems for mobile communications (GSM), universal mobile telecommunications system (UMTS), time divisional multiple access (TDMA), frequency division multiple access (FDMA), transmission control protocol-internet protocol (TCP-IP), short messaging service (SMS), multimedia messaging service (MMS), instant messaging service (IMS), Bluetooth, IEEE 802.11, Long Term Evolution wireless communication technique (LTE) and any similar wireless communication technology.
  • a communications device involved in implementing various embodiments of the present invention may communicate using various media including, but not limited to, radio, infrared, laser, cable connections, and any suitable connection. In the following some example implementations of apparatuses utilizing the present invention will be described in more detail.
  • although the examples above describe embodiments of the invention operating within a wireless communication device, the invention as described above may be implemented as a part of any apparatus comprising a circuitry in which radio frequency signals are transmitted and received.
  • embodiments of the invention may be implemented in a mobile phone, in a base station, in a computer such as a desktop computer or a tablet computer comprising radio frequency communication means (e.g. wireless local area network, cellular radio, etc.).
  • radio frequency communication means e.g. wireless local area network, cellular radio, etc.
  • the various embodiments of the invention may be implemented in hardware or special purpose circuits or any combination thereof. While various aspects of the invention may be illustrated and described as block diagrams or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non- limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • Embodiments of the inventions may be practiced in various components such as integrated circuit modules.
  • the design of integrated circuits is by and large a highly automated process.
  • Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
  • replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.
  • the method comprises utilizing the scene captured from at least a third available camera unit, wherein the third camera unit captures the same scene as the first multi-camera unit and the second multi-camera unit.
  • the scene captured by at least a third available camera unit is a scene captured by a third multi-camera unit capturing three-dimensional images.
  • the scene captured by at least a third available camera unit is a scene captured by a camera capturing two-dimensional images.
  • the method comprises:
  • replacing the remaining area covered by the second multi-camera unit by interpolating pixels of one or more images captured by the second multi-camera unit and pixels of one or more images captured by the first multi-camera unit.
  • an apparatus comprising at least one processor; and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
  • receive at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by at least the first multi-camera unit;
  • said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:
  • the third multi-camera unit capturing the same scene as the first multi-camera unit and the second multi-camera unit.
  • said at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to:
  • the image captured by the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit.
  • an apparatus comprising:
  • replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.
  • the scene captured by at least a third available camera unit is a scene captured by a third multi-camera unit capturing three-dimensional images.
  • the scene captured by at least a third available camera unit is a scene captured by a camera capturing two-dimensional images.
  • the apparatus comprises:
  • a computer readable storage medium stored with code thereon for use by an apparatus, which when executed by a processor, causes the apparatus to perform:
  • receive at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, wherein the first multi-camera unit and the second multi-camera unit capture images of the same scene and the second multi-camera unit is at least partially visible in the content captured by the first multi-camera unit;
  • replacing comprises utilizing information of mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.
  • the computer readable storage medium is stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:
  • the third camera unit capturing the same scene as the first multi-camera unit and the second multi-camera unit.
  • the scene captured by at least a third available camera unit is a scene captured by a third multi-camera unit capturing three-dimensional images.
  • the scene captured by at least a third available camera unit is a scene captured by a camera capturing two-dimensional images.
  • the computer readable storage medium is stored with code thereon for use by the apparatus, which when executed by a processor, causes the apparatus to perform:
  • the image captured by the second multi-camera unit comprises at least a part of the view blocked by the second multi-camera unit in the content captured by the first multi-camera unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

This disclosure relates to various methods, apparatuses and computer program products for a multi-camera unit. In some embodiments, the method comprises receiving at least two streams of images of a 360 degree view captured by at least a first multi-camera unit and a second multi-camera unit, the first multi-camera unit and the second multi-camera unit capturing images of the same scene and the second multi-camera unit being at least partially visible in the content captured by the first multi-camera unit; and replacing the presence of the second multi-camera unit in the scene captured by the first multi-camera unit on the basis of content captured by at least the first multi-camera unit and the second multi-camera unit. The replacing comprises utilizing information of the mutual location and orientation of cameras of the first multi-camera unit and the second multi-camera unit and information of at least one of the location and orientation of at least two multi-camera units.
PCT/FI2018/050047 2017-03-03 2018-01-23 Procédé et appareil pour une unité de caméras multiples WO2018158494A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1703415.8A GB2560185A (en) 2017-03-03 2017-03-03 Method and apparatus for a multi-camera unit
GB1703415.8 2017-03-03

Publications (1)

Publication Number Publication Date
WO2018158494A1 true WO2018158494A1 (fr) 2018-09-07

Family

ID=58543798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2018/050047 WO2018158494A1 (fr) 2017-03-03 2018-01-23 Procédé et appareil pour une unité de caméras multiples

Country Status (2)

Country Link
GB (1) GB2560185A (fr)
WO (1) WO2018158494A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071724A (zh) * 2023-03-03 2023-05-05 安徽蔚来智驾科技有限公司 车载相机遮挡场景识别方法、电子设备、存储介质及车辆

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060056056A1 (en) * 2004-07-19 2006-03-16 Grandeye Ltd. Automatically expanding the zoom capability of a wide-angle video camera
US20070014347A1 (en) * 2005-04-07 2007-01-18 Prechtl Eric F Stereoscopic wide field of view imaging system
US20140285634A1 (en) * 2013-03-15 2014-09-25 Digimarc Corporation Cooperative photography
EP2884460A1 (fr) * 2013-12-13 2015-06-17 Panasonic Intellectual Property Management Co., Ltd. Appareil de capture d'images, système de surveillance, appareil de traitement d'images, procédé de capture d'images et support d'enregistrement lisible sur ordinateur non transitoire
WO2016191464A1 (fr) * 2015-05-27 2016-12-01 Google Inc. Capture et rendu en omnistéréo d'un contenu de réalité virtuelle panoramique

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3183875B1 (fr) * 2014-08-18 2020-04-15 Jaguar Land Rover Limited Système et procédé d'affichage

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060056056A1 (en) * 2004-07-19 2006-03-16 Grandeye Ltd. Automatically expanding the zoom capability of a wide-angle video camera
US20070014347A1 (en) * 2005-04-07 2007-01-18 Prechtl Eric F Stereoscopic wide field of view imaging system
US20140285634A1 (en) * 2013-03-15 2014-09-25 Digimarc Corporation Cooperative photography
EP2884460A1 (fr) * 2013-12-13 2015-06-17 Panasonic Intellectual Property Management Co., Ltd. Appareil de capture d'images, système de surveillance, appareil de traitement d'images, procédé de capture d'images et support d'enregistrement lisible sur ordinateur non transitoire
WO2016191464A1 (fr) * 2015-05-27 2016-12-01 Google Inc. Capture et rendu en omnistéréo d'un contenu de réalité virtuelle panoramique

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071724A (zh) * 2023-03-03 2023-05-05 安徽蔚来智驾科技有限公司 车载相机遮挡场景识别方法、电子设备、存储介质及车辆
CN116071724B (zh) * 2023-03-03 2023-08-04 安徽蔚来智驾科技有限公司 车载相机遮挡场景识别方法、电子设备、存储介质及车辆

Also Published As

Publication number Publication date
GB2560185A (en) 2018-09-05
GB201703415D0 (en) 2017-04-19

Similar Documents

Publication Publication Date Title
US11430156B2 (en) Apparatus, a method and a computer program for volumetric video
EP2328125B1 (fr) Procédé et dispositif de raccordement d'images
JP6158929B2 (ja) 画像処理装置、方法及びコンピュータプログラム
Bertel et al. Megaparallax: Casual 360 panoramas with motion parallax
US9973694B1 (en) Image stitching to form a three dimensional panoramic image
EP3759925A1 (fr) Appareil, procédé et programme informatique pour vidéo volumétrique
JP2017532847A (ja) 立体録画及び再生
KR20170005009A (ko) 3d 라돈 이미지의 생성 및 사용
US20180182178A1 (en) Geometric warping of a stereograph by positional contraints
JP2022174085A (ja) 場面の積層深度データを生成するための方法
US10616548B2 (en) Method and apparatus for processing video information
JP2018033107A (ja) 動画の配信装置及び配信方法
WO2019008222A1 (fr) Procédé et appareil de codage de contenu multimédia
WO2018158494A1 (fr) Procédé et appareil pour une unité de caméras multiples
KR20220035229A (ko) 볼류메트릭 비디오 콘텐츠를 전달하기 위한 방법 및 장치
US20230326128A1 (en) Techniques for processing multiplane images
US11528469B2 (en) Apparatus, a method and a computer program for viewing volume signalling for volumetric video
Gurrieri et al. Stereoscopic cameras for the real-time acquisition of panoramic 3D images and videos
GB2601597A (en) Method and system of image processing of omnidirectional images with a viewpoint shift
WO2018211171A1 (fr) Appareil, procédé et programme d'ordinateur pour le codage et le décodage vidéo
WO2019008233A1 (fr) Méthode et appareil d'encodage de contenu multimédia
WO2019034803A1 (fr) Procédé et appareil de traitement d'informations vidéo
TW201911239A (zh) 立體環景影片產生方法及裝置
US20220345681A1 (en) Method and apparatus for encoding, transmitting and decoding volumetric video
CN115104121A (zh) 用于处理图像内容的方法和设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18761149

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18761149

Country of ref document: EP

Kind code of ref document: A1