EP3295368A1 - Deepstereo: learning to predict new views from real world imagery - Google Patents

Deepstereo: learning to predict new views from real world imagery

Info

Publication number
EP3295368A1
Authority
EP
European Patent Office
Prior art keywords
scene
view
depth
requested
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP16726706.1A
Other languages
German (de)
French (fr)
Inventor
John Flynn
Keith SNAVELY
Ivan NEULANDER
James PHILBIN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Publication of EP3295368A1 publication Critical patent/EP3295368A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • This document relates, generally, to deep networks and deep learning associated with images.
  • Deep networks and deep learning may be applied to generate and improve models and representations from large-scale data. Deep learning may be considered part of a broader application of machine learning. Deep learning may be based on unsupervised learning of feature representations in, for example, computer vision, garnered from multiple levels of processing/computing devices, these multiple levels of processing/computing devices forming a hierarchy from low-level to high-level features. The composition and arrangement of these multiple layers may be developed based on, for example, a particular problem to be solved.
  • a method may include accessing a plurality of posed image sets from a database, the plurality of posed image sets respectively corresponding to a plurality of scenes, each of the plurality of posed image sets including a plurality of views of a corresponding scene of the plurality of scenes, generating a requested view of a scene based on views selected from the plurality of views of the scene included in the posed image set corresponding to the scene in accordance with an automated view generating algorithm, wherein the requested view of the scene is not included in the plurality of views of the scene included in the corresponding posed image set, comparing the view of the scene generated by the automated view generating algorithm to a known view of the scene, and updating the view generating algorithm based on the comparison.
  • a method may include receiving a request for a view of a scene to be rendered, accessing, from a database, a plurality of stored posed images respectively representing a plurality of views of the scene, selecting a plurality of images from the plurality of stored posed images, the selected plurality of images representing views of the scene neighboring the requested view of the scene, reprojecting depth slices of each of the selected plurality of images at a plurality of depths, determining a depth for the requested view of the scene and determining a color for each pixel of the requested view of the scene at the determined depth based on pixels at the reprojected depth slices, and generating the requested view of the scene.
  • a method may include receiving a request for a view of a scene, retrieving, from a database storing a plurality of posed image sets, each of the plurality of posed image sets including a plurality of views of a corresponding scene, a posed image set corresponding to the requested view of the scene, and generating the requested view of the scene based on selected views from the plurality of views of the scene included in the corresponding posed image set, the requested view not included in the plurality of views of the scene of the corresponding posed image set.
  • a system for generating a view of a scene may include a network.
  • the network may include a computing device including a processor, the computing device being in communication with a database, the database storing a plurality of posed image sets respectively corresponding to a plurality of scenes, each of the plurality of posed image sets including a plurality of views of a corresponding scene of the plurality of scenes, a selection tower configured to determine a depth for each output pixel in a requested output image, the requested output image corresponding to a requested view of a scene, and a color tower configured to generate a color for each output pixel of the requested output image.
  • the selection tower and the color tower may be configured to receive selected views from the plurality of views of the scene included in the posed image set corresponding to the requested view of the scene, the requested view of the scene not being included in the plurality of views of the scene of the corresponding posed image set.
  • the selection tower and the color tower may be configured to generate the requested output image for processing by the processor of the computing device to generate the requested view of the scene.
  • FIGs. 1A and 1B illustrate new views rendered from existing images, in accordance with implementations described herein.
  • FIG. 2 illustrates a plane sweep stereo re-projection of various images from various views to a new target camera field of view at a range of depths, in accordance with implementations described herein.
  • FIGs. 3 and 4 illustrate a network including a selection tower and a color tower, in accordance with implementations described herein.
  • FIG. 5 illustrates a selection tower, configured to learn to produce a selection probability for each pixel in each depth plane, in accordance with implementations described herein.
  • FIG. 6 illustrates a color tower, configured to learn to combine and warp pixels across sources to produce a color for a particular pixel at a plurality of depth planes, in accordance with implementations described herein.
  • FIG. 7 illustrates two different examples of reprojected images produced by a system and method, in accordance with implementations described herein.
  • FIG. 8 illustrates an example of a computing device and a mobile computing device that can be used to implement the techniques described herein.
  • Deep networks may be applied to various different types of recognition and classification problems.
  • deep networks may be applied to recognition and classification problems in computer vision and graphics using, for example, a deep architecture that learns to perform new view synthesis directly from a large number, such as, for example, millions, of posed image sets, for example, sets of images having known characteristics, such as, for example, perspective and/or color and/or depth.
  • a deep network may be trained from this relatively large number of posed image sets, and then, in operation, may rely on a relatively small number of pertinent images to generate a new, previously unseen view based on the training conducted with the large number of posed image sets.
  • a system and method in accordance with implementations described herein may be trained end-to-end, by, for example, presenting images, for example pixels of images, of neighboring views of a scene to the network to directly produce images of an unseen arbitrary/requested view of the scene.
  • the requested view may be, for example, a view of a subject, or subject area, from a particular perspective and/or point of view and/or depth.
  • the neighboring images may be, for example, images available from the posed image sets adjacent to the requested view.
  • the system may be trained by presenting a set of input images of various views of a scene, requesting a view not included in the set of input images, and then presenting the network with the image that should be produced in response to the request.
  • one or more sets of posed images may be used as training set(s) by removing one image and setting it aside, and then reproducing the removed image from the remaining images. Relatively large amounts of data, mined from a relatively expansive database, may be used to train models, in accordance with implementations described herein.
  • An example of this type of database is a street view mapping database, which contains a massive collection of posed imagery spanning much of the globe.
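  • As a non-authoritative illustration of this leave-one-out training idea, a single training step might be sketched as follows; the model interface (predict_view, compare, update) is hypothetical and simply stands in for the view generating algorithm and its update rule described above.

```python
import random

def training_step(model, posed_image_set):
    # Withhold one posed image, predict it from the remaining views, and use
    # the discrepancy to update the view generating algorithm.
    held_out = random.randrange(len(posed_image_set))
    target = posed_image_set[held_out]                     # the known (withheld) view
    inputs = [im for k, im in enumerate(posed_image_set) if k != held_out]
    predicted = model.predict_view(inputs, target.pose)    # synthesize the withheld view
    loss = model.compare(predicted, target.image)          # e.g., a per-pixel error
    model.update(loss)                                     # adjust the algorithm's parameters
    return loss
```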
  • Although this example may be referred to hereinafter, where appropriate for ease of explanation and illustration, the principles discussed herein may be applied to other implementations making use of other such expansive collections of data.
  • This end-to-end approach may allow for a certain level of generality, relying on posed image sets and applying to different domains, and filling in unseen pixels based on various features such as color, depth and texture learned from previous data.
  • This approach may reduce the occurrence of artifacts, due to, for example, occlusions and ambiguity, when generating view interpolation and view-synthesized omni-directional stereo imagery, and may produce a relatively high quality result for a relatively difficult scene to synthesize.
  • Estimating a three-dimensional (3D) shape from multiple posed images may be a fundamental task in generating 3D representations of scenes that can be rendered and edited.
  • novel view synthesis may be performed by a form of image based rendering (IBR) where a new view of a scene is synthesized by warping and combining images from nearby posed images of the scene to synthesize the new view of the scene.
  • IBR image based rendering
  • This approach may be applied to, for example, augmented and/or virtual reality systems, teleconferencing systems, 3D monocular film systems, cinematography, image stabilization, and other such implementations.
  • a system and method in accordance with implementations described herein may leverage deep networks to directly learn to synthesize new views from posed images.
  • deep networks may be used to regress directly to output pixel colors given posed input images.
  • This system may interpolate between views separated by a wide baseline due, for example, to the end-to-end nature of the training, and the ability of deep networks to learn extremely complex non-linear functions related to the inputs.
  • minimal assumptions about the scene being rendered may include, for example, a relatively static nature of the scene within a finite range of depths. Even outside of these parameters, the resulting images may degrade relatively gracefully and may remain visually plausible.
  • a system and method making use of the deep architecture as described herein may also apply to other stereo and graphics problems given suitable training data.
  • a new view CV of a given scene A such as the house shown in FIG. 1A
  • a new view CV of a given scene B such as the outdoor scene shown in FIG. 1B
  • Application of deep networks to image understanding and interpretation may, in some implementations, provide a basis for applying deep learning to graphics generation, and in particular, to depth and stereo associated with particular graphics applications. Further, improved performance may be achieved by using many layers of small convolutions and pooling, to address the large number of parameters associated with deep networks which may cause these systems to otherwise be prone to over-fitting in the absence of large quantities of data.
  • a system and method, in accordance with implementations described herein, may make minimal assumptions about the scene being rendered as discussed above, such as, for example, that the scene is static and that the scene exists within a finite range of depths.
  • a model may generalize beyond the training data to novel imagery, including, for example, image collections used in prior work.
  • As shown in FIG. 2, given a set of posed input images I1 through In, a new image may be rendered from the viewpoint of a new target camera CT, the new target camera CT representing a capture field of view, or new viewpoint, associated with the new scene to be generated from the posed input images I1 through In.
  • plane sweep stereo reprojects the posed input images I1 and I2 from viewpoints V1 and V2, respectively, to the target camera CT at numerous different depths, for example, from a minimum depth d1 to a maximum depth dD, with one or more reprojected images of the posed input images I1 and I2 taken at one or more intermediate depths dk.
  • reprojected images R1 through Rn of the posed input image I1 may be generated at depths ranging from d1 to dD
  • reprojected images S1 through Sn of the posed input image I2 may be generated at depths ranging from d1 to dD.
  • the dotted lines may indicate pixels from the input images I1 and I2
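  • As a non-authoritative sketch of this reprojection step, one posed source image can be warped onto a single fronto-parallel depth plane of the target camera using the standard plane-induced homography; the camera notation (K_s, K_t, R, t) and the use of OpenCV are assumptions of this sketch, not details taken from the patent.

```python
import cv2
import numpy as np

def reproject_at_depth(source_image, K_s, K_t, R, t, depth):
    # Plane-induced homography mapping target pixels to source pixels for the
    # fronto-parallel plane z = depth in the target camera frame; R, t take
    # target-camera coordinates to source-camera coordinates.
    n = np.array([[0.0, 0.0, 1.0]])  # plane normal in the target frame
    H = K_s @ (R + (t.reshape(3, 1) @ n) / depth) @ np.linalg.inv(K_t)
    h, w = source_image.shape[:2]    # target resolution assumed equal to the source here
    # WARP_INVERSE_MAP: each target pixel samples the source image at H * pixel.
    return cv2.warpPerspective(source_image, H, (w, h),
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

# A plane sweep volume for one source image is then the stack of such
# reprojections over a range of candidate depths, e.g.:
# volume = np.stack([reproject_at_depth(img, K_s, K_t, R, t, d) for d in depths])
```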
  • One approach to rendering the new image may be to naively train a deep network to synthesize new views by directly supplying input images to the network.
  • the pose parameters of the new viewpoint (the new view CV to be generated, corresponding to the new target camera CT field of view) may be supplied as inputs to the network to produce the desired view/image. Due to the complex, non-linear relationship of the pose parameters, the input images and the output image, this also involves the network learning how to interpret rotation angles and how to perform image re-projection, which may be an inefficient use of network resources.
  • the network may compare and combine potentially distant pixels in the original input images to synthesize a new view, driving a need for relatively dense, long range connections in the network, causing the network to be slow to train and prone to over-fitting, particularly when dealing with large scale data.
  • this could be addressed by a network structure configured to apply an internal epipolar constraint to limit connections to those on corresponding epipolar lines.
  • this type of structure may be pose-dependent and computationally inefficient, particularly when dealing with large scale data.
  • the series of input images I1 through In from viewpoints V1 through Vn may be re-projected at different distances between d1 and dD with respect to the viewpoint, or new target camera CT, from which a new/requested view CV is to be rendered, rendering slices at these different re-projection distances.
  • the system may sweep through the range of depths (for example, in some implementations, 64 to 96 different depths), from a position that is relatively close to the new target camera CT (for example, approximately 2 meters from the new target camera CT in the case of a street view) to a position that is relatively far from the new target camera CT (for example, approximately 30 meters, or more, from the new target camera CT in the case of a street view), at given intervals.
  • the system may step through the range of depths at an inverse distance, or 1/distance, for example, at a minimum interval of 1/(maximum distance dD), or at another interval based on another distance between the minimum distance d1 and the maximum distance dD.
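  • A small helper illustrating depth candidates spaced uniformly in inverse distance between a near plane and a far plane, consistent with the stepping described above, is shown below; the bounds shown are the example street-view values and are otherwise arbitrary.

```python
import numpy as np

def inverse_depth_planes(d_near=2.0, d_far=30.0, num_planes=64):
    # Candidate depths spaced uniformly in 1/distance between the near and far planes.
    inv = np.linspace(1.0 / d_near, 1.0 / d_far, num_planes)
    return 1.0 / inv

depths = inverse_depth_planes()  # densest near the camera, sparser toward d_far
```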
  • a re-projected depth slice for each input view may be generated, with numerous images for each re-projected depth slice available (for example, 4-5 input views, each with 4-5 images per depth slice). For example, pixels of a left re-projected image and pixels of a right re-projected image may (respectively) align at the correct depth.
  • the re-projected depth slices may be combined to produce the best color for that depth slice, and to determine the correct depth for a particular pixel/image.
  • the network may determine the correct depth and the correct depth slice for a particular pixel, and then the correct color. These elements may be learned together. The network may then determine, or learn, the probability that a particular pixel is at a particular depth slice, multiply that probability by the computed color, and sum this to produce a final image.
  • a network may include a set of 3D plane sweep volumes as input.
  • Re-projecting an input image into the target camera CT may be done using, for example, texture mapping, and may be performed by a graphics processing unit (GPU).
  • a separate plane sweep volume Vk^C may be generated for each input image.
  • Each voxel v(x,y,z) in each plane sweep volume Vk^C may have R, G, B and A (alpha) components, the alpha channel indicating the availability of source pixels for that particular voxel.
  • the pose parameters may be implicit inputs used in the construction of the plane sweep volume, rather than separately supplied inputs.
  • the epipolar constraint may be trivially enforced within a plane sweep volume, as corresponding pixels are now arranged in corresponding columns of the plane sweep volume. Thus, long range connections between pixels/images are not needed, and a given output pixel/image depends only on a small column of voxels from each of the per-source plane sweep volumes.
  • a convolutional neural network which may involve fewer parameters than fully connected networks and thus may be faster to train, may be used. It may also be faster to run inference on a convolutional neural network, and computation from earlier layers may be shared.
  • a model may apply two-dimensional (2D) convolutional layers to each plane within the input plane sweep volume.
  • the model may also take advantage of weight sharing across planes in the plane sweep volume, thus allowing the computation performed on each plane to be independent of the depth of the plane.
  • a model may include two towers of layers, where the input to each tower is the set of plane sweep volumes Vk^C.
  • This dual architecture may allow for both depth prediction and color prediction.
  • for depth prediction, an approximate depth for each pixel in the output image may be determined, in order to in turn determine the source image pixels to be used to generate the output pixel. This probability over depth may be computed using training data, rather than a sum of squared distances (SSD) approach, a normalized cross correlation (NCC) approach, or a variance approach.
  • SSD sum of squared distances
  • NCC normalized cross correlation
  • for color prediction, a color for the output pixel, given all of the relevant source image pixels, may be produced by optimally combining the source pixels using training data, rather than just performing simple averaging.
  • the two towers shown in FIG. 3 may accomplish the tasks of depth prediction and color prediction.
  • the selection tower may generate a probability map (or selection map) for each depth indicating a likelihood of each pixel having a particular depth.
  • the color tower may generate a full color output image for each depth. For example, the color tower may produce the best color possible for each depth, assuming that the depth is correct for that particular pixel.
  • the color images may then be combined as a per-pixel weighted sum, with weights drawn from the selection maps. That is, the selection maps may be used to determine the best color layers to use for each output pixel.
  • This approach to view synthesis may allow the system to learn all of the parameters of both the selection tower and the color tower simultaneously, end-to-end using deep learning methods.
  • the weighted averaging across color layers may yield some resilience to uncertainty, particularly in regions where the algorithm is less confident.
  • the first layer of each tower may concatenate the input plane sweep volumes over the sources, allowing the networks to compare and combine re-projected pixel values across sources.
  • the selection tower as shown in FIGs. 3 and 4 may compute, for each pixel p(i,j), in each plane Pd, a selection probability s(i,j,d) for the pixel at that plane, as shown in FIG. 4.
  • the color tower as shown in FIGs. 3 and 4 may compute, for each pixel p(i,j), in each plane Pd, a color c(i,j,d) for the pixel at that plane, as shown in FIG. 5.
  • the final output color for each pixel may be computed by summing over the depth planes, as shown in equation (1).
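  • For illustration, a selection-weighted sum over depth planes of the following assumed form would be consistent with the combination described above; the notation is illustrative and may differ from that of equation (1).

```latex
% Assumed form of the per-pixel combination described above.
c^{f}_{i,j} = \sum_{d} s_{i,j,d} \, c_{i,j,d}, \qquad \text{with} \quad \sum_{d} s_{i,j,d} = 1
```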
  • the input to each tower may include the set of plane sweep volumes Vk^C (including all of the reprojected images, N×D in total over all volumes, where N is the number of source images, and D is the number of depth planes).
  • the first layer of each tower may operate on each reprojected image Pk^i independently, allowing the system to learn low-level image features.
  • the feature maps corresponding to the N sources may be concatenated over each depth plane, and subsequent layers may operate on these per-depth-plane feature maps.
  • the final layers of the selection tower may also make use of connections across depth planes.
  • the selection tower may include two main stages.
  • the early layers may include, for example, a number of 2D convolutional rectified linear layers that share weights across all depth planes, and within a depth plane for the first layer. For example, based on previous learning, the early layers may compute features that are independent of depth, such as pixel differences, so their weights may be shared.
  • the final set of layers may be connected across depth planes, so that interactions between depth planes, such as those caused by occlusion (e.g., the network may learn to prefer closer planes having higher scores in cases of ambiguities in depth), may be modeled.
  • the final layer of the network may be a per pixel softmax normalization transformer over depth, as shown in FIG. 4.
  • the softmax transformer may cause the model to pick a single depth plane per pixel, while ensuring that the sum over all depth planes is 1.
  • using a tanh activation for the penultimate layer may yield more stable training than a linear layer.
  • a linear layer may shut down at certain depth planes and not recover due to, for example, relatively large gradients associated with the softmax layer.
  • the output of the selection tower may be a 3D volume of single-channel nodes s(i,j,d), as shown in equation (2).
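  • As a non-authoritative illustration of this two-stage structure, the following sketch (written here with PyTorch, which the patent does not specify) folds the depth dimension into the batch so that early 2D convolutional rectified linear layers share weights across depth planes, applies layers connected across depth planes, uses a tanh activation on the penultimate layer, and ends with a per-pixel softmax over depth; layer counts and sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectionTower(nn.Module):
    def __init__(self, num_sources=4, feat=32):
        super().__init__()
        # Stage 1: 2D convolutions shared across all depth planes (each plane
        # is processed independently, so the weights can be shared).
        self.per_plane = nn.Sequential(
            nn.Conv2d(num_sources * 3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # Stage 2: layers connected across depth planes, so that interactions
        # such as occlusion can be modeled; tanh on the penultimate layer.
        self.cross_depth = nn.Sequential(
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, volume):
        # volume: (B, num_sources * 3, D, H, W) plane sweep volume
        b, c, d, h, w = volume.shape
        x = volume.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        x = self.per_plane(x)                                  # per-plane features
        x = x.reshape(b, d, -1, h, w).permute(0, 2, 1, 3, 4)   # (B, feat, D, H, W)
        s = self.cross_depth(x)                                # (B, 1, D, H, W)
        return F.softmax(s, dim=2)  # per-pixel selection probabilities summing to 1 over depth
```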
  • the color tower may include, for example, 2D convolutional rectified linear layers that share weights across all depth planes.
  • the output of the color tower may be a 3D volume of nodes c(i,j,d). Each node in the output may have three channels, corresponding to R, G and B.
  • the output of the selection tower and the color tower may be multiplied together, by node, to produce the output image.
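  • A minimal NumPy sketch of this node-wise combination is shown below, under assumed array shapes: a per-pixel softmax over depth is applied to the selection output, multiplied with the color output, and summed over depth to form the output image.

```python
import numpy as np

def combine_towers(selection_logits, color_volume):
    # selection_logits: (D, H, W) raw selection tower output per depth plane
    # color_volume:     (D, H, W, 3) color tower output per depth plane
    e = np.exp(selection_logits - selection_logits.max(axis=0, keepdims=True))
    s = e / e.sum(axis=0, keepdims=True)                      # per-pixel softmax over depth
    return (s[..., np.newaxis] * color_volume).sum(axis=0)    # (H, W, 3) output image
```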
  • the resulting image may be compared with the known target image, or training image, using, for example, a per-pixel loss L1.
  • a total loss L may be determined in accordance with equation (3), where c^t(i,j) is the target color at the pixel (i,j).
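  • A total loss built from per-pixel absolute differences of the following assumed form would be consistent with the description above; the notation is illustrative and may differ from that of equation (3).

```latex
% Assumed form of the total loss referenced above.
L = \sum_{i,j} \Bigl| \, c^{t}_{i,j} - \sum_{d} s_{i,j,d} \, c_{i,j,d} \, \Bigr|
```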
  • the system may predict the output image patch-by-patch. Passing in a set of lower resolution versions of successively larger areas around the input patches may improve results by providing the network with more context, wherein an improved result means that the view which is predicted is more accurate.
  • the system may pass in four different resolutions. Each resolution may first be processed independently by several layers, and then be upsampled and concatenated before entering the final layers. The upsampling may make use of nearest neighbor interpolation. More detail of the complete network is shown in FIG. 6.
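  • As an illustrative sketch of this multi-resolution path, lower-resolution feature maps can be upsampled by nearest-neighbor interpolation and concatenated with the full-resolution features before the final layers; the channel counts and scale factor here are assumptions.

```python
import numpy as np

def upsample_nn(x, factor):
    # Nearest-neighbor upsampling of a (C, H, W) feature map by pixel repetition.
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

full_res = np.random.rand(16, 8, 8)      # features from the full-resolution patch
quarter_res = np.random.rand(16, 2, 2)   # features from a wider, lower-resolution context
merged = np.concatenate([full_res, upsample_nn(quarter_res, 4)], axis=0)  # (32, 8, 8)
```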
  • a relatively large amount of data may be mined from a relatively expansive database to train models.
  • the massive collection of posed imagery spanning much of the globe contained by a street view database may be used to train such a model.
  • images of street scenes captured by a moving vehicle may be used to train a network.
  • the images may be posed using, for example, odometry and structure from motion techniques.
  • the vehicle may intermittently capture clusters, or groupings, or rosettes, of images.
  • the vehicle may capture a rosette of images at each of a plurality of predetermined time stamps.
  • Each rosette may include a plurality of images in a predetermined arrangement captured from predetermined point(s) of view.
  • each rosette may include 8 images, or 15 images, or another number of images, depending on a type of image capture device, or camera, employed.
  • the rosettes, each including a plurality of images, may define a stream of input images.
  • training of the system may include a substantially continuously running sample generation pipeline, selecting and re-projecting random patches from the input images included in a relatively large number of rosettes. For example, in one example implementation, up to 100,000 rosettes may be included in the sample generation pipeline, and the network may be trained to produce 8 x 8 output patches from 26 x 26 input patches, as shown in FIG. 3. Patches from numerous images may be combined to generate mini-batches, having a predetermined size of, for example, 400. The network may then be trained using, for example, distributed gradient descent. Due to sample randomization, and the relatively large volume of training data available, duplicate use of any of the patches during training is highly unlikely in this example implementation.
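  • A simplified sketch of such a sample generation step is shown below; it crops a 26 x 26 input patch, takes the centered 8 x 8 target patch, and assembles mini-batches of a chosen size. The data structures and helper names are assumptions for illustration.

```python
import numpy as np

def sample_patch(plane_sweep_volume, target_image, in_size=26, out_size=8):
    # plane_sweep_volume: (D, H, W, C) reprojected inputs; target_image: (H, W, 3)
    _, H, W, _ = plane_sweep_volume.shape
    y = np.random.randint(0, H - in_size + 1)
    x = np.random.randint(0, W - in_size + 1)
    off = (in_size - out_size) // 2
    inputs = plane_sweep_volume[:, y:y + in_size, x:x + in_size]
    target = target_image[y + off:y + off + out_size, x + off:x + off + out_size]
    return inputs, target

def make_minibatch(volumes, targets, batch_size=400):
    # volumes / targets are parallel lists of plane sweep volumes and withheld views.
    idx = np.random.randint(0, len(volumes), size=batch_size)
    pairs = [sample_patch(volumes[i], targets[i]) for i in idx]
    xs, ys = zip(*pairs)
    return np.stack(xs), np.stack(ys)
```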
  • the first network was trained based on image data provided by an expansive street view database as described above.
  • the images included in the street view database were posed using a combination of odometry and other motion methods, with a vehicle mounted camera using a rolling shutter capturing a set of images, or rosettes, as described above, with different directions for each exposure.
  • the second network was trained using posed image sequences from a standard odometry dataset. In evaluating the performance of the first and second networks on the task of view interpolation, novel images were generated from the same viewpoint as known (but withheld) images.
  • each rosette of the street view database used to train the first network provides pixels in every direction, so the reprojected depth planes always have valid pixels.
  • for the standard dataset used to train the second network, some parts of the depth planes were not visible from all cameras. Since the model had not encountered missing pixels during training, the missing pixels generated some error in images generated by the second network, predominantly at the boundaries of the images.
  • a baseline IBR algorithm was implemented to compute depth using the four nearest input images to splat the pixels from the two nearest images into the target view, diffusing neighboring valid pixels to fill any small remaining holes.
  • the system and method in accordance with implementations described herein, outperformed the baseline IBR algorithm for all spacings.
  • the system and method, in accordance with implementations described herein also outperformed an optical flow algorithm applied to interpolate in-between images. As there is no notion of 3D pose when implementing this type of optical flow algorithm, the interpolated image is only approximately at the viewpoint of the withheld image.
  • the model implemented in a system and method, in accordance with implementations described herein may produce relatively high quality output images that may be difficult to distinguish from the original, actual imagery.
  • the model may process a variety of different types of challenging surfaces and textures, such as, for example, the trees and glass shown in FIG. 1B, with performance for specular surfaces degrading gracefully, and relatively unnoticeably.
  • Moving objects which may often be encountered during training, such as, for example, a flag waving in the wind in a scene, may be blurred in a fashion that evokes motion blur.
  • An example of network learning to produce these images is illustrated in FIG. 7.
  • FIG. 7 illustrates two different examples of reprojected images produced in this manner.
  • One image is of a table having a relatively smooth surface texture and the second image is of a tree having a more complex visual texture.
  • These images are reprojected at a single depth plane, and have been selected so that the cropped regions represented by these images have strong selection probability at that particular plane.
  • the reprojected input views are shown at the left portion of FIG. 7, the outputs of the selection layer and the color layer at the given depth plane are shown in the middle portion of FIG. 7, and a comparison to the average is shown in the right portion of FIG. 7.
  • the color layer may contribute more than simply averaging the input reprojected images. Rather, it may learn to warp and robustly combine the input to produce the color image for that depth plane. This may allow the system to produce depth planes that are separated by more than one pixel of disparity.
  • a deep network may be trained end-to-end to perform novel view synthesis, using only sets of posed imagery, to provide high quality, accurate, synthesized views from the sets of posed images.
  • a system and method for learning to predict may be implemented by deep learning, facilitated by deep networks, to generate and improve models and representations from large-scale data.
  • the data powering such deep networks may be garnered from multiple levels of processing/computing devices, these multiple levels of processing/computing devices forming a hierarchy from low-level to high-level features, based on a particular problem to be solved.
  • FIG. 8 provides an example of a generic electronic computing device 700 and a generic mobile electronic computing device 780, which may be included in a deep network.
  • Computing device 700 is intended to represent various forms of digital computers, such as laptop computers, convertible computers, tablet computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
  • Computing device 780 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices.
  • the components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
  • Computing device 700 includes a processor 702, memory 704, a storage device 706, a high-speed interface 708 connecting to memory 704 and high-speed expansion ports 710, and a low speed interface 712 connecting to low speed bus 714 and storage device 706.
  • Each of the components 702, 704, 706, 708, 710, and 712, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.
  • the processor 702 can process instructions for execution within the computing device 700, including instructions stored in the memory 704 or on the storage device 706 to display graphical information for a GUI on an external input/output device, such as display 716 coupled to high speed interface 708.
  • multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory.
  • multiple computing devices 700 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
  • the memory 704 stores information within the computing device 700.
  • the memory 704 is a volatile memory unit or units. In another implementation, the memory 704 is a non-volatile memory unit or units.
  • the memory 704 may also be another form of computer-readable medium, such as a magnetic or optical disk.
  • the storage device 706 is capable of providing mass storage for the computing device 700.
  • the storage device 706 may be or contain a computer- readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
  • a computer program product can be tangibly embodied in an information carrier.
  • the computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above.
  • the information carrier is a computer- or machine-readable medium, such as the memory 704, the storage device 706, or memory on processor 702.
  • the high speed controller 708 manages bandwidth-intensive operations for the computing device 700, while the low speed controller 712 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only.
  • the high-speed controller 708 is coupled to memory 704, display 716 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 710, which may accept various expansion cards (not shown).
  • low-speed controller 712 is coupled to storage device 706 and low-speed expansion port 714.
  • the low-speed expansion port which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
  • the computing device 700 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 720, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 724. In addition, it may be implemented in a personal computer such as a laptop computer 722. Alternatively, components from computing device 700 may be combined with other components in a mobile device (not shown), such as device 780. Each of such devices may contain one or more of computing device 700, 780, and an entire system may be made up of multiple computing devices 700, 780 communicating with each other.
  • Computing device 780 includes a processor 782, memory 764, and an input/output device such as a display 784, a communication interface 766, and a transceiver 768, among other components.
  • the device 780 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage.
  • Each of the components 780, 782, 764, 784, 766, and 768, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
  • the processor 782 can execute instructions within the computing device 780, including instructions stored in the memory 764.
  • the processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors.
  • the processor may provide, for example, for coordination of the other components of the device 780, such as control of user interfaces, applications run by device 780, and wireless communication by device 780.
  • Processor 782 may communicate with a user through control interface 788 and display interface 786 coupled to a display 784.
  • the display 784 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology.
  • the display interface 786 may comprise appropriate circuitry for driving the display 784 to present graphical and other information to a user.
  • the control interface 788 may receive commands from a user and convert them for submission to the processor 782.
  • control interface 788 may receive input entered by a user via, for example, the keyboard 780, and transmit the input to the processor 782 for processing, such as for entry of corresponding text into a displayed text box.
  • an external interface 762 may be provided in communication with processor 782, so as to enable near area communication of device 780 with other devices.
  • External interface 762 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
  • the memory 764 stores information within the computing device 780.
  • the memory 764 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units.
  • Expansion memory 774 may also be provided and connected to device 780 through expansion interface 772, which may include, for example, a SIMM (Single In Line Memory Module) card interface.
  • SIMM Single In Line Memory Module
  • expansion memory 774 may provide extra storage space for device 780, or may also store applications or other information for device 780.
  • expansion memory 774 may include instructions to carry out or supplement the processes described above, and may include secure information also.
  • expansion memory 774 may be provided as a security module for device 780, and may be programmed with instructions that permit secure use of device 780.
  • secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
  • the memory may include, for example, flash memory and/or NVRAM memory, as discussed below.
  • a computer program product is tangibly embodied in an information carrier.
  • the computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
  • the information carrier is a computer- or machine-readable medium, such as the memory 764, expansion memory 774, or memory on processor 782, that may be received, for example, over transceiver 768 or external interface 762.
  • Device 780 may communicate wirelessly through communication interface 766, which may include digital signal processing circuitry where necessary.
  • Communication interface 766 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 768.
  • short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown).
  • GPS (Global Positioning System) receiver module 770 may provide additional navigation- and location- related wireless data to device 780, which may be used as appropriate by applications running on device 780.
  • Device 780 may also communicate audibly using audio codec 760, which may receive spoken information from a user and convert it to usable digital information. Audio codec 760 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 780. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 780.
  • the computing device 780 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 780. It may also be implemented as part of a smart phone 782, personal digital assistant, or other similar mobile device. [0058] Implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device (computer-readable medium), for processing by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
  • data processing apparatus e.g., a programmable processor, a computer, or multiple computers.
  • a computer-readable storage medium can be configured to store instructions that when executed cause a processor (e.g., a processor at a host device, a processor at a client device) to perform a process.
  • a computer program such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program can be deployed to be processed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • Method steps may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • FPGA field programmable gate array
  • ASIC application-specific integrated circuit
  • processors suitable for the processing of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read-only memory or a random access memory or both.
  • Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data.
  • a computer also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • semiconductor memory devices e.g., EPROM, EEPROM, and flash memory devices
  • magnetic disks e.g., internal hard disks or removable disks
  • the processor and the memory may be supplemented by, or incorporated in special purpose logic circuitry.
  • implementations may be implemented on a computer having a display device, e.g., a cathode ray tube (CRT), a light emitting diode (LED), or liquid crystal display (LCD) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • a display device e.g., a cathode ray tube (CRT), a light emitting diode (LED), or liquid crystal display (LCD) monitor
  • CRT cathode ray tube
  • LED light emitting diode
  • LCD liquid crystal display
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • Implementations may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end components.
  • a back-end component e.g., as a data server
  • a middleware component e.g., an application server
  • a front-end component e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end
  • Components may be interconnected by any form or medium of digital data communication, e.g., a communication network.
  • Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
  • LAN local area network
  • WAN wide area network

Abstract

A system and method of deep learning using deep networks to predict new views from existing images may generate and improve models and representations from large-scale data. This system and method of deep learning may employ a deep architecture performing new view synthesis directly from pixels, trained from large numbers of posed image sets. A system employing this type of deep network may produce pixels of an unseen view based on pixels of neighboring views, lending itself to applications in graphics generation.

Description

DEEPSTEREO: LEARNING TO PREDICT NEW VIEWS FROM REAL WORLD IMAGERY
[0001 ] This application claims priority to provisional U.S. Application Serial No. 62/161,159, filed on May 13, 2015, the entirety of which is incorporated by reference as if fully set forth herein.
FIELD
[0002] This document relates, generally, to deep networks and deep learning associated with images.
BACKGROUND
[0003] Deep networks, and deep learning, may be applied to generate and improve models and representations from large-scale data. Deep learning may be considered part of a broader application of machine learning. Deep learning may be based on unsupervised learning of feature representations in, for example, computer vision, garnered from multiple levels of processing/computing devices, these multiple levels of processing/computing devices forming a hierarchy from low-level to high-level features. The composition and arrangement of these multiple layers may be developed based on, for example, a particular problem to be solved.
SUMMARY
[0004] In one aspect, a method may include accessing a plurality of posed image sets from a database, the plurality of posed image sets respectively corresponding to a plurality of scenes, each of the plurality of posed image sets including a plurality of views of a corresponding scene of the plurality of scenes, generating a requested view of a scene based on views selected from the plurality of views of the scene included in the posed image set corresponding to the scene in accordance with an automated view generating algorithm, wherein the requested view of the scene is not included in the plurality of views of the scene included in the corresponding posed image set, comparing the view of the scene generated by the automated view generating algorithm to a known view of the scene, and updating the view generating algorithm based on the comparison.
[0005] In another aspect, a method may include receiving a request for a view of a scene to be rendered, accessing, from a database, a plurality of stored posed images respectively representing a plurality of views of the scene, selecting a plurality of images from the plurality of stored posed images, the selected plurality of images representing views of the scene neighboring the requested view of the scene, reprojecting depth slices of each of the selected plurality of images at a plurality of depths, determining a depth for the requested view of the scene and determining a color for each pixel of the requested view of the scene at the determined depth based on pixels at the reprojected depth slices, and generating the requested view of the scene.
[0006] In another aspect, a method may include receiving a request for a view of a scene, retrieving, from a database storing a plurality of posed image sets, each of the plurality of posed image sets including a plurality of views of a corresponding scene, a posed image set corresponding to the requested view of the scene, and generating the requested view of the scene based on selected views from the plurality of views of the scene included in the corresponding posed image set, the requested view not included in the plurality of views of the scene of the corresponding posed image set.
[0007] In another aspect, a system for generating a view of a scene may include a network. The network may include a computing device including a processor, the computing device being in communication with a database, the database storing a plurality of posed image sets respectively corresponding to a plurality of scenes, each of the plurality of posed image sets including a plurality of views of a corresponding scene of the plurality of scenes, a selection tower configured to determine a depth for each output pixel in a requested output image, the requested output image corresponding to a requested view of a scene, and a color tower configured to generate a color for each output pixel of the requested output image. The selection tower and the color tower may be configured to receive selected views from the plurality of views of the scene included in the posed image set corresponding to the requested view of the scene, the requested view of the scene not being included in the plurality of views of the scene of the corresponding posed image set. The selection tower and the color tower may be configured to generate the requested output image for processing by the processor of the computing device to generate the requested view of the scene.
[0008] The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims. BRIEF DESCRIPTION OF THE DRAWINGS
[0009] FIGs. 1A and 1B illustrate new views rendered from existing images, in accordance with implementations described herein.
[0010] FIG. 2 illustrates a plane sweep stereo re-projection of various images from various views to a new target camera field of view at a range of depths, in accordance with implementations described herein.
[0011] FIGs. 3 and 4 illustrate a network including a selection tower and a color tower, in accordance with implementations described herein.
[0012] FIG. 5 illustrates a selection tower, configured to learn to produce a selection probability for each pixel in each depth plane, in accordance with implementations described herein.
[0013] FIG. 6 illustrates a color tower, configured to learn to combine and warp pixels across sources to produce a color for a particular pixel at a plurality of depth planes, in accordance with implementations described herein.
[0014] FIG. 7 illustrates two different examples of reprojected images produced by a system and method, in accordance with implementations described herein.
[0015] FIG. 8 illustrates an example of a computing device and a mobile computing device that can be used to implement the techniques described herein.
DETAILED DESCRIPTION
[0016] Deep networks may be applied to various different types of recognition and classification problems. In a system and method, in accordance with implementations as described herein, deep networks may be applied to recognition and classification problems in computer vision and graphics using, for example, a deep architecture that learns to perform new view synthesis directly from a large number, such as, for example, millions, of posed image sets, for example, sets of images having known characteristics, such as, for example, perspective and/or color and/or depth. For example, in a system and method in accordance with implementations described herein, a deep network may be trained from this relatively large number of posed image sets, and then, in operation, may rely on a relatively small number of pertinent images to generate a new, previously unseen view based on the training conducted with the large number of posed image sets.
[0017] In contrast to relying on multiple complex stages of processing, a system and method in accordance with implementations described herein may be trained end-to-end, by, for example, presenting images, for example pixels of images, of neighboring views of a scene to the network to directly produce images of an unseen, arbitrary/requested view of the scene. The requested view may be, for example, a view of a subject, or subject area, from a particular perspective and/or point of view and/or depth. The neighboring images may be, for example, images available from the posed image sets adjacent to the requested view. For example, in some implementations, the system may be trained by presenting a set of input images of various views of a scene, requesting a view not included in the set of input images, and then presenting the network with the image that should be produced in response to the request. In some implementations, for view synthesis, one or more sets of posed images may be used as training set(s) by removing one image and setting it aside, and then reproducing the removed image from the remaining images. Relatively large amounts of data, mined from a relatively expansive database, may be used to train models, in accordance with implementations described herein. An example of this type of database is a street view mapping database, which contains a massive collection of posed imagery spanning much of the globe. Although this example may be referred to hereinafter, where appropriate for ease of explanation and illustration, the principles discussed herein may be applied to other implementations making use of other such expansive collections of data.
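A minimal sketch of this leave-one-out construction of a training example is given below, assuming each posed image set is a list of (image, pose) pairs for one scene; the function and variable names are illustrative, not part of the described system.

```python
import random

def make_training_example(posed_set):
    """Build one training example by holding out a view from a posed image set.

    posed_set is assumed to be a list of (image, pose) pairs for one scene; the
    held-out view supplies both the requested pose and the ground-truth image
    that the network should reproduce from the remaining views.
    """
    held_out = random.randrange(len(posed_set))
    target_image, target_pose = posed_set[held_out]
    source_views = [view for i, view in enumerate(posed_set) if i != held_out]
    return source_views, target_pose, target_image
```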
[0018] Through repetition, taking advantage of images/views available in a large scale database (such as, for example, a street view mapping database), the system may continuously learn and refine its process to produce the requested image. This end-to-end approach may allow for a certain level of generality, relying on posed image sets and applying to different domains, and filling in unseen pixels based on various features such as color, depth and texture learned from previous data. This approach may reduce the occurrence of artifacts, due to, for example, occlusions and ambiguity, when generating view interpolation and view-synthesized omni-directional stereo imagery, and may produce a relatively high quality result for a relatively difficult scene to synthesize.
[0019] Estimating a three-dimensional (3D) shape from multiple posed images may be a fundamental task in generating 3D representations of scenes that can be rendered and edited. In some implementations, novel view synthesis may be performed by a form of image based rendering (IBR), in which a new view of a scene is synthesized by warping and combining images from nearby posed images of the scene. This approach may be applied to, for example, augmented and/or virtual reality systems, teleconferencing systems, 3D monocular film systems, cinematography, image stabilization, and other such implementations. In addition to techniques involving multi-view stereo or image warping methods, which may model the stereo, color, and occlusion components of each target pixel, a system and method in accordance with implementations described herein may leverage deep networks to directly learn to synthesize new views from posed images.
[0020] In this type of approach to new view synthesis, deep networks may be used to regress directly to output pixel colors given posed input images. This system may interpolate between views separated by a wide baseline due, for example, to the end-to-end nature of the training, and the ability of deep networks to learn extremely complex non-linear functions related to the inputs. In a system and method in accordance with implementations described herein, minimal assumptions about the scene being rendered may include, for example, a relatively static nature of the scene within a finite range of depths. Even outside of these parameters, the resulting images may degrade relatively gracefully and may remain visually plausible. In addition to application to synthesis of new views, a system and method making use of the deep architecture as described herein may also apply to other stereo and graphics problems given suitable training data.
[0021] For example, as shown in FIG. 1A, a new view CV of a given scene A, such as the house shown in FIG. 1A, may be rendered by the network from existing first and second images I1 and I2 at first and second viewpoints V1 and V2, respectively, without additional information related to the new view CV. Similarly, a new view CV of a given scene B, such as the outdoor scene shown in FIG. 1B, may be rendered by the network from existing multiple images I1 through In, each at a different respective viewpoint V1 through Vn, without additional information related to the new view CV.
[0022] Application of deep networks to image understanding and interpretation may, in some implementations, provide a basis for applying deep learning to graphics generation, and in particular, to depth and stereo associated with particular graphics applications. Further, improved performance may be achieved by using many layers of small convolutions and pooling to address the large number of parameters associated with deep networks, which may otherwise cause these systems to be prone to over-fitting in the absence of large quantities of data.
[0023] A system and method, in accordance with implementations described herein, may make minimal assumptions about the scene being rendered as discussed above, such as, for example, that the scene is static and that the scene exists within a finite range of depths. In some implementations, a model may generalize beyond the training data to novel imagery, including, for example, image collections used in prior work.

[0024] As shown in FIG. 2, given a set of posed input images I1 through In, a new image may be rendered from the viewpoint of a new target camera CT, the new target camera CT representing a capture field of view, or new viewpoint, associated with the new scene to be generated from the posed input images I1 through In. In FIG. 2, plane sweep stereo reprojects the posed input images I1 and I2 from viewpoints V1 and V2, respectively, to the target camera CT at numerous different depths, for example, from a minimum depth d1 to a maximum depth dD, with one or more reprojected images of the posed input images I1 and I2 taken at one or more intermediate depths dk. For example, reprojected images R1 through Rn of the posed input image I1 may be generated at depths ranging from d1 to dD, and reprojected images S1 through Sn of the posed input image I2 may be generated at depths ranging from d1 to dD. In FIG. 2, the dotted lines may indicate pixels from the input images I1 and I2 reprojected to a particular output image pixel in R1 through Rn and S1 through Sn.
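The per-depth re-projection illustrated in FIG. 2 can be sketched with the standard plane-induced homography. The helper below is only an illustration under pinhole-camera assumptions: K_src and K_tgt stand for camera intrinsics, (R, t) for the relative pose that maps target-frame points into the source camera frame, and the source and target images are assumed to share the same size; none of these names come from the original description. Pixels with no valid source sample receive alpha = 0, matching the RGBA voxels described below.

```python
import numpy as np

def reproject_to_target(src_img, K_src, K_tgt, R, t, depth):
    """Reproject a posed source image into the target camera at one depth.

    Uses the plane-induced homography H = K_src (R + t n^T / d) K_tgt^-1 for a
    fronto-parallel plane n = (0, 0, 1) at depth d in the target frame, then
    samples the source image with nearest-neighbor lookup.  Out-of-bounds
    samples get alpha = 0.
    """
    h, w = src_img.shape[:2]
    n = np.array([[0.0, 0.0, 1.0]])                       # plane normal (row)
    H = K_src @ (R + t.reshape(3, 1) @ n / depth) @ np.linalg.inv(K_tgt)

    ys, xs = np.mgrid[0:h, 0:w]
    tgt_pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # 3 x (h*w)
    src_pix = H @ tgt_pix
    src_pix = src_pix[:2] / src_pix[2:]                   # perspective divide

    u = np.round(src_pix[0]).astype(int)
    v = np.round(src_pix[1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    rgba = np.zeros((h * w, 4), dtype=np.float32)         # R, G, B, alpha
    rgba[valid, :3] = src_img[v[valid], u[valid], :3]
    rgba[valid, 3] = 1.0
    return rgba.reshape(h, w, 4)
```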
[0025] One approach to rendering the new image may be to naively train a deep network to synthesize new views by directly supplying input images to the network. In this approach, the pose parameters of the new viewpoint (the new view CV to be generated, corresponding to the new target camera CT field of view) may be supplied as inputs to the network to produce the desired view/image. Due to the complex, non-linear relationship of the pose parameters, the input images and the output image, this also involves the network learning how to interpret rotation angles and how to perform image re-projection, which may be an inefficient use of network resources. Additionally, in this approach, the network may compare and combine potentially distant pixels in the original input images to synthesize a new view, driving a need for relatively dense, long range connections in the network, causing the network to be slow to train and prone to over-fitting, particularly when dealing with large scale data. In some instances, this could be addressed by a network structure configured to apply an internal epipolar constraint to limit connections to those on corresponding epipolar lines. However, this type of structure may be pose-dependent and computationally inefficient, particularly when dealing with large scale data.
[0026] As described above, the series of input images I1 through In from viewpoints V1 through Vn, respectively, may be re-projected at different distances between d1 and dD with respect to the viewpoint, or new target camera CT, from which a new/requested view CV is to be rendered, rendering slices at these different re-projection distances. The system may sweep through the range of depths (for example, in some implementations, 64 to 96 different depths), from a position that is relatively close to the new target camera CT (for example, approximately 2 meters from the new target camera CT in the case of a street view) to a position that is relatively far from the new target camera CT (for example, approximately 30 meters, or more, from the new target camera CT in the case of a street view), at given intervals. In some implementations, the system may step through the range of depths at an inverse distance, or 1/distance, for example, a minimum interval of 1/(maximum distance dD), or another interval based on another distance between the minimum distance d1 and the maximum distance dD. From these re-projections, a re-projected depth slice may be generated for each input view, so that numerous images are available at each depth slice (for example, with 4-5 input views, 4-5 images per depth slice). For example, pixels of a left re-projected image and pixels of a right re-projected image may align at the correct depth. The re-projected depth slices may be combined to produce the best color for that depth slice, and to determine the correct depth for a particular pixel/image. The network may determine the correct depth and the correct depth slice for a particular pixel, and then the correct color. These elements may be learned together. The network may then determine, or learn, the probability that a particular pixel is at a particular depth slice, multiply that probability by the computed color, and sum this to produce a final image.
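A minimal sketch of this inverse-distance sweep is shown below, with illustrative near/far bounds of roughly 2 and 30 meters and 64 planes as in the street view example; the bounds and plane count are assumptions for illustration only.

```python
import numpy as np

def sweep_depths(d_min=2.0, d_max=30.0, num_planes=64):
    """Candidate depths spaced uniformly in inverse distance (1/depth).

    Planes end up denser near the camera and sparser far away, which roughly
    tracks how disparity changes with depth.
    """
    inverse = np.linspace(1.0 / d_max, 1.0 / d_min, num_planes)
    return (1.0 / inverse)[::-1]        # ascending depths from d_min to d_max
```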
[0027] More specifically, in a system and method in accordance with implementations described herein, a network may include a set of 3D plane sweep volumes as input. A plane sweep volume (PSV) may include, for example, a stack of images re-projected to the new target camera CT. Each image in the stack may be re-projected into the new target camera CT at a set of varying depths d ∈ {d1, . . ., dD} to form a plane sweep volume VC = {P1, . . ., Pd, . . ., PD}. That is, as shown in FIG. 2, plane sweep stereo may re-project images I1 and I2 from V1 and V2 to the new target camera CT at a range of depths d ∈ {d1, . . ., dD}. Re-projecting an input image into the target camera CT may be done using, for example, texture mapping, and may be performed by a graphics processing unit (GPU). A separate plane sweep volume VkC may be generated for each input image k. Each voxel vx,y,z in each plane sweep volume VkC may have R, G, B and A (alpha) components, the alpha channel indicating the availability of source pixels for that particular voxel.
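Stacking such re-projections over all candidate depths yields one per-source volume VkC; a short sketch follows, reusing the illustrative reproject_to_target helper and depth list from the earlier sketches (both are assumptions, not part of the described system).

```python
import numpy as np

def plane_sweep_volume(src_img, K_src, K_tgt, R, t, depths):
    """Stack re-projections of one source image at every candidate depth.

    Reuses the illustrative reproject_to_target helper sketched earlier; the
    result has shape (D, H, W, 4), one RGBA plane P_d per depth, i.e. one
    per-source volume VkC.
    """
    planes = [reproject_to_target(src_img, K_src, K_tgt, R, t, d) for d in depths]
    return np.stack(planes, axis=0)
```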
[0028] By using plane sweep volumes as inputs to the network, the pose parameters may be implicit inputs used in the construction of the plane sweep volume, rather than separately supplied inputs. Additionally, the epipolar constraint may be trivially enforced within a plane sweep volume, as corresponding pixels are now arranged in corresponding columns of the plane sweep volume. Thus, long range connections between pixels/images are not needed, and a given output pixel/image depends only on a small column of voxels from each of the per-source plane sweep volumes. Similarly, as the computation performed to produce an output pixel p at location i, j is largely independent of the location of the pixel pi,j, a convolutional neural network, which may involve fewer parameters than fully connected networks and thus may be faster to train, may be used. It may also be faster to run inference on a convolutional neural network, and computation from earlier layers may be shared. For example, in one embodiment, a model may apply two-dimensional (2D) convolutional layers to each plane within the input plane sweep volume. In addition to sharing weights within convolutional layers, the model may also take advantage of weight sharing across planes in the plane sweep volume, thus allowing the computation performed on each plane to be independent of the depth of the plane.
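As a rough illustration of this weight sharing, the PyTorch sketch below folds the source and depth dimensions into the batch dimension so one 2D convolution serves every plane; the layer widths, kernel size, and the use of PyTorch itself are assumptions rather than details from the description.

```python
import torch
import torch.nn as nn

class PerPlaneConv(nn.Module):
    """Apply one shared 2D convolution to every depth plane of every source.

    Folding the source and depth dimensions into the batch dimension means a
    single set of convolution weights serves all planes, so the per-plane
    computation is independent of the plane's depth.
    """

    def __init__(self, in_channels=4, out_channels=32, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2)
        self.relu = nn.ReLU()

    def forward(self, psv):
        # psv: (batch, sources, depths, channels, height, width)
        b, n, d, c, h, w = psv.shape
        x = psv.reshape(b * n * d, c, h, w)      # planes share the batch axis
        x = self.relu(self.conv(x))
        return x.reshape(b, n, d, -1, h, w)
```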
[0029] In some implementations, a model may include two towers of layers, where the input to each tower is the set of plane sweep volumes VkC. This dual architecture may allow for both depth prediction and color prediction. For example, in depth prediction, an approximate depth for each pixel in the output image may be determined, which in turn determines the source image pixels to be used to generate the output pixel. This probability over depth may be computed using training data, rather than a sum of squared distances (SSD) approach, a normalized cross correlation (NCC) approach, or a variance approach. In color prediction, a color for the output pixel, given all of the relevant source image pixels, may be produced by optimally combining the source pixels using training data, rather than just performing simple averaging.
[0030] The two towers shown in FIG. 3 may accomplish the tasks of depth prediction and color prediction. The selection tower may generate a probability map (or selection map) for each depth indicating a likelihood of each pixel having a particular depth. The color tower may generate a full color output image for each depth. For example, the color tower may produce the best color possible for each depth, assuming that the depth is correct for that particular pixel. The color images may then be combined as a per-pixel weighted sum, with weights drawn from the selection maps. That is, the selection maps may be used to determine the best color layers to use for each output pixel. This approach to view synthesis may allow the system to learn all of the parameters of both the selection tower and the color tower simultaneously, end-to-end using deep learning methods. Further, the weighted averaging across color layers may yield some resilience to uncertainty, particularly in regions where the algorithm is less confident.

[0031] In particular, the first layer of each tower may concatenate the input plane sweep volume over the source, allowing the networks to compare and combine re-projected pixel values across sources. For example, the selection tower, as shown in FIGs. 3 and 4, may compute, for each pixel pi,j in each plane Pd, a selection probability si,j,d for the pixel at that plane, as shown in FIG. 4. The color tower, as shown in FIGs. 3 and 4, may compute, for each pixel pi,j in each plane Pd, a color ci,j,d for the pixel at that plane, as shown in FIG. 5. The final output color for each pixel may be computed by summing over the depth planes as shown in equation (1):

$c_{i,j} = \sum_{d=1}^{D} s_{i,j,d} \, c_{i,j,d}$     Equation (1)
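Equation (1) amounts to a per-pixel weighted sum over depth. A minimal sketch, assuming the selection probabilities and per-depth colors are held in tensors shaped as noted in the comments:

```python
import torch

def compose_output(selection, colors):
    """Blend per-depth colors with per-depth selection probabilities.

    selection: (batch, depths, height, width), softmax-normalized over depths.
    colors:    (batch, depths, 3, height, width), one RGB image per depth.
    Returns (batch, 3, height, width): the sum over depth of selection * color,
    i.e. equation (1).
    """
    return (selection.unsqueeze(2) * colors).sum(dim=1)
```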
[0032] The input to each tower may include the set of plane sweep volumes VkC (comprising N × D reprojected images over all volumes, where N is the number of source images and D is the number of depth planes). The first layer of each tower may operate on each reprojected image independently, allowing the system to learn low-level image features. After the first layer, the feature maps corresponding to the N sources may be concatenated over each depth plane, and subsequent layers may operate on these per-depth-plane feature maps. The final layers of the selection tower may also make use of connections across depth planes.
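The concatenation of the N per-source feature maps over each depth plane can be sketched as a tensor permute and reshape (PyTorch, illustrative only; the layout of the feature tensor is an assumption).

```python
import torch

def concat_sources(per_source_features):
    """Concatenate per-source feature maps over each depth plane.

    per_source_features: (batch, sources, depths, channels, height, width).
    Returns (batch, depths, sources * channels, height, width), so subsequent
    layers see the features of all N sources of a depth plane together.
    """
    b, n, d, c, h, w = per_source_features.shape
    x = per_source_features.permute(0, 2, 1, 3, 4, 5)    # (b, d, n, c, h, w)
    return x.reshape(b, d, n * c, h, w)
```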
[0033] In some implementations, the selection tower may include two main stages. The early layers may include, for example, a number of 2D convolutional rectified linear layers that share weights across all depth planes, and within a depth plane for the first layer. For example, based on previous learning, the early layers may compute features that are independent of depth, such as pixel differences, so their weights may be shared. The final set of layers may be connected across depth planes, so that interactions between depth planes, such as those caused by occlusion (e.g., the network may learn to prefer closer planes having higher scores in cases of ambiguities in depth), may be modeled. The final layer of the network may be a per-pixel softmax normalization transformer over depth, as shown in FIG. 4. The softmax transformer may cause the model to pick a single depth plane per pixel, while ensuring that the sum over all depth planes is 1. In some embodiments, using a tanh activation for the penultimate layer may yield more stable training than a linear layer. For example, under some circumstances, a linear layer may shut down at certain depth planes and not recover due to, for example, relatively large gradients associated with the softmax layer. The output of the selection tower may be a 3D volume of single-channel nodes si,j,d, as shown in equation (2):

$\sum_{d=1}^{D} s_{i,j,d} = 1$     Equation (2)
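A hedged sketch of such a final stage follows, with an illustrative 1x1 convolution standing in for the penultimate layer, a tanh activation, and a per-pixel softmax over the depth axis; the layer shapes are assumptions.

```python
import torch
import torch.nn as nn

class SelectionHead(nn.Module):
    """Final stage of a selection tower: tanh, then per-pixel softmax over depth.

    The softmax guarantees that the selection probabilities at each pixel sum
    to 1 across the D depth planes, as in equation (2).
    """

    def __init__(self, channels):
        super().__init__()
        self.penultimate = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, features):
        # features: (batch, depths, channels, height, width)
        b, d, c, h, w = features.shape
        scores = torch.tanh(self.penultimate(features.reshape(b * d, c, h, w)))
        return torch.softmax(scores.reshape(b, d, h, w), dim=1)
```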
[0034] As shown in FIG. 5, the color tower may include, for example, 2D convolutional rectified linear layers that share weights across all planes, followed by a linear reconstruction layer. Occlusion effects do not have the same level of relevance in the color layer, so across-depth interaction may not be necessary. The output of the color tower may be a 3D volume of nodes ci,j,d. Each node in the output may have three channels, corresponding to R, G and B.
[0035] The output of the selection tower and the color tower may be multiplied together, by node, to produce the output image. During training, the resulting image may be compared with the known target image, or training image, using, for example, a per-pixel L1 loss. A total loss L may be determined in accordance with equation (3), where $c^{t}_{i,j}$ is the target color at the pixel i, j:

$L = \sum_{i,j} \left| c_{i,j} - c^{t}_{i,j} \right|$     Equation (3)
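A minimal sketch of the per-pixel L1 loss of equation (3), assuming the composed output and the held-out target are tensors of matching shape:

```python
import torch

def reconstruction_loss(predicted, target):
    """Total per-pixel L1 loss of equation (3).

    predicted and target are (batch, 3, height, width) tensors; target is the
    held-out training image.
    """
    return (predicted - target).abs().sum()
```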
[0036] In some implementations, rather than predicting a full image all at once, the system may predict the output image patch-by-patch. Passing in a set of lower resolution versions of successively larger areas around the input patches may improve results by providing the network with more context, where an improved result means that the predicted view is more accurate. For example, in some implementations, the system may pass in four different resolutions. Each resolution may first be processed independently by several layers, and then be upsampled and concatenated before entering the final layers. The upsampling may make use of nearest neighbor interpolation. More detail of the complete network is shown in FIG. 6.
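The multi-resolution context described here might be sketched as follows; the number of scales, layer widths, and the PyTorch modules are illustrative assumptions, with nearest-neighbor upsampling and channel concatenation as stated in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResolutionContext(nn.Module):
    """Process several resolutions separately, then upsample and concatenate.

    Each lower-resolution crop of a successively larger area goes through its
    own small convolutional branch, is upsampled back to the patch resolution
    with nearest-neighbor interpolation, and the feature maps are concatenated
    before the final layers.
    """

    def __init__(self, in_channels=4, feat=16, num_scales=4):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(in_channels, feat, 3, padding=1), nn.ReLU())
             for _ in range(num_scales)])

    def forward(self, crops):
        # crops: list of (batch, channels, h_s, w_s) tensors, finest first
        target_size = tuple(crops[0].shape[-2:])
        feats = []
        for branch, crop in zip(self.branches, crops):
            x = branch(crop)
            feats.append(F.interpolate(x, size=target_size, mode="nearest"))
        return torch.cat(feats, dim=1)           # concatenate over channels
```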
[0037] In one example implementation, as described above, a relatively large amount of data may be mined from a relatively expansive database to train models. In the example implementation described above, the massive collection of posed imagery spanning much of the globe contained by a street view database may be used to train such a model. In this example implementation, images of street scenes captured by a moving vehicle may be used to train a network. The images may be posed using, for example, odometry and structure from motion techniques. As the vehicle moves, the vehicle may intermittently capture clusters, or groupings, or rosettes, of images. For example, the vehicle may capture a rosette of images at each of a plurality of predetermined time stamps. Each rosette may include a plurality of images in a predetermined arrangement captured from predetermined point(s) of view. For example, in some implementations, each rosette may include 8 images, or 15 images, or another number of images, depending on a type of image capture device, or camera, employed. The rosettes, each including a plurality of images, may define a stream of input images.
[0038] In some implementations, training of the system may include a substantially continuously running sample generation pipeline, selecting and re-projecting random patches from the input images included in a relatively large number of rosettes. For example, in one example implementation, up to 100,000 rosettes may be included in the sample generation pipeline, and the network may be trained to produce 8 x 8 output patches from 26 x 26 input patches, as shown in FIG. 3. Patches from numerous images may be combined to generate mini-batches, having a predetermined size of, for example, 400. The network may then be trained using, for example, distributed gradient descent. Due to sample randomization, and the relatively large volume of training data available, duplicate use of any of the patches during training is highly unlikely in this example implementation.
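A rough sketch of how aligned input/target patches might be drawn for such a pipeline, using the 26 x 26 input and 8 x 8 output sizes from the example above; the array layouts, helper names, and mini-batch assembly are assumptions for illustration.

```python
import random
import numpy as np

def sample_patch_pair(volumes, target_image, in_size=26, out_size=8):
    """Crop one aligned input/target patch pair from a reprojected sample.

    volumes: per-source plane sweep volumes, shape (N, D, H, W, 4).
    target_image: the held-out view, shape (H, W, 3).
    Returns an in_size x in_size crop from every plane and the centered
    out_size x out_size crop of the target image.
    """
    _, _, h, w, _ = volumes.shape
    margin = (in_size - out_size) // 2
    top = random.randrange(h - in_size + 1)
    left = random.randrange(w - in_size + 1)
    inputs = volumes[:, :, top:top + in_size, left:left + in_size]
    target = target_image[top + margin:top + margin + out_size,
                          left + margin:left + margin + out_size]
    return inputs, target

def make_minibatch(samples, batch_size=400):
    """Stack randomly drawn patch pairs from a list of (volumes, target) samples."""
    pairs = [sample_patch_pair(*random.choice(samples)) for _ in range(batch_size)]
    inputs, targets = zip(*pairs)
    return np.stack(inputs), np.stack(targets)
```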
[0039] To evaluate the effectiveness of training the network in the manner described above, two networks were trained using the same model or algorithm, but using two different types of training data. The first network was trained based on image data provided by an expansive street view database as described above. The images included in the street view database were posed using a combination of odometry and other motion methods, with a vehicle mounted camera using a rolling shutter capturing a set of images, or rosettes, as described above, with different directions for each exposure. The second network was trained using posed image sequences from a standard odometry dataset. In evaluating the performance of the first and second networks on the task of view interpolation, novel images were generated from the same viewpoint as known (but withheld) images. During training, each rosette of the street view database used to train the first network provides pixels in every direction, so the reprojected depth planes always have valid pixels. With the standard dataset used to train the second network, some parts of the depth planes were not visible from all cameras. Since the model had not encountered missing pixels during training, the missing pixels generated some error in images generated by the second network, predominantly at the boundaries of the images.
[0040] In further comparison, a baseline IBR algorithm was implemented to compute depth using the four nearest input images to splat the pixels from the two nearest images into the target view, diffusing neighboring valid pixels to fill any small remaining holes. The system and method, in accordance with implementations described herein, outperformed the baseline IBR algorithm for all spacings. The system and method, in accordance with implementations described herein, also outperformed an optical flow algorithm applied to interpolate in-between images. As there is no notion of 3D pose when implementing this type of optical flow algorithm, the interpolated image is only approximately at the viewpoint of the withheld image.
[0041 ] Overall, the model implemented in a system and method, in accordance with implementations described herein may produce relatively high quality output images that may be difficult to distinguish from the original, actual imagery. The model may process a variety of different types of challenging surfaces and textures, such as, for example, the trees and glass shown in FIG. IB, with performance for specular surfaces degrading gracefully, and relatively unnoticeably. Moving objects, which may often be encountered during training, such as, for example, a flag waving in the wind in a scene, may be blurred in a fashion that evokes motion blur. An example of network learning to produce these images is illustrated in FIG. 7.
[0042] FIG. 7 illustrates two different examples of reprojected images produced in this manner. One image is of a table having a relatively smooth surface texture and the second image is of a tree having a more complex visual texture. These images are reprojected at a single depth plane, and have been selected so that the cropped regions represented by these images have strong selection probability at that particular plane. The reprojected input views are shown at the left portion of FIG. 7, the outputs of the selection layer and the color layer at the given depth plane are shown in the middle portion of FIG. 7, and a comparison to the average is shown in the right portion of FIG. 7. As shown in FIG. 7, the color layer may contribute more than simply averaging the input reprojected images. Rather, it may learn to warp and robustly combine the input to produce the color image for that depth plane. This may allow the system to produce depth planes that are separated by more than one pixel of disparity.
[0043] In a system and method, in accordance with implementations described herein, a deep network may be trained end-to-end to perform novel view synthesis, using only sets of posed imagery, to provide high quality, accurate, synthesized views from the sets of posed images. As noted above, a system and method for learning to predict may be implemented by deep learning, facilitated by deep networks, to generate and improve models and representations from large-scale data. The data powering such deep networks may be garnered from multiple levels of processing/computing devices, these multiple levels of processing/computing devices forming a hierarchy from low-level to high-level features, based on a particular problem to be solved.
[0044] FIG. 8 provides an example of a generic electronic computing device 700 and a generic mobile electronic computing device 780, which may be included in a deep network. Computing device 700 is intended to represent various forms of digital computers, such as laptop computers, convertible computers, tablet computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 780 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
[0045] Computing device 700 includes a processor 702, memory 704, a storage device 706, a high-speed interface 708 connecting to memory 704 and high-speed expansion ports 710, and a low speed interface 712 connecting to low speed bus 714 and storage device 706. Each of the components 702, 704, 706, 708, 710, and 712, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 702 can process instructions for execution within the computing device 700, including instructions stored in the memory 704 or on the storage device 706 to display graphical information for a GUI on an external input/output device, such as display 716 coupled to high speed interface 708. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 700 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
[0046] The memory 704 stores information within the computing device 700. In one implementation, the memory 704 is a volatile memory unit or units. In another
implementation, the memory 704 is a non-volatile memory unit or units. The memory 704 may also be another form of computer-readable medium, such as a magnetic or optical disk.
[0047] The storage device 706 is capable of providing mass storage for the computing device 700. In one implementation, the storage device 706 may be or contain a computer- readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 704, the storage device 706, or memory on processor 702.
[0048] The high speed controller 708 manages bandwidth-intensive operations for the computing device 700, while the low speed controller 712 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 708 is coupled to memory 704, display 716 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 710, which may accept various expansion cards (not shown). In the implementation, low-speed controller 712 is coupled to storage device 706 and low-speed expansion port 714. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
[0049] The computing device 700 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 720, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 724. In addition, it may be implemented in a personal computer such as a laptop computer 722. Alternatively, components from computing device 700 may be combined with other components in a mobile device (not shown), such as device 780. Each of such devices may contain one or more of computing device 700, 780, and an entire system may be made up of multiple computing devices 700, 780 communicating with each other. [0050] Computing device 780 includes a processor 782, memory 764, and an input/output device such as a display 784, a communication interface 766, and a transceiver 768, among other components. The device 780 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 780, 782, 764, 784, 766, and 768, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
[0051 ] The processor 782 can execute instructions within the computing device 780, including instructions stored in the memory 764. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 780, such as control of user interfaces, applications run by device 780, and wireless communication by device 780.
[0052] Processor 782 may communicate with a user through control interface 788 and display interface 786 coupled to a display 784. The display 784 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 786 may comprise appropriate circuitry for driving the display 784 to present graphical and other information to a user. The control interface 788 may receive commands from a user and convert them for submission to the processor 782. For example, the control interface 788 may receive input entered by a user via, for example, the keyboard 780, and transmit the input to the processor 782 for processing, such as, for entry of corresponding text into a displayed text box. In addition, an external interface 762 may be provided in communication with processor 782, so as to enable near area communication of device 780 with other devices. External interface 762 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
[0053] The memory 764 stores information within the computing device 780. The memory 764 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 774 may also be provided and connected to device 780 through expansion interface 772, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 774 may provide extra storage space for device 780, or may also store applications or other information for device 780. Specifically, expansion memory 774 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 774 may be provided as a security module for device 780, and may be programmed with instructions that permit secure use of device 780. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
[0054] The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 764, expansion memory 774, or memory on processor 782, that may be received, for example, over transceiver 768 or external interface 762.
[0055] Device 780 may communicate wirelessly through communication interface 766, which may include digital signal processing circuitry where necessary. Communication interface 766 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 768. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 770 may provide additional navigation- and location-related wireless data to device 780, which may be used as appropriate by applications running on device 780.
[0056] Device 780 may also communicate audibly using audio codec 760, which may receive spoken information from a user and convert it to usable digital information. Audio codec 760 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 780. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 780.
[0057] The computing device 780 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 780. It may also be implemented as part of a smart phone 782, personal digital assistant, or other similar mobile device.

[0058] Implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device (computer-readable medium), for processing by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. Thus, a computer-readable storage medium can be configured to store instructions that when executed cause a processor (e.g., a processor at a host device, a processor at a client device) to perform a process. A computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be processed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
[0059] Method steps may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
[0060] Processors suitable for the processing of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in special purpose logic circuitry.
[0061 ] To provide for interaction with a user, implementations may be implemented on a computer having a display device, e.g., a cathode ray tube (CRT), a light emitting diode (LED), or liquid crystal display (LCD) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
[0062] Implementations may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end
components. Components may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
[0063] While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the implementations. It should be understood that they have been presented by way of example only, not limitation, and various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The implementations described herein can include various combinations and/or sub-combinations of the functions, components and/or features of the different implementations described.

Claims

WHAT IS CLAIMED IS:
1. A method, comprising:
accessing a plurality of posed image sets from a database, the plurality of posed image sets respectively corresponding to a plurality of scenes, each of the plurality of posed image sets including a plurality of views of a corresponding scene of the plurality of scenes;
generating a requested view of a scene based on views selected from the plurality of views of the scene included in the posed image set corresponding to the scene in accordance with an automated view generating algorithm, wherein the requested view of the scene is not included in the plurality of views of the scene included in the corresponding posed image set;
comparing the view of the scene generated by the automated view generating algorithm to a known view of the scene; and
updating the view generating algorithm based on the comparison.
2. The method of claim 1 , wherein generating the requested view of the scene includes:
reprojecting depth slices of each of the selected views at a plurality of depths;
applying the updated view generating algorithm to the reprojected depth slices and matching pixels of the reprojected depth slices of the selected views at corresponding depths; and
determining a depth for a requested pixel of the requested view and a color for each pixel of the requested view at the determined depth.
3. The method of claim 2, wherein reprojecting depth slices of each of the selected views at a plurality of depths includes:
determining an interval between adjacent depth slices of each of the plurality of depths, extending between a minimum reprojection distance and a maximum reprojection distance; and
applying the determined interval to the reprojection of depth slices for each of the selected views.
4. The method of claim 2 or 3, wherein generating the requested view also includes:
determining, for each pixel, a probability that the pixel is located at a particular depth;
multiplying the determined probability by a computed color for the pixel; and
summing the resulting products of the multiplication to generate the requested view.
5. The method of claim 4, wherein determining, for each pixel, a probability that the pixel is located at a particular depth slice includes:
generating, by a selection tower, a probability map for each of the plurality of depths;
generating, by a color tower, a color output image for each of the plurality of depths; and
determining, based on the color output image generated for each of the plurality of depths and the probability map generated for each of the plurality of depths, a selection probability for each pixel representing a probability that the pixel is at a particular depth.
6. The method of one of the previous claims, further comprising:
iteratively performing the generating and comparing until the generated requested view of the scene matches the known view of the scene within a predetermined threshold.
7. A method, comprising:
receiving a request for a view of a scene to be rendered;
accessing, from a database, a plurality of stored posed images respectively representing a plurality of views of the scene;
selecting a plurality of images from the plurality of stored posed images, the selected plurality of images representing views of the scene neighboring the requested view of the scene;
reprojecting depth slices of each of the selected plurality of images at a plurality of depths;
determining a depth for the requested view of the scene and determining a color for each pixel of the requested view of the scene at the determined depth based on pixels at the reprojected depth slices; and
generating the requested view of the scene.
8. The method of claim 7, wherein reprojecting depth slices of each of the selected plurality of images at a plurality of depths includes:
determining an interval between adjacent depth slices of each of the plurality of depth slices, extending between a minimum reprojection distance and a maximum reprojection distance; and
applying the determined interval to the reprojection of depth slices for each of the selected plurality of images.
9. The method of claim 7 or 8, wherein determining a depth for the requested view and determining a color for each pixel of the requested view at the determined depth based on pixels at the reprojected depth slices includes:
matching pixels of the reprojected depth slices of the selected plurality of images at corresponding depths; and
determining a depth for a requested pixel of the requested view and a color for each pixel of the requested view at the determined depth.
10. The method of one of claims 7 to 9, wherein generating the requested view includes:
determining, for each pixel, a probability that the pixel is located at a particular depth slice;
multiplying the calculated probability by a computed color for the pixel; and
summing the resulting products of the multiplication to generate the requested view.
11. A method, comprising:
receiving a request for a view of a scene;
retrieving, from a database storing a plurality of posed image sets, each of the plurality of posed image sets including a plurality of views of a corresponding scene, a posed image set corresponding to the requested view of the scene; and
generating the requested view of the scene based on selected views from the plurality of views of the scene included in the corresponding posed image set, the requested view not included in the plurality of views of the scene of the corresponding posed image set.
12. A system for generating a view of a scene, the system comprising:
a network, including:
a computing device including a processor, the computing device being in communication with a database, the database storing a plurality of posed image sets respectively corresponding to a plurality of scenes, each of the plurality of posed image sets including a plurality of views of a corresponding scene of the plurality of scenes;
a selection tower configured to determine a depth for each output pixel in a requested output image, the requested output image corresponding to a requested view of a scene; and
a color tower configured to generate a color for each output pixel of the requested output image,
wherein the selection tower and the color tower are configured to receive selected views from the plurality of views of the scene included in the posed image set corresponding to the requested view of the scene, the requested view of the scene not being included in the plurality of views of the scene of the corresponding posed image set, and
the selection tower and the color tower are configured to generate the requested output image for processing by the processor of the computing device to generate the requested view of the scene.
EP16726706.1A 2015-05-13 2016-05-13 Deepstereo: learning to predict new views from real world imagery Pending EP3295368A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562161159P 2015-05-13 2015-05-13
PCT/US2016/032410 WO2016183464A1 (en) 2015-05-13 2016-05-13 Deepstereo: learning to predict new views from real world imagery

Publications (1)

Publication Number Publication Date
EP3295368A1 true EP3295368A1 (en) 2018-03-21

Family

ID=56097288

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16726706.1A Pending EP3295368A1 (en) 2015-05-13 2016-05-13 Deepstereo: learning to predict new views from real world imagery

Country Status (6)

Country Link
US (1) US9916679B2 (en)
EP (1) EP3295368A1 (en)
JP (1) JP6663926B2 (en)
KR (1) KR102047031B1 (en)
CN (1) CN107438866B (en)
WO (1) WO2016183464A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10902314B2 (en) 2018-09-19 2021-01-26 Industrial Technology Research Institute Neural network-based classification method and classification device thereof

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10706321B1 (en) 2016-05-20 2020-07-07 Ccc Information Services Inc. Image processing system to align a target object in a target object image with an object model
US10319094B1 (en) 2016-05-20 2019-06-11 Ccc Information Services Inc. Technology for capturing, transmitting, and analyzing images of objects
US11288789B1 (en) 2016-05-20 2022-03-29 Ccc Intelligent Solutions Inc. Systems and methods for repairing a damaged vehicle using image processing
US10636148B1 (en) 2016-05-20 2020-04-28 Ccc Information Services Inc. Image processing system to detect contours of an object in a target object image
US9886771B1 (en) * 2016-05-20 2018-02-06 Ccc Information Services Inc. Heat map of vehicle damage
US10740891B1 (en) 2016-05-20 2020-08-11 Ccc Information Services Inc. Technology for analyzing images depicting vehicles according to base image models
US10657647B1 (en) 2016-05-20 2020-05-19 Ccc Information Services Image processing system to detect changes to target objects using base object models
WO2018064502A1 (en) * 2016-09-30 2018-04-05 Visbit Inc. View-optimized light field image and video streaming
US10621747B2 (en) 2016-11-15 2020-04-14 Magic Leap, Inc. Deep learning system for cuboid detection
US10121262B2 (en) * 2016-12-20 2018-11-06 Canon Kabushiki Kaisha Method, system and apparatus for determining alignment data
US9978118B1 (en) 2017-01-25 2018-05-22 Microsoft Technology Licensing, Llc No miss cache structure for real-time image transformations with data compression
US10242654B2 (en) 2017-01-25 2019-03-26 Microsoft Technology Licensing, Llc No miss cache structure for real-time image transformations
US10657376B2 (en) * 2017-03-17 2020-05-19 Magic Leap, Inc. Room layout estimation methods and techniques
US10410349B2 (en) 2017-03-27 2019-09-10 Microsoft Technology Licensing, Llc Selective application of reprojection processing on layer sub-regions for optimizing late stage reprojection power
US10514753B2 (en) * 2017-03-27 2019-12-24 Microsoft Technology Licensing, Llc Selectively applying reprojection processing to multi-layer scenes for optimizing late stage reprojection power
US10255891B2 (en) 2017-04-12 2019-04-09 Microsoft Technology Licensing, Llc No miss cache structure for real-time image transformations with multiple LSR processing engines
CN108805261B (en) 2017-04-28 2021-11-12 微软技术许可有限责任公司 Convolutional neural network based on octree
US10776992B2 (en) 2017-07-05 2020-09-15 Qualcomm Incorporated Asynchronous time warp with depth data
US10497257B2 (en) * 2017-08-31 2019-12-03 Nec Corporation Parking lot surveillance with viewpoint invariant object recognition by synthesization and domain adaptation
CN111033524A (en) 2017-09-20 2020-04-17 奇跃公司 Personalized neural network for eye tracking
US10922878B2 (en) * 2017-10-04 2021-02-16 Google Llc Lighting for inserted content
IL273991B2 (en) 2017-10-26 2023-11-01 Magic Leap Inc Gradient normalization systems and methods for adaptive loss balancing in deep multitask networks
US10803546B2 (en) * 2017-11-03 2020-10-13 Baidu Usa Llc Systems and methods for unsupervised learning of geometry from images using depth-normal consistency
WO2019222467A1 (en) * 2018-05-17 2019-11-21 Niantic, Inc. Self-supervised training of a depth estimation system
US10362491B1 (en) 2018-07-12 2019-07-23 At&T Intellectual Property I, L.P. System and method for classifying a physical object
EP4339905A2 (en) * 2018-07-17 2024-03-20 NVIDIA Corporation Regression-based line detection for autonomous driving machines
WO2020068073A1 (en) * 2018-09-26 2020-04-02 Google Llc Soft-occlusion for computer graphics rendering
EP3824620A4 (en) * 2018-10-25 2021-12-01 Samsung Electronics Co., Ltd. Method and device for processing video
US10957099B2 (en) * 2018-11-16 2021-03-23 Honda Motor Co., Ltd. System and method for display of visual representations of vehicle associated information based on three dimensional model
US11610110B2 (en) 2018-12-05 2023-03-21 Bank Of America Corporation De-conflicting data labeling in real time deep learning systems
US10311578B1 (en) * 2019-01-23 2019-06-04 StradVision, Inc. Learning method and learning device for segmenting an image having one or more lanes by using embedding loss to support collaboration with HD maps required to satisfy level 4 of autonomous vehicles and softmax loss, and testing method and testing device using the same
US10325179B1 (en) * 2019-01-23 2019-06-18 StradVision, Inc. Learning method and learning device for pooling ROI by using masking parameters to be used for mobile devices or compact networks via hardware optimization, and testing method and testing device using the same
US11044462B2 (en) 2019-05-02 2021-06-22 Niantic, Inc. Self-supervised training of a depth estimation model using depth hints
CN110113593B (en) * 2019-06-11 2020-11-06 南开大学 Wide baseline multi-view video synthesis method based on convolutional neural network
US10950037B2 (en) * 2019-07-12 2021-03-16 Adobe Inc. Deep novel view and lighting synthesis from sparse images
CN110443874B (en) * 2019-07-17 2021-07-30 清华大学 Viewpoint data generation method and device based on convolutional neural network
CN110471279B (en) * 2019-07-25 2020-09-29 浙江大学 Vine-copulas-based industrial production simulation scene generator and scene generation method
US11424037B2 (en) 2019-11-22 2022-08-23 International Business Machines Corporation Disease simulation in medical images
CN112203023B (en) * 2020-09-18 2023-09-12 西安拙河安见信息科技有限公司 Billion pixel video generation method and device, equipment and medium
US11302071B1 (en) 2020-09-24 2022-04-12 Eagle Technology, Llc Artificial intelligence (AI) system using height seed initialization for extraction of digital elevation models (DEMs) and related methods
US11747468B2 (en) 2020-09-24 2023-09-05 Eagle Technology, Llc System using a priori terrain height data for interferometric synthetic aperture radar (IFSAR) phase disambiguation and related methods
US11238307B1 (en) 2020-09-24 2022-02-01 Eagle Technology, Llc System for performing change detection within a 3D geospatial model based upon semantic change detection using deep learning and related methods
US11587249B2 (en) 2020-09-24 2023-02-21 Eagle Technology, Llc Artificial intelligence (AI) system and methods for generating estimated height maps from electro-optic imagery
US11816793B2 (en) 2021-01-06 2023-11-14 Eagle Technology, Llc Geospatial modeling system providing 3D geospatial model update based upon iterative predictive image registration and related methods
US11636649B2 (en) 2021-01-06 2023-04-25 Eagle Technology, Llc Geospatial modeling system providing 3D geospatial model update based upon predictively registered image and related methods
KR20230035721A (en) * 2021-09-06 2023-03-14 한국전자통신연구원 Electronic device generating multi-plane-image of arbitrary view point and operating method thereof
EP4167199A1 (en) 2021-10-14 2023-04-19 Telefonica Digital España, S.L.U. Method and system for tracking and quantifying visual attention on a computing device
US20230281913A1 (en) * 2022-03-01 2023-09-07 Google Llc Radiance Fields for Three-Dimensional Reconstruction and Novel View Synthesis in Large-Scale Environments
CN115147577A (en) * 2022-09-06 2022-10-04 深圳市明源云科技有限公司 VR scene generation method, device, equipment and storage medium

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5917937A (en) * 1997-04-15 1999-06-29 Microsoft Corporation Method for performing stereo matching to recover depths, colors and opacities of surface elements
AU2002952873A0 (en) * 2002-11-25 2002-12-12 Dynamic Digital Depth Research Pty Ltd Image encoding system
JP4052331B2 (en) * 2003-06-20 2008-02-27 日本電信電話株式会社 Virtual viewpoint image generation method, three-dimensional image display method and apparatus
BRPI0721462A2 (en) * 2007-03-23 2013-01-08 Thomson Licensing 2d image region classification system and method for 2d to 3d conversion
JP5035195B2 (en) * 2008-09-25 2012-09-26 Kddi株式会社 Image generating apparatus and program
US8698799B2 (en) * 2009-01-20 2014-04-15 Adobe Systems Incorporated Method and apparatus for rendering graphics using soft occlusion
US8701167B2 (en) * 2009-05-28 2014-04-15 Kjaya, Llc Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
US8391603B2 (en) * 2009-06-18 2013-03-05 Omisa Inc. System and method for image segmentation
JP5812599B2 (en) * 2010-02-25 2015-11-17 Canon Inc. Information processing method and apparatus
JP5645079B2 (en) * 2011-03-31 2014-12-24 Sony Corporation Image processing apparatus and method, program, and recording medium
US8498448B2 (en) * 2011-07-15 2013-07-30 International Business Machines Corporation Multi-view object detection using appearance model transfer from similar scenes
US9275078B2 (en) * 2013-09-05 2016-03-01 Ebay Inc. Estimating depth from a single image
US9202144B2 (en) * 2013-10-30 2015-12-01 Nec Laboratories America, Inc. Regionlets with shift invariant neural patterns for object detection
US9400925B2 (en) * 2013-11-15 2016-07-26 Facebook, Inc. Pose-aligned networks for deep attribute modeling
US20150324690A1 (en) * 2014-05-08 2015-11-12 Microsoft Corporation Deep Learning Training System
WO2015180101A1 (en) * 2014-05-29 2015-12-03 Beijing Kuangshi Technology Co., Ltd. Compact face representation
US9536293B2 (en) * 2014-07-30 2017-01-03 Adobe Systems Incorporated Image assessment using deep convolutional neural networks
US9756375B2 (en) * 2015-01-22 2017-09-05 Microsoft Technology Licensing, Llc Predictive server-side rendering of scenes
US9633306B2 (en) * 2015-05-07 2017-04-25 Siemens Healthcare Gmbh Method and system for approximating deep neural networks for anatomical object detection

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10902314B2 (en) 2018-09-19 2021-01-26 Industrial Technology Research Institute Neural network-based classification method and classification device thereof

Also Published As

Publication number Publication date
US20160335795A1 (en) 2016-11-17
CN107438866B (en) 2020-12-01
KR20170120639A (en) 2017-10-31
JP2018514031A (en) 2018-05-31
KR102047031B1 (en) 2019-11-20
WO2016183464A1 (en) 2016-11-17
JP6663926B2 (en) 2020-03-13
US9916679B2 (en) 2018-03-13
CN107438866A (en) 2017-12-05

Similar Documents

Publication Publication Date Title
US9916679B2 (en) Deepstereo: learning to predict new views from real world imagery
US20200320777A1 (en) Neural rerendering from 3d models
US8467628B2 (en) Method and system for fast dense stereoscopic ranging
US8929645B2 (en) Method and system for fast dense stereoscopic ranging
WO2015139574A1 (en) Static object reconstruction method and system
US11823322B2 (en) Utilizing voxel feature transformations for view synthesis
US20220301252A1 (en) View synthesis of a dynamic scene
US11049288B2 (en) Cross-device supervisory computer vision system
WO2023015409A1 (en) Object pose detection method and apparatus, computer device, and storage medium
US20210150799A1 (en) Generating Environmental Data
CN116563493A (en) Model training method based on three-dimensional reconstruction, three-dimensional reconstruction method and device
WO2022000469A1 (en) Method and apparatus for 3d object detection and segmentation based on stereo vision
US11272164B1 (en) Data synthesis using three-dimensional modeling
CN114937073A (en) Image processing method of multi-view three-dimensional reconstruction network model MA-MVSNet based on multi-resolution adaptivity
CN113963117A (en) Multi-view three-dimensional reconstruction method and device based on variable convolution depth network
CN116097307A (en) Image processing method and related equipment
US11887241B2 (en) Learning 2D texture mapping in volumetric neural rendering
US20230145498A1 (en) Image reprojection and multi-image inpainting based on geometric depth parameters
CN117058380B (en) Multi-scale lightweight three-dimensional point cloud segmentation method and device based on self-attention
CN116091703A (en) Real-time three-dimensional reconstruction method based on multi-view stereo matching
Lazorenko Synthesizing novel views for Street View experience
Zhou et al. Sparse Depth Completion with Semantic Mesh Deformation Optimization
Hu et al. Adaptive region aggregation for multi-view stereo matching using deformable convolutional networks
CN117392228A (en) Visual mileage calculation method and device, electronic equipment and storage medium
CN115797674A (en) Fast stereo matching algorithm for self-adaptive iterative residual optimization

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20170919

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20200914

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS