WO2023086198A1 - Robustifying nerf model novel view synthesis to sparse data - Google Patents


Info

Publication number
WO2023086198A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural
model
renderings
images
image
Prior art date
Application number
PCT/US2022/047539
Other languages
French (fr)
Inventor
Noha Radwan
Michael Niemeyer
Seyed Mohammad Mehdi Sajjadi
Jonathan Tilton Barron
Benjamin Joseph MILDENHALL
Original Assignee
Google Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google Llc filed Critical Google Llc
Priority to EP22812912.8A priority Critical patent/EP4392935A1/en
Priority to CN202280075411.XA priority patent/CN118251698A/en
Publication of WO2023086198A1 publication Critical patent/WO2023086198A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation

Definitions

  • the present disclosure relates generally to training a neural radiance field model by utilizing image patches and a flow model. More particularly, the present disclosure relates to segmenting training images into patches and comparing generated patch renderings in order to train the neural radiance field model.
  • Neural Radiance Fields have emerged as a powerful representation for the task of novel-view synthesis due to their simplicity and state-of-the-art performance. While allowing for photorealistic renderings of unseen viewpoints when many input views are available, the performance drops significantly when only sparse inputs are available. Such a multitude of images may however not always be feasible or easily obtainable for applications such as AR/VR, autonomous driving, and robotics.
  • While NeRF performs well for dense inputs, the performance of NeRF models can drop significantly for sparse inputs, thereby limiting NeRF model applications for areas where obtaining dense input data is challenging (e.g., robotic applications and Streetview where the scene changes frequently between captures).
  • the system can include one or more processors and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations.
  • the operations can include obtaining an image dataset.
  • the image dataset can include a plurality of images and a plurality of respective three-dimensional locations, and the plurality of images can be descriptive of a scene.
  • the operations can include generating one or more ground truth patches based on the plurality of images. Each ground truth patch can include a proper subset of one of the plurality of images.
  • the operations can include processing one or more of the plurality of three-dimensional locations of the image dataset with a neural radiance field model to generate one or more view synthesis renderings.
  • the one or more view synthesis renderings can be descriptive of different views of the scene.
  • the operations can include evaluating a loss function that evaluates a difference between the one or more view synthesis renderings and the one or more ground truth patches and adjusting one or more parameters of the neural radiance field model based at least in part on the loss function.
  • generating one or more ground truth patches can include processing an image of the plurality of images to determine a portion of the image descriptive of an object in the scene and generating the one or more ground truth patches by segmenting the portion of the image descriptive of the object.
  • the operations can include obtaining an input dataset.
  • the input dataset can include one or more respective input locations.
  • the operations can include processing the input dataset with the neural radiance field model to generate a novel view rendering and providing the novel view rendering for display.
  • the plurality of images can be descriptive of one or more input views of the scene.
  • the novel view rendering can be descriptive of one or more output views.
  • the one or more output views can differ from the one or more input views.
  • the one or more view synthesis renderings can include one or more predicted patches.
  • the loss function can include at least one of a perceptual loss or a discriminator loss.
  • adjusting the one or more parameters of the neural radiance field model can include adjusting the one or more parameters to maximize a log likelihood of output renderings from the neural radiance field model.
  • the operations can include processing the one or more view synthesis renderings with a discriminator model to generate a discriminator output and adjusting the one or more parameters of the neural radiance field model based on the discriminator output.
  • the discriminator model can include a convolutional discriminator.
  • the discriminator model can be part of a generative adversarial network.
  • the loss function can include an adversarial loss.
  • the method can include obtaining, by a computing system including one or more processors, a training dataset.
  • the training dataset can include a plurality of images and a plurality of respective three-dimensional locations, and the plurality of images can depict a scene.
  • the method can include generating, by the computing system, a plurality of image patches based on the plurality of images.
  • each image patch can include a proper subset of one of the plurality of images.
  • the method can include processing, by the computing system, the plurality of respective three-dimensional locations with a neural radiance field model to generate one or more patch renderings.
  • the one or more patch renderings can be descriptive of views of the scene.
  • the method can include obtaining, by the computing system, a flow model.
  • the flow model can include a pre-trained model trained on a flow training dataset.
  • the method can include processing, by the computing system, the one or more patch renderings with the flow model to generate a flow output and adjusting, by the computing system, one or more parameters of the neural radiance field model based at least in part on the flow output.
  • the method can include evaluating, by the computing system, a ground truth loss function that evaluates a difference between the one or more patch renderings and one or more of the plurality of image patches and adjusting, by the computing system, one or more parameters of the neural radiance field model based at least in part on the ground truth loss function.
  • the method can include storing, by the computing system, the plurality of image patches in a database.
  • the one or more patch renderings can include one or more color predictions and one or more depth predictions.
  • the flow output can include a geometry regularization.
  • Another example aspect of the present disclosure is directed to one or more non-transitory computer-readable media that collectively store instructions that, when executed by one or more computing devices, cause the one or more computing devices to perform operations.
  • the operations can include obtaining an input dataset.
  • the input dataset can include one or more locations, and the one or more locations can be descriptive of a position in an environment.
  • the operations can include processing the input dataset with a neural radiance field model to generate one or more novel view renderings.
  • the novel view rendering can include a view of at least a portion of the environment.
  • the neural radiance field model may have been trained by comparing patches from a training dataset to generated predicted view renderings, and the patches can be generated by segmenting one or more training images.
  • the operations can include providing the one or more novel view renderings for display.
  • the neural radiance field model can include a first model configured to process an input dataset and a second model configured to process a neural radiance field output generated by a neural radiance field model. Processing the image dataset with the neural radiance field model can include processing the input dataset with the first model to generate neural radiance field data. In some implementations, processing the image dataset with the neural radiance field model can include processing the neural radiance field data with the second model to generate the one or more novel view renderings.
  • the input dataset can include one or more view directions.
  • Figure 1A depicts a block diagram of an example computing system that performs neural radiance field model training and inference according to example embodiments of the present disclosure.
  • Figure 1B depicts a block diagram of an example computing device that performs neural radiance field model training and inference according to example embodiments of the present disclosure.
  • Figure 1C depicts a block diagram of an example computing device that performs neural radiance field model training and inference according to example embodiments of the present disclosure.
  • Figure 2 depicts a block diagram of an example neural radiance field model training system according to example embodiments of the present disclosure.
  • Figure 3 depicts a block diagram of an example training system with a flow model according to example embodiments of the present disclosure.
  • Figure 4 depicts a block diagram of an example training system according to example embodiments of the present disclosure.
  • Figure 5 depicts a block diagram of an example training system with a discriminator model according to example embodiments of the present disclosure.
  • Figure 6 depicts a flow chart diagram of an example method to perform neural radiance field model training according to example embodiments of the present disclosure.
  • Figure 7 depicts a flow chart diagram of an example method to perform neural radiance field model training with a flow model according to example embodiments of the present disclosure.
  • Figure 8 depicts a flow chart diagram of an example method to perform novel view rendering generation according to example embodiments of the present disclosure.
  • Figure 9 depicts a flow chart diagram of an example method to perform neural radiance field model training with a flow model according to example embodiments of the present disclosure.
  • the present disclosure is directed to systems and methods for training a neural radiance field model by utilizing geometric priors.
  • the systems and methods can include obtaining an image dataset.
  • the image dataset can include a plurality of images and a plurality of respective three-dimensional locations.
  • the plurality of images may depict a scene.
  • One or more ground truth patches can be generated based on the plurality of images.
  • Each ground truth patch can include a proper subset of one of the plurality of images.
  • One or more of the plurality of three-dimensional locations of the image dataset can be processed with a neural radiance field model to generate one or more view synthesis renderings.
  • the one or more view synthesis renderings can be descriptive of different views of the object.
  • a loss function can be evaluated based on a difference between the one or more view synthesis renderings and the one or more ground truth patches.
  • One or more parameters of the neural radiance field model can then be adjusted based at least in part on the loss function.
  • Training the one or more neural radiance field models can further include the use of a flow model.
  • training the one or more neural radiance field models can include obtaining a training dataset.
  • the training dataset can include a plurality of images and a plurality of respective three-dimensional locations, and the plurality of images can depict a scene.
  • a plurality of image patches can be generated based on the plurality of images.
  • Each image patch may include a proper subset of one of the plurality of images.
  • the plurality of respective three-dimensional locations can be processed with a neural radiance field model to generate one or more patch renderings.
  • the one or more patch renderings can be descriptive of views of the scene differing from views depicted in the plurality of images.
  • a flow model can be trained based at least in part on the plurality of image patches.
  • the one or more patch renderings can be processed with the flow model to generate a flow output.
  • the systems and methods can include adjusting one or more parameters of the neural radiance field model based at least in part on the flow output.
  • the trained neural radiance field model can then be utilized for model inference to generate novel view renderings.
  • the systems and methods for model inference can include obtaining an input dataset.
  • the input dataset can include one or more locations.
  • the one or more locations can be descriptive of a position in an environment.
  • the systems and methods can include processing the input dataset with a neural radiance field model to generate one or more novel view renderings.
  • the one or more novel view renderings can include a view of at least a portion of the environment.
  • the neural radiance field model can be trained by comparing patches from a training dataset to generated predicted view renderings. The patches may be generated by segmenting one or more training images.
  • the one or more novel view renderings can then be provided for display.
  • the systems and methods disclosed herein can be utilized for robotic applications and autonomous vehicles where the amount of training data for each scene is limited to the data captured by the robot in real-time and frequently changes with the environment. Similarly, the systems and methods can be beneficial for augmented-reality (AR) and/or virtual-reality (VR) scenarios where the user data is limited to that captured by the device.
  • the systems and methods disclosed herein can train a neural radiance field model on a sparse amount of data (e.g., four to nine images). In particular, the systems and methods can break up the training images into patches and train on the patches (e.g., the renderings can be patch renderings that can be compared against patches of a ground truth image). The patches can provide more geometric awareness and detail to the model.
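Rendering a predicted patch that can be compared against a ground truth patch typically involves casting one ray per pixel of the selected patch. The following is a minimal sketch of that step, assuming a pinhole camera with intrinsics (fx, fy, cx, cy) and a camera-to-world pose matrix; the function and parameter names are illustrative rather than part of the disclosure.

```python
import torch

def patch_rays(pose_c2w, fx, fy, cx, cy, top, left, size=16):
    """Build one camera ray per pixel of a size x size image patch.

    pose_c2w: (4, 4) camera-to-world matrix; (top, left) is the patch corner
    in pixel coordinates. Returns ray origins and directions, each (size, size, 3).
    """
    ys, xs = torch.meshgrid(
        torch.arange(top, top + size, dtype=torch.float32),
        torch.arange(left, left + size, dtype=torch.float32),
        indexing="ij",
    )
    # Pinhole model: map each pixel to a direction in camera coordinates
    # (looking down the -z axis, a common NeRF convention).
    dirs = torch.stack([(xs - cx) / fx, -(ys - cy) / fy, -torch.ones_like(xs)], dim=-1)
    rays_d = dirs @ pose_c2w[:3, :3].T          # rotate into world coordinates
    rays_o = pose_c2w[:3, 3].expand_as(rays_d)  # camera center as ray origin
    return rays_o, rays_d
```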
  • the training may involve focusing on one sector of a training image or a training dataset and building out in order to reduce variance.
  • the systems and methods disclosed herein can involve training with ground truth patch training and normalization utilizing a flow model (e.g., a normalizing flow model which can be trained on the training dataset or a different flow training dataset separate from the scene-specific training dataset).
  • the systems and methods can utilize the ground truth patch training to train a neural radiance field model with sparse training inputs.
  • the systems and methods can utilize the flow model for mitigating artifacts in outputs.
  • the systems and methods can include obtaining an image dataset.
  • the image dataset can include one or more images and one or more respective three-dimensional locations, and the one or more images can be descriptive of a scene.
  • ground truth patches can be generated based on the one or more images.
  • each ground truth patch can include a proper subset of one of the one or more images.
  • generating one or more ground truth patches can include processing an image of the plurality of images to determine a portion of the image descriptive of an object in the scene and generating the one or more ground truth patches by segmenting the portion of the image descriptive of the object.
  • the one or more three-dimensional locations of the image dataset can be processed with a neural radiance field model to generate one or more view synthesis renderings.
  • the one or more view synthesis renderings can be descriptive of different views of the scene.
  • the one or more images can be descriptive of one or more input views of the scene, and the one or more view synthesis renderings can be descriptive of an output view.
  • the output view can differ from the one or more input views.
  • the one or more view synthesis renderings can include one or more predicted patches.
  • the systems and methods for training can include evaluating a loss function that evaluates a difference between the one or more view synthesis renderings and the one or more ground truth patches.
  • the loss function can include a perceptual loss, an adversarial loss, and/or a discriminator loss.
  • the systems and methods can include adjusting one or more parameters of the neural radiance field model based at least in part on the loss function.
  • Adjusting the one or more parameters of the neural radiance field model can include adjusting the one or more parameters to maximize a log likelihood of output renderings from the neural radiance field model.
  • the systems and methods can include obtaining an input dataset.
  • the input dataset can include one or more respective input locations.
  • the input dataset can be processed with the neural radiance field model to generate a novel view rendering, and the novel view rendering can be provided for display.
  • the one or more images may be descriptive of one or more input views of the scene, and the novel view rendering can be descriptive of one or more output views.
  • the one or more output views can differ from the one or more input views.
  • the systems and methods can include processing the one or more view synthesis renderings with a discriminator model to generate a discriminator output and adjusting the one or more parameters of the neural radiance field model based on the discriminator output.
  • the discriminator model can include a convolutional discriminator, and the discriminator model can be part of a generative adversarial network.
  • the training may be completed on a pixel-by-pixel basis.
  • the systems and methods can sample points throughout the scene for pixel-by-pixel analysis.
  • the neural radiance field model can be trained using a flow model.
  • the systems and methods can include obtaining a training dataset.
  • the training dataset can include one or more images and one or more respective three-dimensional locations, and the one or more images can depict a scene.
  • One or more image patches can be generated based on the one or more images.
  • Each image patch can include a proper subset of one of the plurality of images.
  • the image patches can include sixteen pixel by sixteen pixel patches or can be a variety of other sizes.
  • the patches can be generated by equally splitting an image into pieces. Alternatively and/or additionally, the patches may be generated through randomly sampling portions of the images.
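As a concrete illustration of the two strategies above, the following sketch splits an image into a non-overlapping grid of patches or samples a single patch at a random position; the 16-pixel default mirrors the patch size mentioned above, and all names are illustrative.

```python
import torch

def grid_patches(image, patch=16):
    """Split an (H, W, 3) image tensor into non-overlapping patch x patch pieces."""
    h, w, _ = image.shape
    return [image[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

def random_patch(image, patch=16):
    """Sample a single patch x patch crop at a uniformly random position."""
    h, w, _ = image.shape
    top = torch.randint(0, h - patch + 1, (1,)).item()
    left = torch.randint(0, w - patch + 1, (1,)).item()
    return image[top:top + patch, left:left + patch]
```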
  • the one or more respective three-dimensional locations can be processed with a neural radiance field model to generate one or more patch renderings.
  • the one or more patch renderings can be descriptive of views of the scene differing from views depicted in the plurality of images.
  • the one or more patch renderings can include one or more color predictions (e.g., a red-green-blue value prediction) and one or more depth predictions (e.g., a volume density prediction).
  • the depth rendering can utilize a prior to remove or penalize spikes in depths.
  • a flow model can be trained based at least in part on the one or more image patches.
  • the flow model can be a normalizing flow model trained on a patch database.
  • the patch database can include images descriptive of a variety of different scenes.
  • the flow model may be a general model that can be used for a variety of scenes, objects, etc.
  • the flow model can be trained to be aware of geometry and color transitions.
  • the flow model can be a pretrained model obtained from a server computing system.
  • the flow model may include a pretrained model trained on an image dataset that includes a plurality of images associated with a plurality of different scenes.
  • the pre-trained flow model can be trained on full images and may be trained such that a singular model can be utilized to train a plurality of neural radiance field models being trained on a plurality of different respective scenes.
  • the one or more patch renderings can be processed with the flow model to generate a flow output.
  • the flow output can include a geometry regularization and/or a color normalization.
  • the systems and methods can include adjusting one or more parameters of the neural radiance field model based at least in part on the flow output.
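One hedged way to turn the flow output into a color regularization is to minimize the negative log-likelihood of each rendered patch under the pretrained flow model, which is equivalent to maximizing its log likelihood. The `log_prob` interface below is an assumption about the flow implementation, not part of the disclosure.

```python
import torch

def color_regularization(flow_model, rendered_patch):
    """Negative log-likelihood of a rendered RGB patch under a pretrained flow.

    rendered_patch: (P, P, 3) colors predicted by the neural radiance field model.
    flow_model is assumed to expose log_prob(x) over flattened patches, as many
    normalizing-flow libraries do; minimizing this term maximizes the patch's
    log likelihood under the flow.
    """
    x = rendered_patch.reshape(1, -1)      # (1, P * P * 3)
    return -flow_model.log_prob(x).mean()
```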
  • the systems and methods can include evaluating a ground truth loss function that evaluates a difference between the one or more patch renderings and one or more of the plurality of image patches and adjusting one or more parameters of the neural radiance field model based at least in part on the ground truth loss function.
  • the systems and methods can include storing the plurality of image patches in a database.
  • the database can be utilized by a future user or the same user in order to train a different neural radiance field model or a different flow model.
  • the trained neural radiance field model can then be utilized to generate novel view renderings.
  • the systems and methods can include obtaining an input dataset.
  • the input dataset can include one or more locations, and the one or more locations can be descriptive of a position in an environment.
  • the input dataset can include one or more view directions.
  • the input dataset can be processed with a neural radiance field model to generate one or more novel view renderings.
  • the novel view rendering can include a view of at least a portion of the environment.
  • the neural radiance field model may have been trained by comparing patches from a training dataset to generated predicted view renderings.
  • the patches can be generated by segmenting one or more training images.
  • the neural radiance field model can include a first model configured to process an input dataset and a second model configured to process a neural radiance field output generated by a neural radiance field model.
  • Processing the image dataset with the neural radiance field model can include processing the input dataset with the first model to generate neural radiance field data.
  • processing the image dataset with the neural radiance field model can include processing the neural radiance field data with the second model to generate the one or more novel view renderings.
  • the one or more novel view renderings can then be provided for display.
  • the one or more novel view renderings can be of a same or comparable size to a training image size. Additionally and/or alternatively, the one or more novel view renderings can be descriptive of a view of a scene that differs from a view included in the training dataset.
  • the systems and methods of the present disclosure provide a number of technical effects and benefits.
  • the system and methods can train a neural radiance field model for generating a view synthesis rendering. More specifically, the systems and methods can utilize ground truth image patches in order to train the neural radiance field model with sparse inputs.
  • the systems and methods can include generating a plurality of patches for an image, which can then be compared against patch renderings in order to train a neural radiance field model.
  • the training can include processing the patch renderings with a flow model to generate a flow output which can include a distribution that can be backpropagated to the neural radiance field model to train the model.
  • the flow model can aid in providing more realistic geometry and color.
  • Another example technical effect and benefit relates to improved computational efficiency and improvements in the functioning of a computing system.
  • the systems and methods disclosed herein can leverage patches and a flow model in order to train a neural radiance field model with a small amount of training data (e.g., four images).
  • the systems and methods disclosed herein can be applicable to train a model for realistic and informed novel view synthesis rendering with a small amount of training data.
  • Figure 1A depicts a block diagram of an example computing system 100 that performs view synthesis rendering according to example embodiments of the present disclosure.
  • the system 100 includes a user computing device 102, a server computing system 130, and a training computing system 150 that are communicatively coupled over a network 180.
  • the user computing device 102 can be any type of computing device, such as, for example, a personal computing device (e.g., laptop or desktop), a mobile computing device (e.g., smartphone or tablet), a gaming console or controller, a wearable computing device, an embedded computing device, or any other type of computing device.
  • the user computing device 102 includes one or more processors 112 and a memory 114.
  • the one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected.
  • the memory 114 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof.
  • the memory 114 can store data 116 and instructions 118 which are executed by the processor 112 to cause the user computing device 102 to perform operations.
  • the user computing device 102 can store or include one or more neural radiance field models 120.
  • the neural radiance field models 120 can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models.
  • Neural networks can include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks or other forms of neural networks.
  • Example neural radiance field models 120 are discussed with reference to Figures 2 - 5.
  • the one or more neural radiance field models 120 can be received from the server computing system 130 over network 180, stored in the user computing device memory 114, and then used or otherwise implemented by the one or more processors 112.
  • the user computing device 102 can implement multiple parallel instances of a single neural radiance field model 120 (e.g., to perform parallel view rendering across multiple instances of positions and/or view directions).
  • the one or more neural radiance field models can process an input dataset to generate a view rendering.
  • the input dataset may include one or more three-dimensional positions and one or more two-dimensional view directions.
  • the neural radiance field model can process the position, or location, in an observation space, and can map the position and direction to a color prediction and a volume density prediction, which can then be utilized to generate the view rendering.
  • the view rendering may be a novel view rendering depicting a predicted image of a view not depicted in the training dataset.
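The color and volume density predictions described above can be composited along each ray to produce the rendered pixel values. The sketch below applies the standard volume-rendering quadrature used by NeRF-style models; the exact discretization shown is a common convention rather than a requirement of the disclosure.

```python
import torch

def composite(colors, sigmas, deltas):
    """Composite per-sample colors and densities along one ray into a pixel value.

    colors: (S, 3) RGB predictions, sigmas: (S,) volume densities, deltas: (S,)
    distances between consecutive samples along the ray.
    """
    alphas = 1.0 - torch.exp(-sigmas * deltas)            # per-sample opacity
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)    # accumulated transparency
    trans = torch.cat([torch.ones(1), trans[:-1]])        # transmittance before each sample
    weights = alphas * trans
    rgb = (weights[:, None] * colors).sum(dim=0)          # rendered color
    depth = (weights * torch.cumsum(deltas, dim=0)).sum() # approximate expected depth
    return rgb, depth, weights
```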
  • one or more neural radiance field models 140 can be included in or otherwise stored and implemented by the server computing system 130 that communicates with the user computing device 102 according to a client-server relationship.
  • the neural radiance field models 140 can be implemented by the server computing system 130 as a portion of a web service (e.g., a view rendering service).
  • one or more models 120 can be stored and implemented at the user computing device 102 and/or one or more models 140 can be stored and implemented at the server computing system 130.
  • the user computing device 102 can also include one or more user input component 122 that receives user input.
  • the user input component 122 can be a touch-sensitive component (e.g., a touch-sensitive display screen or a touch pad) that is sensitive to the touch of a user input object (e.g., a finger or a stylus).
  • the touch-sensitive component can serve to implement a virtual keyboard.
  • Other example user input components include a microphone, a traditional keyboard, or other means by which a user can provide user input.
  • the server computing system 130 includes one or more processors 132 and a memory 134.
  • the one or more processors 132 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected.
  • the memory 134 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof.
  • the memory 134 can store data 136 and instructions 138 which are executed by the processor 132 to cause the server computing system 130 to perform operations.
  • the server computing system 130 includes or is otherwise implemented by one or more server computing devices. In instances in which the server computing system 130 includes plural server computing devices, such server computing devices can operate according to sequential computing architectures, parallel computing architectures, or some combination thereof.
  • the server computing system 130 can store or otherwise include one or more machine-learned neural radiance field models 140.
  • the models 140 can be or can otherwise include various machine-learned models.
  • Example machine-learned models include neural networks or other multi-layer non-linear models.
  • Example neural networks include feed forward neural networks, deep neural networks, recurrent neural networks, and convolutional neural networks.
  • Example models 140 are discussed with reference to Figures 2 - 5.
  • the user computing device 102 and/or the server computing system 130 can train the models 120 and/or 140 via interaction with the training computing system 150 that is communicatively coupled over the network 180.
  • the training computing system 150 can be separate from the server computing system 130 or can be a portion of the server computing system 130.
  • the training computing system 150 includes one or more processors 152 and a memory 154.
  • the one or more processors 152 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected.
  • the memory 154 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof.
  • the memory 154 can store data 156 and instructions 158 which are executed by the processor 152 to cause the training computing system 150 to perform operations.
  • the training computing system 150 includes or is otherwise implemented by one or more server computing devices.
  • the training computing system 150 can include a model trainer 160 that trains the machine-learned models 120 and/or 140 stored at the user computing device 102 and/or the server computing system 130 using various training or learning techniques, such as, for example, backwards propagation of errors.
  • a loss function can be backpropagated through the model(s) to update one or more parameters of the model(s) (e.g., based on a gradient of the loss function).
  • Various loss functions can be used such as mean squared error, likelihood loss, cross entropy loss, hinge loss, and/or various other loss functions.
  • Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations.
  • performing backwards propagation of errors can include performing truncated backpropagation through time.
  • the model trainer 160 can perform a number of generalization techniques (e.g., weight decays, dropouts, etc.) to improve the generalization capability of the models being trained.
  • the model trainer 160 can train the neural radiance field models 120 and/or 140 based on a set of training data 162.
  • the training data 162 can include, for example, a plurality of training images, a plurality of respective three-dimensional locations, and/or one or more two-dimensional view directions.
  • the plurality of training images may be deconstructed, or segmented, into a plurality of ground truth patches for each respective training image.
  • the training examples can be provided by the user computing device 102.
  • the model 120 provided to the user computing device 102 can be trained by the training computing system 150 on user-specific data received from the user computing device 102. In some instances, this process can be referred to as personalizing the model.
  • the model trainer 160 includes computer logic utilized to provide desired functionality.
  • the model trainer 160 can be implemented in hardware, firmware, and/or software controlling a general purpose processor.
  • the model trainer 160 includes program files stored on a storage device, loaded into a memory and executed by one or more processors.
  • the model trainer 160 includes one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM hard disk or optical or magnetic media.
  • the network 180 can be any type of communications network, such as a local area network (e.g., intranet), wide area network (e.g., Internet), or some combination thereof and can include any number of wired or wireless links.
  • communication over the network 180 can be carried via any type of wired and/or wireless connection, using a wide variety of communication protocols (e.g., TCP/IP, HTTP, SMTP, FTP), encodings or formats (e.g., HTML, XML), and/or protection schemes (e.g., VPN, secure HTTP, SSL).
  • the machine-learned models described in this specification may be used in a variety of tasks, applications, and/or use cases.
  • the input to the machine-learned model(s) of the present disclosure can be image data.
  • the machine-learned model(s) can process the image data to generate an output.
  • the machine-learned model(s) can process the image data to generate an image recognition output (e.g., a recognition of the image data, a latent embedding of the image data, an encoded representation of the image data, a hash of the image data, etc.).
  • the machine-learned model(s) can process the image data to generate an image segmentation output.
  • the machine-learned model(s) can process the image data to generate an image classification output.
  • the machine-learned model(s) can process the image data to generate an image data modification output (e.g., an alteration of the image data, etc.).
  • the machine-learned model(s) can process the image data to generate an encoded image data output (e.g., an encoded and/or compressed representation of the image data, etc.).
  • the machine-learned model(s) can process the image data to generate an upscaled image data output.
  • the machine-learned model(s) can process the image data to generate a prediction output.
  • the input to the machine-learned model(s) of the present disclosure can be latent encoding data (e.g., a latent space representation of an input, etc.).
  • the machine-learned model(s) can process the latent encoding data to generate an output.
  • the machine-learned model(s) can process the latent encoding data to generate a recognition output.
  • the machine-learned model(s) can process the latent encoding data to generate a reconstruction output.
  • the machine-learned model(s) can process the latent encoding data to generate a search output.
  • the machine-learned model(s) can process the latent encoding data to generate a reclustering output.
  • the machine-learned model(s) can process the latent encoding data to generate a prediction output.
  • the input to the machine-learned model(s) of the present disclosure can be statistical data.
  • the machine-learned model(s) can process the statistical data to generate an output.
  • the machine-learned model(s) can process the statistical data to generate a recognition output.
  • the machine-learned model(s) can process the statistical data to generate a prediction output.
  • the machine-learned model(s) can process the statistical data to generate a classification output.
  • the machine-learned model(s) can process the statistical data to generate a segmentation output.
  • the machine-learned model(s) can process the statistical data to generate a visualization output.
  • the machine-learned model(s) can process the statistical data to generate a diagnostic output.
  • the input to the machine-learned model(s) of the present disclosure can be sensor data.
  • the machine-learned model(s) can process the sensor data to generate an output.
  • the machine-learned model(s) can process the sensor data to generate a recognition output.
  • the machine-learned model(s) can process the sensor data to generate a prediction output.
  • the machine-learned model(s) can process the sensor data to generate a classification output.
  • the machine-learned model(s) can process the sensor data to generate a segmentation output.
  • the machine-learned model(s) can process the sensor data to generate a visualization output.
  • the machine-learned model(s) can process the sensor data to generate a diagnostic output.
  • the machine-learned model(s) can process the sensor data to generate a detection output.
  • the input includes visual data and the task is a computer vision task.
  • the input includes pixel data for one or more images and the task is an image processing task.
  • the image processing task can be image classification, where the output is a set of scores, each score corresponding to a different object class and representing the likelihood that the one or more images depict an object belonging to the object class.
  • the image processing task may be object detection, where the image processing output identifies one or more regions in the one or more images and, for each region, a likelihood that region depicts an object of interest.
  • the image processing task can be image segmentation, where the image processing output defines, for each pixel in the one or more images, a respective likelihood for each category in a predetermined set of categories.
  • the set of categories can be foreground and background.
  • the set of categories can be object classes.
  • the image processing task can be depth estimation, where the image processing output defines, for each pixel in the one or more images, a respective depth value.
  • the image processing task can be motion estimation, where the network input includes multiple images, and the image processing output defines, for each pixel of one of the input images, a motion of the scene depicted at the pixel between the images in the network input.
  • Figure 1A illustrates one example computing system that can be used to implement the present disclosure.
  • the user computing device 102 can include the model trainer 160 and the training dataset 162.
  • the models 120 can be both trained and used locally at the user computing device 102.
  • the user computing device 102 can implement the model trainer 160 to personalize the models 120 based on user-specific data.
  • Figure 1B depicts a block diagram of an example computing device 10 that performs according to example embodiments of the present disclosure.
  • the computing device 10 can be a user computing device or a server computing device.
  • the computing device 10 includes a number of applications (e.g., applications 1 through N). Each application contains its own machine learning library and machine-learned model(s). For example, each application can include a machine-learned model.
  • Example applications include a text messaging application, an email application, a dictation application, a virtual keyboard application, a browser application, etc.
  • each application can communicate with a number of other components of the computing device, such as, for example, one or more sensors, a context manager, a device state component, and/or additional components.
  • each application can communicate with each device component using an API (e.g., a public API).
  • the API used by each application is specific to that application.
  • Figure 1C depicts a block diagram of an example computing device 50 that performs according to example embodiments of the present disclosure.
  • the computing device 50 can be a user computing device or a server computing device.
  • the computing device 50 includes a number of applications (e.g., applications 1 through N). Each application is in communication with a central intelligence layer.
  • Example applications include a text messaging application, an email application, a dictation application, a virtual keyboard application, a browser application, etc.
  • each application can communicate with the central intelligence layer (and model(s) stored therein) using an API (e.g., a common API across all applications).
  • the central intelligence layer includes a number of machine-learned models. For example, as illustrated in Figure 1C, a respective machine-learned model (e.g., a model) can be provided for each application and managed by the central intelligence layer. In other implementations, two or more applications can share a single machine-learned model. For example, in some implementations, the central intelligence layer can provide a single model (e.g., a single model) for all of the applications. In some implementations, the central intelligence layer is included within or otherwise implemented by an operating system of the computing device 50.
  • the central intelligence layer can communicate with a central device data layer.
  • the central device data layer can be a centralized repository of data for the computing device 50. As illustrated in Figure 1C, the central device data layer can communicate with a number of other components of the computing device, such as, for example, one or more sensors, a context manager, a device state component, and/or additional components. In some implementations, the central device data layer can communicate with each device component using an API (e.g., a private API).
  • FIG. 2 depicts a block diagram of an example neural radiance field model training system 200 according to example embodiments of the present disclosure.
  • the neural radiance field model training system 200 is trained to receive a set of input data 202 descriptive of a scene and, as a result of receipt of the input data 202, provide a trained neural radiance field model that is operable to generate view renderings.
  • the neural radiance field model training system 200 can include a patch generation model 204 that is operable to generate a plurality of patches 206 for each respective input view and input image.
  • the input data 202 can include an input dataset that includes a plurality of images and a plurality of locations, or positions.
  • the plurality of images can be descriptive of a scene that is being observed, and the plurality of locations can be descriptive of locations in the observed space.
  • the neural radiance field model may process the plurality of locations and generate a plurality of outputs.
  • the outputs can include predicted view renderings for the respective locations. Additionally and/or alternatively, the outputs can include data descriptive of color distributions and density distributions.
  • the plurality of images of the input data 202 can be processed with the patch generation model 204 to generate a plurality of image patches 206 (e.g., a plurality of ground truth image patches).
  • the image patches 206 can be descriptive of portions of the plurality of images.
  • the patch generation model 204 can extract portions of the images to use for patches.
  • the segmentation can be completed at random (e.g., via a random sampling technique) and/or with a pre-determined sequence.
  • the image patches 206 may include overlapping pixel data or may have no overlapping pixel data.
  • the image patches may be of the same uniform size or may vary in size. For example, each image may be deconstructed into four equal sized patches with no overlapping coverage. Alternatively and/or additionally, a focal point of an image can be determined, and various patches of varying sizes can be generated with that same image with the focal point being a center for each of the patches.
  • the plurality of image patches 206 can then be compared to the plurality of outputs.
  • the outputs can be patch renderings that can be compared against ground truth patches that were generated with the patch generation model 204.
  • the comparison can be completed in order to evaluate a loss function, and a gradient of the loss can be computed.
  • the gradient can then be backpropagated to the neural radiance field model in order to adjust one or more parameters of the neural radiance field model.
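A minimal sketch of that update step follows, assuming the rendered patch was produced by the neural radiance field model within the current autograd graph and that a standard optimizer (e.g., Adam) is used; mean-squared error is shown only as one possible ground-truth loss.

```python
import torch

def patch_training_step(optimizer, rendered_patch, ground_truth_patch):
    """One gradient step on a ground-truth patch loss.

    rendered_patch is assumed to have been produced by the neural radiance field
    model inside the current autograd graph, so the gradient reaches its parameters.
    """
    loss = torch.nn.functional.mse_loss(rendered_patch, ground_truth_patch)
    optimizer.zero_grad()
    loss.backward()    # backpropagate the gradient of the loss
    optimizer.step()   # adjust the model parameters
    return loss.item()
```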
  • Figure 3 depicts a block diagram of an example training system 300 with a flow model according to example embodiments of the present disclosure.
  • the training system 300 is similar to the neural radiance field model training system 200 of Figure 2 except that the training system 300 further includes a flow model 306.
  • the training system 300 can include obtaining a density estimation model (e.g., a normalizing flow model 306) pretrained on a patch database 308.
  • the patch database 308 can be generated based on a plurality of images descriptive of a plurality of different scenes.
  • the patch database 308 can include ground truth image patches generated based on input views of a scene being modeled by the neural radiance field model.
  • the patch database 308 can be replaced or supplemented with an image database (e.g., the JFT dataset).
  • the flow model may be trained to be applied for training a plurality of different neural radiance field models in which each of the plurality of different neural radiance field models may be trained for view synthesis of different respective scenes.
  • FIG. 3 depicts an example training system 300 with a flow model that can be utilized to normalize an output to generate smooth transitions and minimize and/or mitigate artifact generation.
  • the training system 300 can involve generating a database of patches 308 from the available input views 302.
  • the training system 300 can process three-dimensional locations with a neural radiance field model in order to render patches 304 from novel views and maximize the log likelihood 310 of the rendered patch 304 given the database 308 available.
  • the training system 300 can add a TV norm regularization loss on the depth of the rendered patches 304.
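A hedged sketch of the TV norm depth regularization on a rendered patch is shown below; squared neighbor differences are used here, although an absolute-value variant of total variation is also common, and that choice is an assumption rather than something specified above.

```python
import torch

def depth_tv_loss(depth_patch):
    """Total-variation style regularizer that penalizes depth spikes in a patch.

    depth_patch: (P, P) rendered depths for a P x P patch.
    """
    dh = depth_patch[1:, :] - depth_patch[:-1, :]   # vertical neighbor differences
    dw = depth_patch[:, 1:] - depth_patch[:, :-1]   # horizontal neighbor differences
    return (dh ** 2).mean() + (dw ** 2).mean()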
  • FIG. 4 depicts a block diagram of an example training system 400 according to example embodiments of the present disclosure.
  • the example training system 400 can involve obtaining a training dataset 402.
  • the training dataset 402 can be obtained from one or more sensors and may include one or more inputs 404 and one or more ground truth images 410.
  • the one or more inputs 404 can include one or more locations (e.g., one or more three-dimensional positions in an observation space). Additionally and/or alternatively, the one or more inputs 404 can include one or more view directions (e.g., one or more two-dimensional view directions).
  • Each input 404 may be associated with a particular ground truth image 410 (e.g., a location and view direction may be descriptive of the location and view direction of where a ground truth image was captured with an image sensor).
  • the training system 400 can include processing the input 404 with a neural radiance field model 406 to generate one or more rendered patches 408.
  • the rendered patches 408 can then be compared against corresponding ground truth patches generated with the ground truth images 410.
  • the rendered patches 408 and the ground truth patches can be utilized to evaluate a ground truth loss function.
  • the output of the ground truth loss function can then be backpropagated to the neural radiance field model 406.
  • the output can then be used to adjust one or more parameters of the neural radiance field model 406.
  • the rendered patches 408 can also be processed by a normalizing flow model 414 in order to generate a flow output, or flow loss.
  • the flow output can include a gradient descent that is backpropagated to the neural radiance field model and can be used to modify one or more parameters of the neural radiance field model 406.
  • the normalizing flow model 414 can be pretrained on a flow training dataset 412 that differs from the obtained training dataset 402.
  • the normalizing flow model 414 can be configured to smoothen transitions in rendered images and minimize artifact generation.
  • the systems and methods disclosed herein can be utilized to remove floating artifacts in rendering, provide more realistic texture, and retain geometric details. Moreover, the systems and methods can include the addition of depth regularization. For example, for random viewpoints, patch renderings can be rendered, and a regularization (tv norm) can be applied on the depth of the patch.
  • the sampling planes can be annealed (e.g., the sampling planes can be analyzed to approximate a global optimum).
  • instead of sampling evenly within the scene bounding box, the systems and methods can start with a smaller box around the center of an object, sample points within that volume, and gradually increase the size of the box to cover the full scene. Annealing the sampling planes can allow the systems and methods to avoid high density values at the beginning of the rays, which can lead to divergence in the training.
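One simple way to realize such an annealing schedule is to linearly widen the sampled interval around its midpoint over a fixed number of training steps; the linear schedule and the starting fraction below are illustrative assumptions.

```python
def annealed_bounds(near, far, step, anneal_steps, start_fraction=0.5):
    """Grow the sampled [near, far] interval from a small region to the full range.

    The interval starts around the midpoint of the ray bounds and is linearly
    widened over anneal_steps training steps; the linear schedule and the
    start_fraction value are illustrative choices.
    """
    mid = 0.5 * (near + far)
    frac = min(1.0, start_fraction + (1.0 - start_fraction) * step / anneal_steps)
    return mid + (near - mid) * frac, mid + (far - mid) * frac
```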
  • the systems and methods can include geometry regularization (e.g., TV-regularization on rendered depth map patches, which can be quickly annealed from a high value to a lower value), color regularization (which can involve likelihood maximization of rendered 16x16 patches), near/far annealing (which can involve annealing a near/far plane to avoid degenerated solutions), reduced training iterations (e.g., training for only 50k iterations), an increased learning rate (e.g., decaying from 2e-3 to 2e-5), and gradient clipping (e.g., clipping gradients at 0.1 to allow for the higher learning rate); an illustrative configuration is sketched below.
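The following configuration sketch collects the values mentioned above in one place; the field names and dictionary layout are assumptions made for illustration only.

```python
# Illustrative configuration collecting the values mentioned above; the field
# names and structure are assumptions made for illustration, not the disclosure.
config = {
    "patch_size": 16,             # rendered patches used for the regularizers
    "train_iterations": 50_000,   # reduced training iterations
    "lr_init": 2e-3,              # learning rate decayed toward lr_final
    "lr_final": 2e-5,
    "grad_clip": 0.1,             # gradient clipping enabling the higher learning rate
    "depth_tv_anneal": "high -> low",   # geometry regularization weight schedule
    "near_far_annealing": True,   # anneal the near/far planes during training
}
```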
  • Figure 5 depicts a block diagram of an example training system 500 with a discriminator model according to example embodiments of the present disclosure.
  • the training system 500 of Figure 5 is similar to the training system 300 of Figure 3 except the training system 500 of Figure 5 includes a discriminator model 506 in place of a flow model 306.
  • the training system 500 can include obtaining a discriminator model 506 (e.g., a discriminator model of a generative adversarial network) pretrained on a patch database 508.
  • the patch database 508 can be generated based on a plurality of images descriptive of a plurality of different scenes.
  • the patch database 508 can include ground truth image patches generated based on input views of a scene being modeled by the neural radiance field model.
  • the discriminator model 506 can then process rendered patches 504 generated by the neural radiance field model in order to generate a discriminator output 510.
  • the discriminator output 510 can be descriptive of whether the discriminator model 506 classifies the rendered patch 504 as real or fake.
  • the discriminator output 510 can then be utilized to adjust one or more parameters of the neural radiance field model.
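A hedged sketch of how the discriminator output could be turned into a training signal for the neural radiance field model is shown below, using a standard non-saturating adversarial loss; the discriminator is assumed to return a single real/fake logit per patch in channel-first layout, which is an assumption about its interface.

```python
import torch

def adversarial_patch_loss(discriminator, rendered_patch):
    """Non-saturating adversarial loss on a rendered patch.

    rendered_patch: (3, P, P) tensor in the channel-first layout assumed by a
    convolutional discriminator that returns a single real/fake logit per input.
    Minimizing this loss pushes the neural radiance field model toward patches
    the discriminator classifies as real.
    """
    logits = discriminator(rendered_patch.unsqueeze(0))   # (1, 1) real/fake logit
    target = torch.ones_like(logits)                      # "real" label
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, target)
```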
  • Figure 6 depicts a flow chart diagram of an example method to perform according to example embodiments of the present disclosure. Although Figure 6 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of the method 600 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.
  • a computing system can obtain an image dataset.
  • the image dataset can include a plurality of images and a plurality of respective three-dimensional locations, or positions.
  • the plurality of images can be descriptive of different views of a scene.
  • Each of the plurality of images may be associated with one or more of the plurality of respective three-dimensional locations.
  • the image dataset may include a plurality of view directions associated with the plurality of images and the plurality of respective three-dimensional locations.
  • the scene can include one or more features descriptive of one or more objects (e.g., a car in a driveway, a squirrel in a park, or a person in a restaurant).
  • the computing system can generate one or more ground truth patches based on the plurality of images.
  • Each ground truth patch can include a proper subset of one of the plurality of images.
  • Generating the ground truth patches can include segmenting and/or deconstructing one or more of the images to generate a patch for different portions of the image.
  • the patches can be utilized as individual training images or may be utilized as groups to train for particular views individually or in combination.
  • the computing system can process one or more of the three-dimensional locations of the image dataset with a neural radiance field model to generate one or more view synthesis renderings.
  • the one or more view synthesis renderings can be descriptive of different views of the scene.
  • the view synthesis renderings can include view synthesis patch renderings that are of the same or comparable scale to the ground truth patches.
  • the computing system can evaluate a loss function that evaluates a difference between the one or more view synthesis renderings and the one or more ground truth patches.
  • the loss function can include one or more regularization terms.
  • the loss function can include a perceptual loss, a photometric loss, an L2 loss, etc.
  • the computing system can adjust one or more parameters of the neural radiance field model based at least in part on the loss function.
  • one or more generative embeddings may be modified based at least in part on the one or more view synthesis renderings and the one or more ground truth patches.
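  • A minimal sketch of this kind of patch-based loss evaluation and parameter update is given below; the helper names and the choice of a plain L2 (photometric) term are illustrative assumptions.

```python
import torch

def patch_reconstruction_step(nerf_model, optimizer, gt_patches, patch_rays):
    """Hypothetical training step: render patches at the three-dimensional
    locations associated with the ground truth patches and minimize the
    squared color error between the rendered and ground truth patches."""
    rendered = nerf_model(patch_rays)                 # (N, 3, S, S) predicted patches
    loss = ((rendered - gt_patches) ** 2).mean()      # photometric / L2 loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```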
  • Figure 7 depicts a flow chart diagram of an example method to perform according to example embodiments of the present disclosure. Although Figure 7 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of the method 700 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.
  • a computing system can obtain a training dataset.
  • the training dataset can include a plurality of images and a plurality of respective three-dimensional locations.
  • the images can be descriptive of a scene, or an environment.
  • the three-dimensional locations can include positions in a three-dimensional observation space.
  • the images can include red-green-blue images with a plurality of pixels with varying color values.
  • the computing system can generate a plurality of image patches.
  • the plurality of image patches can include a set of image patches for each of the plurality of images, such that each image can be utilized to generate two or more image patches.
  • the images can be segmented into equal portions. Additionally and/or alternatively, portions of the images may be randomly selected for patch generation. In some implementations, each portion of each of the plurality of images may be utilized for image patch generation. Additionally and/or alternatively, the image patches may include different portions of images altogether such that each portion of an image is only utilized for that specific image patch. In some implementations, the image patches can include overlapping data. For example, one or more pixels of an image may be utilized for a plurality of image patches.
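  • The sketch below illustrates two of the patch-generation strategies described above (regular tiling and random, possibly overlapping crops); the 16x16 patch size and sampling counts are illustrative assumptions.

```python
import numpy as np

def tile_patches(image, patch_size=16):
    """Split an (H, W, 3) image into non-overlapping patch_size x patch_size tiles."""
    h, w, _ = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            patches.append(image[i:i + patch_size, j:j + patch_size])
    return np.stack(patches)

def random_patches(image, patch_size=16, num_patches=64, rng=None):
    """Sample possibly overlapping patches at random pixel offsets."""
    rng = rng or np.random.default_rng()
    h, w, _ = image.shape
    patches = []
    for _ in range(num_patches):
        i = rng.integers(0, h - patch_size + 1)
        j = rng.integers(0, w - patch_size + 1)
        patches.append(image[i:i + patch_size, j:j + patch_size])
    return np.stack(patches)
```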
  • the computing system can process the plurality of respective three-dimensional locations with a neural radiance field model to generate one or more patch renderings.
  • the neural radiance field model can include one or more multi-layer perceptrons and may be configured to map three-dimensional positions and/or view directions to one or more color values and one or more volume density values.
  • the one or more patch renderings can be descriptive of views of the scene.
  • the one or more patch renderings can be descriptive of predicted views associated with one or more of the three-dimensional locations.
  • the one or more patch renderings may correspond with one or more ground truth image patches or may depict a different view altogether.
  • the computing system can train a flow model based at least in part on the plurality of image patches.
  • the flow model may be pre-trained with an unrelated dataset.
  • the flow model can be trained on different scenes from the scenes depicted in the ground truth images.
  • the flow model can be pretrained on an image dataset in place of or in complement to the patch dataset.
  • the computing system can process the one or more patch renderings with the flow model to generate a flow output.
  • the flow output can include a gradient descent.
  • the flow output can be descriptive of a color “smoothness” and/or a geometric “smoothness” in the one or more patch renderings.
  • the computing system can adjust one or more parameters of the neural radiance field model based at least in part on the flow output.
  • the parameters may be adjusted based at least in part on the gradient descent via backpropagation.
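  • One possible form of this flow-based regularization step is sketched below; it assumes the flow model exposes a log_prob method, that its parameters are frozen, and that only the neural radiance field parameters are updated.

```python
import torch

def flow_regularization_step(nerf_model, flow_model, optimizer, patch_rays, weight=1e-4):
    """Hypothetical regularization step: maximize the likelihood the pretrained
    flow model assigns to rendered patches by minimizing the negative
    log-likelihood and backpropagating into the NeRF parameters."""
    rendered = nerf_model(patch_rays)                 # (N, 3, S, S) rendered patches
    nll = -flow_model.log_prob(rendered).mean()       # negative log-likelihood
    loss = weight * nll
    optimizer.zero_grad()
    loss.backward()                                   # only NeRF params are in `optimizer`
    optimizer.step()
    return loss.item()
```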
  • the method 700 and the method 600 can be utilized in parallel and/or in series. The methods can be utilized individually and/or in combination.
  • the flow model can be replaced with and/or used with a discriminator model.
  • Figure 8 depicts a flow chart diagram of an example method to perform according to example embodiments of the present disclosure. Although Figure 8 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of the method 800 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.
  • a computing system can obtain an input dataset.
  • the input dataset can include one or more locations.
  • the one or more locations can be locations not included in the training dataset for the neural radiance field model.
  • the one or more locations may be descriptive of a position in an environment.
  • the computing system can process the input dataset with a neural radiance field model to generate one or more novel view renderings.
  • the novel view rendering can include at least a portion of the environment.
  • the novel view rendering can be descriptive of a view that differs from the views depicted in the training dataset for the neural radiance field model.
  • the neural radiance field model may have been trained by comparing patches from a training dataset to generated predicted view renderings, in which the patches may have been generated by segmenting one or more training images.
  • the computing system can provide the one or more novel view renderings for display.
  • the one or more novel view renderings may be sent for visual display and may be displayed on a screen of a user device.
  • the novel view rendering may be stored in a database. Additionally and/or alternatively, the novel view rendering may be utilized for training a neural radiance field model and/or a flow model.
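  • A hypothetical inference loop corresponding to this method is sketched below; the make_rays helper and the use of PIL for saving the renderings are assumptions for illustration.

```python
import numpy as np

def render_novel_views(nerf_model, camera_poses, make_rays):
    """Hypothetical inference loop: render one image per requested camera pose
    and return it as an 8-bit array suitable for display or storage.
    `make_rays` is an assumed helper mapping a pose to a ray bundle."""
    renderings = []
    for pose in camera_poses:
        rays = make_rays(pose)
        rgb = nerf_model(rays)                        # (H, W, 3) floats in [0, 1]
        renderings.append((np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8))
    return renderings

# Example usage: save the renderings for display on a user device.
# from PIL import Image
# for idx, img in enumerate(render_novel_views(model, poses, make_rays)):
#     Image.fromarray(img).save(f"novel_view_{idx}.png")
```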
  • Figure 9 depicts a flow chart diagram of an example method to perform according to example embodiments of the present disclosure. Although Figure 9 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of the method 900 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.
  • a computing system can obtain a training dataset.
  • the training dataset can include a plurality of images and a plurality of respective three-dimensional locations.
  • the images can be descriptive of a scene, or an environment.
  • the three-dimensional locations can include positions in a three-dimensional observation space.
  • the images can include red-green-blue images with a plurality of pixels with varying color values.
  • the computing system can generate a plurality of image patches.
  • the plurality of image patches can include a set of image patches for each of the plurality of images, such that each image can be utilized to generate two or more image patches.
  • the images can be segmented into equal portions. Additionally and/or alternatively, portions of the images may be randomly selected for patch generation. In some implementations, each portion of each of the plurality of images may be utilized for image patch generation. Additionally and/or alternatively, the image patches may include different portions of images altogether such that each portion of an image is only utilized for that specific image patch. In some implementations, the image patches can include overlapping data. For example, one or more pixels of an image may be utilized for a plurality of image patches.
  • the computing system can process the plurality of respective three-dimensional locations with a neural radiance field model to generate one or more patch renderings.
  • the neural radiance field model can include one or more multi-layer perceptrons and may be configured to map three-dimensional positions and/or view directions to one or more color values and one or more volume density values.
  • the one or more patch renderings can be descriptive of views of the scene.
  • the one or more patch renderings can be descriptive of predicted views associated with one or more of the three-dimensional locations.
  • the one or more patch renderings may correspond with one or more ground truth image patches or may depict a different view altogether.
  • the computing system can obtain a flow model in which the flow model includes a pre-trained model.
  • the flow model may be pre-trained on a flow training dataset.
  • the flow training dataset can include a plurality of image datasets descriptive of a plurality of different scenes.
  • the flow training dataset can include a different dataset than the training dataset.
  • the flow model may be utilized to train a plurality of different neural radiance field models being trained on a plurality of different respective scenes.
  • the computing system can process the one or more patch renderings with the flow model to generate a flow output.
  • the flow output can include a gradient descent.
  • the flow output can be descriptive of a color “smoothness” and/or a geometric “smoothness” in the one or more patch renderings.
  • the computing system can adjust one or more parameters of the neural radiance field model based at least in part on the flow output.
  • the parameters may be adjusted based at least in part on the gradient descent via backpropagation.
  • the systems and methods disclosed herein can include a patch-based approach. More specifically, in some implementations, the majority of artifacts may be caused by errors in scene geometry and divergent training start behavior. Therefore, the systems and methods disclosed herein can utilize a normalizing flow model to regularize the color of the reconstructed scene.
  • the novel-view synthesis task of trained neural radiance field models can be used to render unseen viewpoints of a scene for a given set of input images.
  • Neural radiance field models can rely on having a large amount of training data to learn scenes.
  • In real-world applications such as AR/VR, autonomous driving, and robotics, however, the input may be sparse and only a few views may be available.
  • the systems and methods may include a patch-based regularization to the rendered depth maps of unobserved viewpoints.
  • the patch-based regularization can have the effect of reducing floating artifacts and greatly improving the learned scene geometry.
  • the systems and methods can include an annealing strategy for sampling points along the ray, where the systems and methods can first sample scene content within a small range before annealing to the full scenes, thereby preventing divergent behavior at the beginning of training.
  • the systems and methods can include a normalizing flow model to regularize the color prediction of unseen viewpoints by maximizing the log-likelihood of the rendered patches. Therefore, the systems and methods may avoid shifts in color between different views.
  • the optimization procedure for NeRF for sparse inputs can utilize a mip-NeRF system, which can use a multi-scale radiance field-based model to represent scenes.
  • the systems and methods may utilize a patch-based approach to regularize the geometry as well as the color prediction of unseen viewpoints.
  • the approach can provide a simple annealing strategy of the sampled scene space to avoid divergent training start behavior. Additionally and/or alternatively, the systems and methods can use higher learning rates in combination with gradient clipping to further speed up the optimization process.
  • Figure 3 depicts an example overview of one implementation of the method.
  • a neural radiance field can include a continuous function f mapping a three-dimensional location x and a viewing direction d to a volume density σ and a color value c.
  • the neural radiance fields can be parameterized using a multi-layer perceptron (MLP) where the weights of the MLP are optimized for given input images of a scene: (σ, c) = f_θ(γ(x), γ(d)), where θ can indicate the network weights and γ a predefined positional encoding applied elementwise to x and d.
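  • A compact, illustrative parameterization of such a radiance field (positional encoding plus a small MLP mapping encoded position and direction to density and color) might look as follows; the layer sizes and frequency counts are assumptions, not the configuration used in the described system.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=10):
    """gamma(x): elementwise [x, sin(2^k x), cos(2^k x)] features, k = 0..num_freqs-1."""
    feats = [x]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * x))
        feats.append(torch.cos((2.0 ** k) * x))
    return torch.cat(feats, dim=-1)

class TinyRadianceField(nn.Module):
    """Illustrative f_theta: maps an encoded position and direction to (density, color)."""
    def __init__(self, pos_freqs=10, dir_freqs=4, hidden=128):
        super().__init__()
        pos_dim = 3 * (1 + 2 * pos_freqs)
        dir_dim = 3 * (1 + 2 * dir_freqs)
        self.trunk = nn.Sequential(nn.Linear(pos_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma_head = nn.Linear(hidden, 1)
        self.color_head = nn.Sequential(nn.Linear(hidden + dir_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 3), nn.Sigmoid())
        self.pos_freqs, self.dir_freqs = pos_freqs, dir_freqs

    def forward(self, x, d):
        h = self.trunk(positional_encoding(x, self.pos_freqs))
        sigma = torch.relu(self.sigma_head(h))        # non-negative volume density
        color = self.color_head(
            torch.cat([h, positional_encoding(d, self.dir_freqs)], dim=-1))
        return sigma, color
```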
  • the pixel’s predicted color value ĉ(r) can be obtained by compositing the samples along the ray as ĉ(r) = Σ_k T_k (1 − exp(−σ_k δ_k)) c_k, with T_k = exp(−Σ_{l<k} σ_l δ_l), where σ_k and c_k can indicate the density and color prediction of the radiance field at the k-th sample along the ray, δ_k the distance between adjacent samples, and T_k the accumulated transmittance.
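  • The standard volume rendering quadrature implied by the equation above can be sketched as follows; the handling of the final sample interval and the small epsilon term are implementation conveniences.

```python
import torch

def composite_ray_color(sigmas, colors, t_vals):
    """Alpha-composite per-sample predictions along one ray.

    sigmas: (num_samples,) non-negative densities along the ray
    colors: (num_samples, 3) per-sample RGB predictions
    t_vals: (num_samples,) increasing sample distances along the ray
    """
    deltas = t_vals[1:] - t_vals[:-1]
    deltas = torch.cat([deltas, torch.full_like(deltas[:1], 1e10)])   # open last interval
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alphas + 1e-10])[:-1], dim=0)  # T_k
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(dim=0)
    expected_depth = (weights * t_vals).sum()         # reused later for depth smoothness
    return rgb, expected_depth, weights
```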
  • a neural radiance field may be optimized for a set of input images together with their camera poses by minimizing the mean squared error L_MSE = Σ_{r∈R} ‖ĉ(r) − c_GT(r)‖², where R can indicate the set of all rays and c_GT the ground truth color for the pixel.
  • the systems and methods disclosed herein may implement the mip-NeRF representation.
  • the systems and methods may include patch-based regularization.
  • a NeRF model’s performance may drop significantly if the number of input views is sparse.
  • the systems and methods can regularize unseen viewpoints. More specifically, the systems and methods may define a space of unseen but relevant viewpoints and render small patches from these cameras. Using these patches, the key idea may be to regularize the geometry to be smooth as well as the color prediction to have a high likelihood.
  • the systems and methods may first need to define the space of unobserved viewpoints from which the system may sample camera poses. To this end, the systems and methods may make use of a given set of target poses {P_i}.
  • the target poses can be assumed to be given, as they are a prerequisite for the task of novel-view synthesis.
  • the systems and methods may then define the space of all possible camera positions as the bounding box of all given target camera positions, S_t = [t_min, t_max], where t_min and t_max can be the elementwise minimum and maximum values of the target camera positions, respectively.
  • the systems and methods may first define a common “up” direction by taking the mean over the up directions of all target poses. Next, the system may calculate the mean focus point f̄ for all target poses. To learn more robust representations, the system may add some jittering to the focal point before calculating the camera rotation matrix. The system can define the set of all possible camera rotations as S_R = { R(t, f̄ + ε) | t ∈ S_t }, where R(t, f̄ + ε) indicates the resulting camera rotation matrix for a camera located at t and looking at the jittered focus point, and ε a small jitter added to the focus point.
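  • A possible implementation of this pose-sampling strategy (bounding-box position sampling plus a look-at rotation toward a jittered focus point) is sketched below; the axis conventions and jitter scale are assumptions.

```python
import numpy as np

def sample_unobserved_pose(target_positions, target_ups, focus_point,
                           jitter_scale=0.1, rng=None):
    """Sample a camera pose from the bounding box of the target camera positions
    and orient it toward a jittered focus point (look-at construction)."""
    rng = rng or np.random.default_rng()
    t_min, t_max = target_positions.min(axis=0), target_positions.max(axis=0)
    position = rng.uniform(t_min, t_max)                          # inside the bounding box
    up = target_ups.mean(axis=0)
    up = up / np.linalg.norm(up)
    focus = focus_point + rng.normal(scale=jitter_scale, size=3)  # jittered focus point
    forward = focus - position
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    rotation = np.stack([right, true_up, -forward], axis=1)       # world-from-camera rotation
    return rotation, position
```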
  • the systems and methods can regularize unseen viewpoints such that scene geometry and color prediction can include a value of highest probability.
  • Geometry may tend to be smooth in the real world (e.g., flat surfaces can be much more likely than high-frequency variable surfaces). Therefore, the systems and methods can include geometry regularization that may enforce depth smoothness by adding a TV prior on depth map patches from unobserved viewpoints. More specifically, the systems and methods can let S_patch be the patch size of the rendered patches from unobserved viewpoints. The expected depth of a pixel/ray r can be obtained via d̂(r) = Σ_k T_k (1 − exp(−σ_k δ_k)) t_k, where t_k indicates the distance of the k-th sample along the ray.
  • the systems and methods can let d̂_ij indicate the expected depth of the ray / pixel at position (i, j) of the patch.
  • the system can formulate a total variation loss over the rendered depth patch as L_DS = Σ_{i,j=1}^{S_patch − 1} (d̂_ij − d̂_{i+1,j})² + (d̂_ij − d̂_{i,j+1})².
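  • A direct translation of this depth smoothness (total variation) prior into code might look like the following sketch, operating on an expected-depth patch rendered from an unobserved viewpoint.

```python
import torch

def depth_smoothness_loss(depth_patch):
    """Total-variation style smoothness prior on an (S, S) expected-depth patch."""
    d_rows = depth_patch[:-1, :] - depth_patch[1:, :]   # vertical neighbor differences
    d_cols = depth_patch[:, :-1] - depth_patch[:, 1:]   # horizontal neighbor differences
    return (d_rows ** 2).sum() + (d_cols ** 2).sum()
```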
  • the systems and methods can estimate the likelihood of rendered patches and can maximize the estimated likelihood during optimization.
  • the systems and methods may make use of readily-available abundant data of unstructured image collections of two-dimensional images. While datasets of multi-view images with pose information can be expensive to collect, collections of unstructured images may be more easily accessible.
  • the systems and methods may train a normalizing flow model on the JFT dataset. In some implementations, the dataset can include natural images. As a result, the systems and methods can reuse the same flow model for any type of scene being optimized.
  • the systems and methods can use the trained flow model to estimate the log-likelihood (LL) of rendered patches. More specifically, the systems and methods can let φ be the trained flow model.
  • the system can define the color regularization loss as the negative log-likelihood of the rendered patch under the flow model, L_color = −log p(P), where P can indicate the predicted RGB color patch from an unobserved viewpoint and log p(P) the log-likelihood estimated with the trained flow model φ.
  • the systems and methods may observe another failure mode of a mip-NeRF system: divergent training start behavior can lead to high density values at the ray starts. As a result, the input views may be correctly reconstructed, but novel views may be degenerated.
  • the systems and methods can anneal the sampled scene space quickly over the first iterations. More specifically, the system can let n, f be the near and far plane, respectively, and m be the defined scene center (usually the midpoint between n and f). The system can then define the annealed near and far planes as n(i) = m + (n − m)·η(i) and f(i) = m + (f − m)·η(i), with η(i) = min(max(i / N_t, p_start), 1), where i indicates the training iteration, N_t a hyperparameter indicating after which iteration the full range should be reached, and p_start a hyperparameter indicating the fraction of the range to start with (e.g., 0.5).
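  • The annealing schedule above can be sketched as a small helper; the linear interpolation toward the scene center follows the reconstruction of the formula given here and should be treated as illustrative.

```python
def annealed_near_far(step, near, far, mid, max_steps, p_start=0.5):
    """Anneal the sampled ray range from a band around the scene midpoint `mid`
    out to the full [near, far] interval over the first max_steps iterations."""
    eta = min(max(step / max_steps, p_start), 1.0)
    near_i = mid + (near - mid) * eta
    far_i = mid + (far - mid) * eta
    return near_i, far_i
```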
  • the systems and methods may clip the gradients at a maximum value of 0.1 and at a maximum norm of 0.1. Additionally and/or alternatively, the systems and methods may train with the Adam optimizer and a learning rate of 0.002 and may exponentially decay it to 0.00002 over the optimization process.
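  • A sketch of the corresponding optimizer setup (Adam with exponential learning-rate decay, plus gradient clipping by value and by norm) is given below; the PyTorch framing and total step count are assumptions, as the original implementation may differ.

```python
import torch

def build_optimizer(nerf_model, total_steps=50_000, lr_init=2e-3, lr_final=2e-5):
    """Adam with per-step exponential learning-rate decay from lr_init to lr_final."""
    optimizer = torch.optim.Adam(nerf_model.parameters(), lr=lr_init)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(
        optimizer, gamma=(lr_final / lr_init) ** (1.0 / total_steps))
    return optimizer, scheduler

def clipped_step(loss, nerf_model, optimizer, scheduler):
    """Backpropagate, clip gradients by value and by norm, then take one step."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_value_(nerf_model.parameters(), clip_value=0.1)
    torch.nn.utils.clip_grad_norm_(nerf_model.parameters(), max_norm=0.1)
    optimizer.step()
    scheduler.step()
```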
  • the systems and methods for training NeRF models in data-limited regimes can include a method that leverages multi-view consistency constraints for the rendered depth maps to enforce the learning of correct scene geometry. In order to regularize the color predictions, the systems and methods may maximize the log-likelihood of the rendered patches relative to the input views using a normalizing flow model. Additionally and/or alternatively, the systems and methods can include an annealing-based ray sampling strategy to avoid divergent behavior at the beginning of training.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

Systems and methods for training a neural radiance field model can include the use of image patches for ground truth training. For example, the systems and methods can include generating patch renderings with a neural radiance field model, comparing the patch renderings to ground truth patches from ground truth images, and adjusting one or more parameters based on the comparison. Additionally and/or alternatively, the systems and methods can include the utilization of a flow model for mitigating and/or minimizing artifact generation.

Description

ROBUSTIFYING NERF MODEL NOVEL VIEW SYNTHESIS TO SPARSE DATA
RELATED APPLICATIONS
[0001] This application claims priority to and the benefit of U.S. Provisional Patent Application No. 63/279,455, filed November 15, 2021. U.S. Provisional Patent Application No. 63/279,455 is hereby incorporated by reference in its entirety.
FIELD
[0002] The present disclosure relates generally to training a neural radiance field model by utilizing image patches and a flow model. More particularly, the present disclosure relates to segmenting training images into patches and comparing generated patch renderings in order to train the neural radiance field model.
BACKGROUND
[0003] Neural Radiance Fields (NeRFs) have emerged as a powerful representation for the task of novel-view synthesis due to their simplicity and state-of-the-art performance. While allowing for photorealistic renderings of unseen viewpoints when many input views are available, the performance drops significantly when only sparse inputs are available. Such a multitude of images may however not always be feasible or easily obtainable for applications such as AR/VR, autonomous driving, and robotics.
[0004] While NeRF performs well for dense inputs, the performance of NeRF models can drop significantly for sparse inputs, thereby limiting NeRF model applications for areas where obtaining dense input data is challenging (e.g., robotic applications and Streetview where the scene changes frequently between captures).
SUMMARY
[0005] Aspects and advantages of embodiments of the present disclosure will be set forth in part in the following description, or can be learned from the description, or can be learned through practice of the embodiments.
[0006] One example aspect of the present disclosure is directed to a computing system. The system can include one or more processors and one or more non-transitory computer- readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations. The operations can include obtaining an image dataset. In some implementations, the image dataset can include a plurality of images and a plurality of respective three-dimensional locations, and the plurality of images can be descriptive of a scene. The operations can include generating one or more ground truth patches based on the plurality of images. Each ground truth patch can include a proper subset of one of the plurality of images. In some implementations, the operations can include processing one or more of the plurality of three-dimensional locations of the image dataset with a neural radiance field model to generate one or more view synthesis renderings. The one or more view synthesis renderings can be descriptive of different views of the scene. The operations can include evaluating a loss function that evaluates a difference between the one or more view synthesis renderings and the one or more ground truth patches and adjusting one or more parameters of the neural radiance field model based at least in part on the loss function.
[0007] In some implementations, generating one or more ground truth patches can include processing an image of the plurality of images to determine a portion of the image descriptive of an object in the scene and generating the one or more ground truth patches by segmenting the portion of the image descriptive of the object. The operations can include obtaining an input dataset. The input dataset can include one or more respective input locations. The operations can include processing the input dataset with the neural radiance field model to generate a novel view rendering and providing the novel view rendering for display. In some implementations, the plurality of images can be descriptive of one or more input views of the scene. The novel view rendering can be descriptive of one or more output views. The one or more output views can differ from the one or more input views.
[0008] In some implementations, the one or more view synthesis renderings can include one or more predicted patches. The loss function can include at least one of a perceptual loss or a discriminator loss. In some implementations, adjusting the one or more parameters of the neural radiance field model can include adjusting the one or more parameters to maximize a log likelihood of output renderings from the neural radiance field model. The operations can include processing the one or more view synthesis renderings with a discriminator model to generate a discriminator output and adjusting the one or more parameters of the neural radiance field model based on the discriminator output. In some implementations, the discriminator model can include a convolutional discriminator. The discriminator model can be part of a generative adversarial network. In some implementations, the loss function can include an adversarial loss.
[0009] Another example aspect of the present disclosure is directed to a computer- implemented method. The method can include obtaining, by a computing system including one or more processors, a training dataset. The training dataset can include a plurality of images and a plurality of respective three-dimensional locations, and the plurality of images can depict a scene. The method can include generating, by the computing system, a plurality of image patches based on the plurality of images. In some implementations, each image patch can include a proper subset of one of the plurality of images. The method can include processing, by the computing system, the plurality of respective three-dimensional locations with a neural radiance field model to generate one or more patch renderings. In some implementations, the one or more patch renderings can be descriptive of views of the scene. The method can include obtaining, by the computing system, a flow model. The flow model can include a pre-trained model trained on a flow training dataset. The method can include processing, by the computing system, the one or more patch renderings with the flow model to generate a flow output and adjusting, by the computing system, one or more parameters of the neural radiance field model based at least in part on the flow output.
[0010] In some implementations, the method can include evaluating, by the computing system, a ground truth loss function that evaluates a difference between the one or more patch renderings and one or more of the plurality of image patches and adjusting, by the computing system, one or more parameters of the neural radiance field model based at least in part on the ground truth loss function. The method can include storing, by the computing system, the plurality of image patches in a database. In some implementations, the one or more patch renderings can include one or more color predictions and one or more depth predictions. The flow output can include a geometry regularization.
[0011] Another example aspect of the present disclosure is directed to one or more non- transitory computer-readable media that collectively store instructions that, when executed by one or more computing devices, cause the one or more computing devices to perform operations. The operations can include obtaining an input dataset. In some implementations, the input dataset can include one or more locations, and the one or more locations can be descriptive of a position in an environment. The operations can include processing the input dataset with a neural radiance field model to generate one or more novel view renderings. The novel view rendering can include a view of at least a portion of the environment. In some implementations, the neural radiance field model may have been trained by comparing patches from a training dataset to generated predicted view renderings, and the patches can be generated by segmenting one or more training images. The operations can include providing the one or more novel view renderings for display. [0012] In some implementations, the neural radiance field model can include a first model configured to process an input dataset and a second model configured to process a neural radiance field output generated by a neural radiance field model. Processing the image dataset with the neural radiance field model can include processing the input dataset with the first model to generate neural radiance field data. In some implementations, processing the image dataset with the neural radiance field model can include processing the neural radiance field with the second model to generate the one or more novel view renderings. The input dataset can include one or more view directions.
[0013] Other aspects of the present disclosure are directed to various systems, apparatuses, non-transitory computer-readable media, user interfaces, and electronic devices. [0014] These and other features, aspects, and advantages of various embodiments of the present disclosure will become better understood with reference to the following description and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate example embodiments of the present disclosure and, together with the description, serve to explain the related principles.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] Detailed discussion of embodiments directed to one of ordinary skill in the art is set forth in the specification, which makes reference to the appended figures, in which: [0016] Figure 1A depicts a block diagram of an example computing system that performs neural radiance field model training and inference according to example embodiments of the present disclosure.
[0017] Figure 1B depicts a block diagram of an example computing device that performs neural radiance field model training and inference according to example embodiments of the present disclosure.
[0018] Figure 1C depicts a block diagram of an example computing device that performs neural radiance field model training and inference according to example embodiments of the present disclosure.
[0019] Figure 2 depicts a block diagram of an example neural radiance field model training system according to example embodiments of the present disclosure.
[0020] Figure 3 depicts a block diagram of an example training system with a flow model according to example embodiments of the present disclosure.
[0021] Figure 4 depicts a block diagram of an example training system according to example embodiments of the present disclosure. [0022] Figure 5 depicts a block diagram of an example training system with a discriminator model according to example embodiments of the present disclosure. [0023] Figure 6 depicts a flow chart diagram of an example method to perform neural radiance field model training according to example embodiments of the present disclosure. [0024] Figure 7 depicts a flow chart diagram of an example method to perform neural radiance field model training with a flow model according to example embodiments of the present disclosure.
[0025] Figure 8 depicts a flow chart diagram of an example method to perform novel view rendering generation according to example embodiments of the present disclosure. [0026] Figure 9 depicts a flow chart diagram of an example method to perform neural radiance field model training with a flow model according to example embodiments of the present disclosure.
[0027] Reference numerals that are repeated across plural figures are intended to identify the same features in various implementations.
DETAILED DESCRIPTION
Overview
[0028] Generally, the present disclosure is directed to systems and methods for training a neural radiance field model by utilizing geometric priors. For example, the systems and methods can include obtaining an image dataset. In some implementations, the image dataset can include a plurality of images and a plurality of respective three-dimensional locations. The plurality of images may depict a scene. One or more ground truth images can be generated based on the plurality of images. Each ground truth patch can include a proper subset of one of the plurality of images. One or more of the plurality of three-dimensional locations of the image dataset can be processed with a neural radiance field model to generate one or more view synthesis renderings. The one or more view synthesis renderings can be descriptive of different views of the object. A loss function can be evaluated based on a difference between the one or more view synthesis renderings and the one or more ground truth patches. One or more parameters of the neural radiance field model can then be adjusted based at least in part on the loss function.
[0029] Training the one or more neural radiance field models can further include the use of a flow model. For example, training the one or more neural radiance field models can include obtaining a training dataset. The training dataset can include a plurality of images and a plurality of respective three-dimensional locations, and the plurality of images can depict a scene. A plurality of image patches can be generated based on the plurality of images. Each image patch may include a proper subset of one of the plurality of images. The plurality of respective three-dimensional locations can be processed with a neural radiance field model to generate one or more patch renderings. In some implementations, the one or more patch renderings can be descriptive of views of the scene differing from views depicted in the plurality of images. A flow model can be trained based at least in part on the plurality of image patches. The one or more patch renderings can be processed with the flow model to generate a flow output. The systems and methods can include adjusting one or more parameters of the neural radiance field model based at least in part on the flow output.
[0030] The trained neural radiance field model can then be utilized for model inference to generate novel view renderings. The systems and methods for model inference can include obtaining an input dataset. The input dataset can include one or more locations. In some implementations, the one or more locations can be descriptive of a position in an environment. The systems and methods can include processing the input dataset with a neural radiance field model to generate one or more novel view renderings. The one or more novel view renderings can include a view of at least a portion of the environment. In some implementations, the neural radiance field model can be trained by comparing patches from a training dataset to generate predicted view renderings. The patches may be generated by segmenting one or more training images. The one or more novel view renderings can then be provided for display.
[0031] The systems and methods disclosed herein can be utilized for robotic applications and autonomous vehicles where the amount of training data for each scene is limited to the data captured by the robot in real-time and frequently changes with the environment. Similarly, the systems and methods can be beneficial for augmented-reality (AR) and/or virtual -reality (VR) scenarios where the user-data is limited to that captured by the device. [0032] The systems and methods disclosed herein can train a neural radiance field model on a sparse amount of data (e.g., four to nine images). In particular, the systems and methods can break-up the training images into patches and train on the patches (e.g., the renderings can be patch renderings that can be compared against patches of a ground truth image). The patches can provide more geometric awareness and detail to the model. The training may involve focusing on one sector of a training image or a training dataset and building out in order to reduce variance.
[0033] The systems and methods disclosed herein can involve training with ground truth patch training and normalization utilizing a flow model (e.g., a normalizing flow model which can be trained on the training dataset or a different flow training dataset separate from the scene-specific training dataset). The systems and methods can utilize the ground truth patch training to train a neural radiance field model with sparse training inputs. The systems and methods can utilize the flow model for mitigating artifacts in outputs.
[0034] The systems and methods can include obtaining an image dataset. The image dataset can include one or more images and one or more respective three-dimensional locations, and the one or more images can be descriptive of a scene.
[0035] One or more ground truth patches can be generated based on the one or more images. In some implementations, each ground truth patch can include a proper subset of one of the one or more images. Additionally and/or alternatively, generating one or more ground truth patches can include processing an image of the plurality of images to determine a portion of the image descriptive of an object in the scene and generating the one or more ground truth patches by segmenting the portion of the image descriptive of the object.
[0036] The one or more three-dimensional locations of the image dataset can be processed with a neural radiance field model to generate one or more view synthesis renderings. In some implementations, the one or more view synthesis renderings can be descriptive of different views of the scene. In some implementations, the one or more images can be descriptive of one or more input views of the scene, and the one or more one or more view synthesis renderings can be descriptive of an output view. The output view can differ from the one or more input views. The one or more view synthesis renderings can include one or more predicted patches.
[0037] The systems and methods for training can include evaluating a loss function that evaluates a difference between the one or more view synthesis renderings and the one or more ground truth patches. In some implementations, the loss function can include a perceptual loss, an adversarial loss, and/or a discriminator loss.
[0038] In some implementations, the systems and methods can include adjusting one or more parameters of the neural radiance field model based at least in part on the loss function. [0039] Adjusting the one or more parameters of the neural radiance field model can include adjusting the one or more parameters to maximize a log likelihood of output renderings from the neural radiance field model.
[0040] In some implementations, the systems and methods can include obtaining an input dataset. The input dataset can include one or more respective input locations. The input dataset can be processed with the neural radiance field model to generate a novel view rendering, and the novel view rendering can be provided for display. In some implementations, the one or more images may be descriptive of one or more input views of the scene, and the novel view rendering can be descriptive of one or more output views. The one or more output views can differ from the one or more input views.
[0041] Alternatively and/or additionally, the systems and methods can include processing the one or more view synthesis renderings with a discriminator model to generate a discriminator output and adjusting the one or more parameters of the neural radiance field model based on the discriminator output. In some implementations, the discriminator model can include a convolutional discriminator, and the discriminator model can be part of a generative adversarial network.
[0042] In some implementations, the training may be completed on a pixel by pixel basis. For example, the systems and methods can sample points throughout the scene for pixel by pixel analysis.
[0043] Additionally and/or alternatively, the neural radiance field model can be trained using a flow model. For example, the systems and methods can include obtaining a training dataset. The training dataset can include one or more images and one or more respective three-dimensional locations, and the one or more images can depict a scene.
[0044] One or more image patches can be generated based on the one or more images. Each image patch can include a proper subset of one of the plurality of images. The image patches can include sixteen pixel by sixteen pixel patches or can be a variety of other sizes. In some implementations, the patches can be generated by equally spitting up an image into pieces. Alternatively and/or additionally, the patches may be generated through randomly sampling portions of the images.
[0045] The one or more respective three-dimensional locations can be processed with a neural radiance field model to generate one or more patch renderings. In some implementations, the one or more patch renderings can be descriptive of views of the scene differing from views depicted in the plurality of images. The one or more patch renderings can include one or more color predictions (e.g., a red-green-blue value prediction) and one or more depth predictions (e.g., a volume density prediction). The depth rendering can utilize a prior to remove or penalize spikes in depths.
[0046] A flow model can be trained based at least in part on the one or more image patches. In some implementations, the flow model can be a normalized flow model trained on a patch database. The patch database can include images descriptive of a variety of different scenes. The flow model may be a general model that can be used for a variety of scenes, objects, etc. In some implementations, the flow model can be trained to be aware of geometry and color transitions. In some implementations, the flow model can be a pretrained model obtained from a server computing system. For example, the flow model may include a pretrained model trained on an image dataset that includes a plurality of images associated with a plurality of different scenes. The pre-trained flow model can be trained on full images and may be trained such that a singular model can be utilized to train a plurality of neural radiance field models being trained on a plurality of different respective scenes.
[0047] The one or more patch renderings can be processed with the flow model to generate a flow output. In some implementations, the flow output can include a geometry regularization and/or a color normalization.
[0048] The systems and methods can include adjusting one or more parameters of the neural radiance field model based at least in part on the flow output.
[0049] Additionally and/or alternatively, the systems and methods can include evaluating a ground truth loss function that evaluates a difference between the one or more patch renderings and one or more of the plurality of image patches and adjusting one or more parameters of the neural radiance field model based at least in part on the ground truth loss function.
[0050] In some implementations, the systems and methods can include storing the plurality of image patches in a database. The database can be utilized by a future user or the same user in order to train a different neural radiance field model or a different flow model. [0051] The trained neural radiance field model can then be utilized to generate novel view renderings.
[0052] For example, the systems and methods can include obtaining an input dataset. In some implementations, the input dataset can include one or more locations, and the one or more locations can be descriptive of a position in an environment. The input dataset can include one or more view directions.
[0053] The input dataset can be processed with a neural radiance field model to generate one or more novel view renderings. In some implementations, the novel view rendering can include a view of at least a portion of the environment.
[0054] The neural radiance field model may have been trained by comparing patches from a training dataset to generate predicted view renderings. The patches can be generated by segmenting one or more training images. In some implementations, the neural radiance field model can include a first model configured to process an input dataset and a second model configured to process a neural radiance field output generated by a neural radiance field model. Processing the image dataset with the neural radiance field model can include processing the input dataset with the first model to generate neural radiance field data. Additionally and/or alternatively, processing the image dataset with the neural radiance field model can include processing the neural radiance field with the second model to generate the one or more novel view renderings.
[0055] The one or more novel view renderings can then be provided for display. The one or more novel view renderings can be of a same or comparable size to a training image size. Additionally and/or alternatively, the one or more novel view renderings can be descriptive of a view of a scene that differs from a view included in the training dataset.
[0056] The systems and methods of the present disclosure provide a number of technical effects and benefits. As one example, the system and methods can train a neural radiance field model for generating a view synthesis rendering. More specifically, the systems and methods can utilize ground truth image patches in order to train the neural radiance field model with sparse inputs. For example, in some implementations, the systems and methods can include generating a plurality of patches for an image, which can then be compared against patch renderings in order to train a neural radiance field model.
[0057] Another technical benefit of the systems and methods of the present disclosure is the ability to mitigate artifact generation by utilizing a flow model. For example, the training can include processing the patch renderings with a flow model to generate a flow output which can include a distribution that can be backpropagated to the neural radiance field model to train the model. The flow model can aid in providing more realistic geometry and color.
[0058] Another example technical effect and benefit relates to improved computational efficiency and improvements in the functioning of a computing system. For example, the systems and methods disclosed herein can leverage patches and a flow model in order to train a neural radiance field model with a small amount of training data (e.g., four images). Moreover, the systems and methods disclosed herein can be applicable to train a model for realistic and informed novel view synthesis rendering with a small amount of training data.
[0059] With reference now to the Figures, example embodiments of the present disclosure will be discussed in further detail.
Example Devices and Systems
[0060] Figure 1A depicts a block diagram of an example computing system 100 that performs view synthesis rendering according to example embodiments of the present disclosure. The system 100 includes a user computing device 102, a server computing system 130, and a training computing system 150 that are communicatively coupled over a network 180.
[0061] The user computing device 102 can be any type of computing device, such as, for example, a personal computing device (e.g., laptop or desktop), a mobile computing device (e.g., smartphone or tablet), a gaming console or controller, a wearable computing device, an embedded computing device, or any other type of computing device.
[0062] The user computing device 102 includes one or more processors 112 and a memory 114. The one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 114 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 114 can store data 116 and instructions 118 which are executed by the processor 112 to cause the user computing device 102 to perform operations. [0063] In some implementations, the user computing device 102 can store or include one or more neural radiance field models 120. For example, the neural radiance field models 120 can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models. Neural networks can include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks or other forms of neural networks. Example neural radiance field models 120 are discussed with reference to Figures 2 - 5.
[0064] In some implementations, the one or more neural radiance field models 120 can be received from the server computing system 130 over network 180, stored in the user computing device memory 114, and then used or otherwise implemented by the one or more processors 112. In some implementations, the user computing device 102 can implement multiple parallel instances of a single neural radiance field model 120 (e.g., to perform parallel view rendering across multiple instances of positions and/or view directions).
[0065] More particularly, the one or more neural radiance field models can process an input dataset to generate a view rendering. The input dataset may include one or more three- dimensional positions and one or more two-dimensional view directions. The neural radiance field model can process the position, or location, in an observation space, and can map the position and direction to a color prediction and a volume density prediction, which can then be utilized to generate the view rendering. The view rendering may be a novel view rendering depicting a predicted image of a view not depicted in the training dataset.
[0066] Additionally or alternatively, one or more neural radiance field models 140 can be included in or otherwise stored and implemented by the server computing system 130 that communicates with the user computing device 102 according to a client-server relationship. For example, the neural radiance field models 140 can be implemented by the server computing system 130 as a portion of a web service (e.g., a view rendering service). Thus, one or more models 120 can be stored and implemented at the user computing device 102 and/or one or more models 140 can be stored and implemented at the server computing system 130.
[0067] The user computing device 102 can also include one or more user input component 122 that receives user input. For example, the user input component 122 can be a touch-sensitive component (e.g., a touch-sensitive display screen or a touch pad) that is sensitive to the touch of a user input object (e.g., a finger or a stylus). The touch-sensitive component can serve to implement a virtual keyboard. Other example user input components include a microphone, a traditional keyboard, or other means by which a user can provide user input.
[0068] The server computing system 130 includes one or more processors 132 and a memory 134. The one or more processors 132 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 134 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 134 can store data 136 and instructions 138 which are executed by the processor 132 to cause the server computing system 130 to perform operations.
[0069] In some implementations, the server computing system 130 includes or is otherwise implemented by one or more server computing devices. In instances in which the server computing system 130 includes plural server computing devices, such server computing devices can operate according to sequential computing architectures, parallel computing architectures, or some combination thereof.
[0070] As described above, the server computing system 130 can store or otherwise include one or more machine-learned neural radiance field models 140. For example, the models 140 can be or can otherwise include various machine-learned models. Example machine-learned models include neural networks or other multi-layer non-linear models. Example neural networks include feed forward neural networks, deep neural networks, recurrent neural networks, and convolutional neural networks. Example models 140 are discussed with reference to Figures 2 - 5.
[0071] The user computing device 102 and/or the server computing system 130 can train the models 120 and/or 140 via interaction with the training computing system 150 that is communicatively coupled over the network 180. The training computing system 150 can be separate from the server computing system 130 or can be a portion of the server computing system 130.
[0072] The training computing system 150 includes one or more processors 152 and a memory 154. The one or more processors 152 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, a FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 154 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 154 can store data 156 and instructions 158 which are executed by the processor 152 to cause the training computing system 150 to perform operations. In some implementations, the training computing system 150 includes or is otherwise implemented by one or more server computing devices.
[0073] The training computing system 150 can include a model trainer 160 that trains the machine-learned models 120 and/or 140 stored at the user computing device 102 and/or the server computing system 130 using various training or learning techniques, such as, for example, backwards propagation of errors. For example, a loss function can be backpropagated through the model(s) to update one or more parameters of the model(s) (e.g., based on a gradient of the loss function). Various loss functions can be used such as mean squared error, likelihood loss, cross entropy loss, hinge loss, and/or various other loss functions. Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations.
[0074] In some implementations, performing backwards propagation of errors can include performing truncated backpropagation through time. The model trainer 160 can perform a number of generalization techniques (e.g., weight decays, dropouts, etc.) to improve the generalization capability of the models being trained.
[0075] In particular, the model trainer 160 can train the neural radiance field models 120 and/or 140 based on a set of training data 162. The training data 162 can include, for example, a plurality of training images, a plurality of respective three-dimensional locations, and/or one or more two-dimensional view directions. In some implementations the plurality of training images may be deconstructed, or segmented, into a plurality of ground truth patches for each respective training image.
[0076] In some implementations, if the user has provided consent, the training examples can be provided by the user computing device 102. Thus, in such implementations, the model 120 provided to the user computing device 102 can be trained by the training computing system 150 on user-specific data received from the user computing device 102. In some instances, this process can be referred to as personalizing the model.
[0077] The model trainer 160 includes computer logic utilized to provide desired functionality. The model trainer 160 can be implemented in hardware, firmware, and/or software controlling a general purpose processor. For example, in some implementations, the model trainer 160 includes program files stored on a storage device, loaded into a memory and executed by one or more processors. In other implementations, the model trainer 160 includes one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM hard disk or optical or magnetic media. [0078] The network 180 can be any type of communications network, such as a local area network (e.g., intranet), wide area network (e.g., Internet), or some combination thereof and can include any number of wired or wireless links. In general, communication over the network 180 can be carried via any type of wired and/or wireless connection, using a wide variety of communication protocols (e.g., TCP/IP, HTTP, SMTP, FTP), encodings or formats (e.g., HTML, XML), and/or protection schemes (e.g., VPN, secure HTTP, SSL).
[0079] The machine-learned models described in this specification may be used in a variety of tasks, applications, and/or use cases.
[0080] In some implementations, the input to the machine-learned model(s) of the present disclosure can be image data. The machine-learned model(s) can process the image data to generate an output. As an example, the machine-learned model(s) can process the image data to generate an image recognition output (e.g., a recognition of the image data, a latent embedding of the image data, an encoded representation of the image data, a hash of the image data, etc.). As another example, the machine-learned model(s) can process the image data to generate an image segmentation output. As another example, the machine- learned model(s) can process the image data to generate an image classification output. As another example, the machine-learned model(s) can process the image data to generate an image data modification output (e.g., an alteration of the image data, etc.). As another example, the machine-learned model(s) can process the image data to generate an encoded image data output (e.g., an encoded and/or compressed representation of the image data, etc.). As another example, the machine-learned model(s) can process the image data to generate an upscaled image data output. As another example, the machine-learned model(s) can process the image data to generate a prediction output.
[0081] In some implementations, the input to the machine-learned model(s) of the present disclosure can be latent encoding data (e.g., a latent space representation of an input, etc.). The machine-learned model(s) can process the latent encoding data to generate an output. As an example, the machine-learned model(s) can process the latent encoding data to generate a recognition output. As another example, the machine-learned model(s) can process the latent encoding data to generate a reconstruction output. As another example, the machine-learned model(s) can process the latent encoding data to generate a search output. As another example, the machine-learned model(s) can process the latent encoding data to generate a reclustering output. As another example, the machine-learned model(s) can process the latent encoding data to generate a prediction output.
[0082] In some implementations, the input to the machine-learned model(s) of the present disclosure can be statistical data. The machine-learned model(s) can process the statistical data to generate an output. As an example, the machine-learned model(s) can process the statistical data to generate a recognition output. As another example, the machine-learned model(s) can process the statistical data to generate a prediction output. As another example, the machine-learned model(s) can process the statistical data to generate a classification output. As another example, the machine-learned model(s) can process the statistical data to generate a segmentation output. As another example, the machine-learned model(s) can process the statistical data to generate a visualization output. As another example, the machine-learned model(s) can process the statistical data to generate a diagnostic output.
[0083] In some implementations, the input to the machine-learned model(s) of the present disclosure can be sensor data. The machine-learned model(s) can process the sensor data to generate an output. As an example, the machine-learned model(s) can process the sensor data to generate a recognition output. As another example, the machine-learned model(s) can process the sensor data to generate a prediction output. As another example, the machine-learned model(s) can process the sensor data to generate a classification output. As another example, the machine-learned model(s) can process the sensor data to generate a segmentation output. As another example, the machine-learned model(s) can process the sensor data to generate a visualization output. As another example, the machine-learned model(s) can process the sensor data to generate a diagnostic output. As another example, the machine-learned model(s) can process the sensor data to generate a detection output.
[0084] In some cases, the input includes visual data and the task is a computer vision task. In some cases, the input includes pixel data for one or more images and the task is an image processing task. For example, the image processing task can be image classification, where the output is a set of scores, each score corresponding to a different object class and representing the likelihood that the one or more images depict an object belonging to the object class. The image processing task may be object detection, where the image processing output identifies one or more regions in the one or more images and, for each region, a likelihood that the region depicts an object of interest. As another example, the image processing task can be image segmentation, where the image processing output defines, for each pixel in the one or more images, a respective likelihood for each category in a predetermined set of categories. For example, the set of categories can be foreground and background. As another example, the set of categories can be object classes. As another example, the image processing task can be depth estimation, where the image processing output defines, for each pixel in the one or more images, a respective depth value. As another example, the image processing task can be motion estimation, where the network input includes multiple images, and the image processing output defines, for each pixel of one of the input images, a motion of the scene depicted at the pixel between the images in the network input.
[0085] Figure 1 A illustrates one example computing system that can be used to implement the present disclosure. Other computing systems can be used as well. For example, in some implementations, the user computing device 102 can include the model trainer 160 and the training dataset 162. In such implementations, the models 120 can be both trained and used locally at the user computing device 102. In some of such implementations, the user computing device 102 can implement the model trainer 160 to personalize the models 120 based on user-specific data.
[0086] Figure IB depicts a block diagram of an example computing device 10 that performs according to example embodiments of the present disclosure. The computing device 10 can be a user computing device or a server computing device. [0087] The computing device 10 includes a number of applications (e.g., applications 1 through N). Each application contains its own machine learning library and machine-learned model(s). For example, each application can include a machine-learned model. Example applications include a text messaging application, an email application, a dictation application, a virtual keyboard application, a browser application, etc.
[0088] As illustrated in Figure IB, each application can communicate with a number of other components of the computing device, such as, for example, one or more sensors, a context manager, a device state component, and/or additional components. In some implementations, each application can communicate with each device component using an API (e.g., a public API). In some implementations, the API used by each application is specific to that application.
[0089] Figure 1C depicts a block diagram of an example computing device 50 that performs according to example embodiments of the present disclosure. The computing device 50 can be a user computing device or a server computing device.
[0090] The computing device 50 includes a number of applications (e.g., applications 1 through N). Each application is in communication with a central intelligence layer. Example applications include a text messaging application, an email application, a dictation application, a virtual keyboard application, a browser application, etc. In some implementations, each application can communicate with the central intelligence layer (and model(s) stored therein) using an API (e.g., a common API across all applications).
[0091] The central intelligence layer includes a number of machine-learned models. For example, as illustrated in Figure 1C, a respective machine-learned model can be provided for each application and managed by the central intelligence layer. In other implementations, two or more applications can share a single machine-learned model. For example, in some implementations, the central intelligence layer can provide a single model for all of the applications. In some implementations, the central intelligence layer is included within or otherwise implemented by an operating system of the computing device 50.
[0092] The central intelligence layer can communicate with a central device data layer. The central device data layer can be a centralized repository of data for the computing device 50. As illustrated in Figure 1C, the central device data layer can communicate with a number of other components of the computing device, such as, for example, one or more sensors, a context manager, a device state component, and/or additional components. In some implementations, the central device data layer can communicate with each device component using an API (e.g., a private API).
Example Model Arrangements
[0093] Figure 2 depicts a block diagram of an example neural radiance field model training system 200 according to example embodiments of the present disclosure. In some implementations, the neural radiance field model training system 200 is trained to receive a set of input data 202 descriptive of a scene and, as a result of receipt of the input data 202, provide a trained neural radiance field model that is operable to generate view renderings. Thus, in some implementations, the neural radiance field model training system 200 can include a patch generation model 204 that is operable to generate a plurality of patches 206 for each respective input view and input image.
[0094] The input data 202 can include an input dataset that includes a plurality of images and a plurality of locations, or positions. The plurality of images can be descriptive of a scene that is being observed, and the plurality of locations can be descriptive of locations in the observed space. For training the neural radiance field model, the neural radiance field model may process the plurality of locations and generate a plurality of outputs. The outputs can include predicted view renderings for the respective locations. Additionally and/or alternatively, the outputs can include data descriptive of color distributions and density distributions.
[0095] The plurality of images of the input data 202 (i.e., training dataset or image dataset) can be processed with the patch generation model 204 to generate a plurality of image patches 206 (e.g., a plurality of ground truth image patches). The image patches 206 can be descriptive of portions of the plurality of images. The patch generation model 204 can extract portions of the images to use for patches. The segmentation can be completed at random (e.g., via a random sampling technique) and/or with a pre-determined sequence. The image patches 206 may include overlapping pixel data or may have no overlapping pixel data. The image patches may be of the same uniform size or may vary in size. For example, each image may be deconstructed into four equal-sized patches with no overlapping coverage. Alternatively and/or additionally, a focal point of an image can be determined, and various patches of varying sizes can be generated from that same image with the focal point being the center of each of the patches.
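By way of a non-limiting illustration, the patch generation described above can be sketched as follows. The helper name, the patch size, and the random-crop strategy are illustrative assumptions rather than requirements of the present disclosure:

```python
import numpy as np

def sample_ground_truth_patches(image, num_patches=4, patch_size=16, rng=None):
    """Extract random square patches (proper subsets) from an H x W x 3 image array."""
    rng = np.random.default_rng() if rng is None else rng
    height, width, _ = image.shape
    patches = []
    for _ in range(num_patches):
        top = rng.integers(0, height - patch_size + 1)
        left = rng.integers(0, width - patch_size + 1)
        patches.append(image[top:top + patch_size, left:left + patch_size])
    return patches
```

Overlapping patches, uniform segmentation, or focal-point-centered crops of varying sizes can be obtained by changing how the offsets and the patch size are selected.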
[0096] The plurality of image patches 206 can then be compared to the plurality of outputs. In some implementations, the outputs can be patch renderings that can be compared against ground truth patches that were generated with the patch generation model 204. The comparison can be completed in order to evaluate a loss function, which can output a gradient. The gradient can then be backpropagated to the neural radiance field model in order to adjust one or more parameters of the neural radiance field model.
[0097] Figure 3 depicts a block diagram of an example training system 300 with a flow model according to example embodiments of the present disclosure. The training system 300 is similar to the neural radiance field model training system 200 of Figure 2 except that the training system 300 further includes a flow model 306.
[0098] The training system 300 can include obtaining a density estimation model (e.g., a normalizing flow model 306) pretrained on a patch database 308. In some implementations, the patch database 308 can be generated based on a plurality of images descriptive of a plurality of different scenes. Alternatively and/or additionally, the patch database 308 can include ground truth image patches generated based on input views of a scene being modeled by the neural radiance field model. In some implementations, the patch database 308 can be replaced or supplemented with an image database (e.g., the JFT dataset). The flow model may be trained to be applied for training a plurality of different neural radiance field models in which each of the plurality of different neural radiance field models may be trained for view synthesis of different respective scenes.
[0099] The normalizing flow model 306 can then process rendered patches 304 generated by the neural radiance field model in order to generate a flow output. The flow output can then be utilized to maximize the log likelihood 310 of the rendered patches. [0100] More specifically, Figure 3 depicts an example training system 300 with a flow model that can be utilized to normalize an output to generate smooth transitions and minimize and/or mitigate artifact generation. The training system 300 can involve generating a database of patches 308 from the available input views 302. During training, the training system 300 can process three-dimensional locations with a neural radiance field model in order to render patches 304 from novel views and maximize the log likelihood 310 of the rendered patch 304 given the database 308 available. In order to reduce the floating artifacts, the training system 300 can add a TV norm regularization loss on the depth of the rendered patches 304.
[0101] Figure 4 depicts a block diagram of an example training system 400 according to example embodiments of the present disclosure. The example training system 400 can involve obtaining a training dataset 402. The training dataset 402 can be obtained from one or more sensors and may include one or more inputs 404 and one or more ground truth images 410. The one or more inputs 404 can include one or more locations (e.g., one or more three-dimensional positions in an observation space). Additionally and/or alternatively, the one or more inputs 404 can include one or more view directions (e.g., one or more two-dimensional view directions). Each input 404 may be associated with a particular ground truth image 410 (e.g., a location and view direction may be descriptive of the location and view direction of where a ground truth image was captured with an image sensor).
[0102] The training system 400 can include processing the input 404 with a neural radiance field model 406 to generate one or more rendered patches 408. The rendered patches 408 can then be compared against corresponding ground truth patches generated with the ground truth images 410. The rendered patches 408 and the ground truth patches can be utilized to evaluate a ground truth loss function. The output of the ground truth loss function can then be backpropagated to the neural radiance field model 406. The output can then be used to adjust one or more parameters of the neural radiance field model 406.
[0103] The rendered patches 408 can also be processed by a normalizing flow model 414 in order to generate a flow output, or flow loss. The flow output can include a gradient that is backpropagated to the neural radiance field model and can be used to modify one or more parameters of the neural radiance field model 406. The normalizing flow model 414 can be pretrained on a flow training dataset 412 that differs from the obtained training dataset 402. The normalizing flow model 414 can be configured to smoothen transitions in rendered images and minimize artifact generation.
[0104] The systems and methods disclosed herein can be utilized to remove floating artifacts in rendering, provide more realistic texture, and retain geometric details. Moreover, the systems and methods can include the addition of depth regularization. For example, for random viewpoints, patch renderings can be generated, and a regularization (TV norm) can be applied on the depth of each patch.
[0105] Additionally and/or alternatively, the sampling planes can be annealed (e.g., the sampled range of the scene can be expanded gradually over the course of training). In some implementations, instead of sampling evenly within the scene bounding box, the systems and methods can start with a smaller box around the center of an object, sample points within that volume, and gradually increase the size of the box to cover the full scene. Annealing the sampling planes can allow the systems and methods to avoid high density values at the beginning of the rays, which can lead to divergence in the training.
[0106] In some implementations, the systems and methods can include geometry regularization (e.g., TV-regularization on rendered depth map patches, which can be quickly annealed from a high value to a lower value), color regularization (which can involve likelihood maximization of rendered 16x16 patches), near/far annealing (which can involve annealing a near/far plane to avoid degenerated solutions), reduced training iterations (e.g., training for only 50k iterations), increased learning rate (e.g., decay from 2e-3 to 2e-5), and gradient clipping (e.g., clipping gradients (at 0.1) to allow for a higher learning rate).
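By way of a non-limiting illustration, the example settings listed above can be collected into a single configuration. The field names and grouping below are illustrative assumptions, not a prescribed format:

```python
# Illustrative grouping of the example settings; values mirror the examples above.
sparse_view_nerf_config = {
    "geometry_regularization": {"type": "tv_on_rendered_depth_patches", "weight_anneal": "high_to_low"},
    "color_regularization": {"type": "flow_log_likelihood", "patch_size": 16},
    "near_far_annealing": True,               # anneal near/far planes to avoid degenerated solutions
    "train_iterations": 50_000,               # reduced training schedule
    "learning_rate": {"start": 2e-3, "end": 2e-5, "schedule": "exponential_decay"},
    "gradient_clipping": {"max_value": 0.1},  # clipping gradients at 0.1 allows the higher learning rate
}
```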
[0107] Figure 5 depicts a block diagram of an example training system 500 with a discriminator model according to example embodiments of the present disclosure. The training system 500 of Figure 5 is similar to the training system 300 of Figure 3 except the training system 500 of Figure 5 includes a discriminator model 506 in place of a flow model 306.
[0108] The training system 500 can include obtaining a discriminator model 506 (e.g., a discriminator model of a generative adversarial network) pretrained on a patch database 508. In some implementations, the patch database 508 can be generated based on a plurality of images descriptive of a plurality of different scenes. Alternatively and/or additionally, the patch database 508 can include ground truth image patches generated based on input views of a scene being modeled by the neural radiance field model.
[0109] The discriminator model 506 can then process rendered patches 504 generated by the neural radiance field model in order to generate a discriminator output 510. The discriminator output 510 can be descriptive of whether the discriminator model 506 classifies the rendered patch 504 as real or fake. The discriminator output 510 can then be utilized to adjust one or more parameters of the neural radiance field model.
Example Methods
[0110] Figure 6 depicts a flow chart diagram of an example method to perform according to example embodiments of the present disclosure. Although Figure 6 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of the method 600 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.
[0111] At 602, a computing system can obtain an image dataset. The image dataset can include a plurality of images and a plurality of respective three-dimensional locations, or positions. In some implementations, the plurality of images can be descriptive of different views of a scene. Each of the plurality of images may be associated with one or more of the plurality of respective three-dimensional locations. Additionally and/or alternatively, the image dataset may include a plurality of view directions associated with the plurality of images and the plurality of respective three-dimensional locations. The scene can include one or more features descriptive of one or more objects (e.g., a car in a driveway, a squirrel in a park, or a person in a restaurant).
[0112] At 604, the computing system can generate one or more ground truth patches based on the plurality of images. Each ground truth patch can include a proper subset of one of the plurality of images. Generating the ground truth patches can include segmenting and/or deconstructing one or more of the images to generate a patch for different portions of the image. The patches can be utilized as individual training images or may be utilized as groups to train for particular views individually or in combination.
[0113] At 606, the computing system can process one or more of the three-dimensional locations of the image dataset with a neural radiance field model to generate one or more view synthesis renderings. The one or more view synthesis renderings can be descriptive of different views of the scene. In some implementations, the view synthesis renderings can include view synthesis patch renderings that are of the same or comparable scale to the ground truth patches.
[0114] At 608, the computing system can evaluate a loss function that evaluates a difference between the one or more view synthesis renderings and the one or more ground truth patches. The loss function can include one or more regularization terms. In some implementations, the loss function can include a perceptual loss, a photometric loss, an L2 loss, etc.
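By way of illustration, a simple photometric (L2) term of the kind referenced above can be evaluated per patch as in the following sketch; averaging over patches is an illustrative assumption:

```python
import numpy as np

def photometric_patch_loss(rendered_patches, ground_truth_patches):
    """Mean squared color error between rendered patches and corresponding ground truth patches."""
    losses = [np.mean((np.asarray(r, float) - np.asarray(g, float)) ** 2)
              for r, g in zip(rendered_patches, ground_truth_patches)]
    return float(np.mean(losses))
```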
[0115] At 610, the computing system can adjust one or more parameters of the neural radiance field model based at least in part on the loss function. In some implementations, one or more generative embeddings may be modified based at least in part on the one or more view synthesis renderings and the one or more ground truth patches.
[0116] Figure 7 depicts a flow chart diagram of an example method to perform according to example embodiments of the present disclosure. Although Figure 7 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of the method 700 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.
[0117] At 702, a computing system can obtain a training dataset. The training dataset can include a plurality of images and a plurality of respective three-dimensional locations. The images can be descriptive of a scene, or an environment. The three-dimensional locations can include positions in a three-dimensional observation space. The images can include red-green-blue images with a plurality of pixels with varying color values.
[0118] At 704, the computing system can generate a plurality of image patches. The plurality of image patches can include a set of image patches for each of the plurality of images, such that each image can be utilized to generate two or more image patches. In some implementations, the images can be segmented into equal portions. Additionally and/or alternatively, portions of the images may be randomly selected for patch generation. In some implementations, each portion of each of the plurality of images may be utilized for image patch generation. Additionally and/or alternatively, the image patches may include different portions of images altogether such that each portion of an image is only utilized for that specific image patch. In some implementations, the image patches can include overlapping data. For example, one or more pixels of an image may be utilized for a plurality of image patches.
[0119] At 706, the computing system can process the plurality of respective three-dimensional locations with a neural radiance field model to generate one or more patch renderings. The neural radiance field model can include one or more multi-layer perceptrons and may be configured to map three-dimensional positions and/or view directions to one or more color values and one or more volume density values. In some implementations, the one or more patch renderings can be descriptive of views of the scene. In some implementations, the one or more patch renderings can be descriptive of predicted views associated with one or more of the three-dimensional locations. The one or more patch renderings may correspond with one or more ground truth image patches or may depict a different view altogether.
[0120] At 708, the computing system can train a flow model based at least in part on the plurality of image patches. In some implementations, the flow model may be pre-trained with an unrelated dataset. The flow model can be trained on different scenes from the scenes depicted in the ground truth images. In some implementations, the flow model can be pretrained on an image dataset in place of, or as a complement to, the patch dataset.
[0121] At 710, the computing system can process the one or more patch renderings with the flow model to generate a flow output. The flow output can include a gradient. In some implementations, the flow output can be descriptive of a color “smoothness” and/or a geometric “smoothness” in the one or more patch renderings.
[0122] At 712, the computing system can adjust one or more parameters of the neural radiance field model based at least in part on the flow output. The parameters may be adjusted based at least in part on the gradient via backpropagation. [0123] In some implementations, the method 700 and the method 600 can be utilized in parallel and/or in series. The methods can be utilized individually and/or in combination. In some implementations, the flow model can be replaced with and/or used with a discriminator model.
[0124] Figure 8 depicts a flow chart diagram of an example method to perform according to example embodiments of the present disclosure. Although Figure 8 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of the method 800 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.
[0125] At 802, a computing system can obtain an input dataset. The input dataset can include one or more locations. The one or more locations can be locations not included in the training dataset for the neural radiance field model. The one or more locations may be descriptive of a position in an environment.
[0126] At 804, the computing system can process the input dataset with a neural radiance field model to generate one or more novel view renderings. The novel view rendering can include at least a portion of the environment. In some implementations the novel view rendering can be descriptive of a view that differs from the views depicted in the training dataset for the neural radiance field model. In some implementations, the neural radiance field model may have been trained by comparing patches from a training dataset to generated predicted view renderings, in which the patches may have been generated by segmenting one or more training images.
[0127] At 806, the computing system can provide the one or more novel view renderings for display. The one or more novel view renderings may be sent for visual display and may be displayed on a screen of a user device. In some implementations, the novel view rendering may be stored in a database. Additionally and/or alternatively, the novel view rendering may be utilized for training a neural radiance field model and/or a flow model.
[0128] Figure 9 depicts a flow chart diagram of an example method to perform according to example embodiments of the present disclosure. Although Figure 9 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of the method 900 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure. [0129] At 902, a computing system can obtain a training dataset. The training dataset can include a plurality of images and a plurality of respective three-dimensional locations. The images can be descriptive of a scene, or an environment. The three-dimensional locations can include positions in a three-dimensional observation space. The images can include red-green-blue images with a plurality of pixels with varying color values.
[0130] At 904, the computing system can generate a plurality of image patches. The plurality of image patches can include a set of image patches for each of the plurality of images, such that each image can be utilized to generate two or more image patches. In some implementations, the images can be segmented into equal portions. Additionally and/or alternatively, portions of the images may be randomly selected for patch generation. In some implementations, each portion of each of the plurality of images may be utilized for image patch generation. Additionally and/or alternatively, the image patches may include different portions of images altogether such that each portion of an image is only utilized for that specific image patch. In some implementations, the image patches can include overlapping data. For example, one or more pixels of an image may be utilized for a plurality of image patches.
[0131] At 906, the computing system can process the plurality of respective three-dimensional locations with a neural radiance field model to generate one or more patch renderings. The neural radiance field model can include one or more multi-layer perceptrons and may be configured to map three-dimensional positions and/or view directions to one or more color values and one or more volume density values. In some implementations, the one or more patch renderings can be descriptive of views of the scene. In some implementations, the one or more patch renderings can be descriptive of predicted views associated with one or more of the three-dimensional locations. The one or more patch renderings may correspond with one or more ground truth image patches or may depict a different view altogether.
[0132] At 908, the computing system can obtain a flow model in which the flow model includes a pre-trained model. In some implementations, the flow model may be pre-trained on a flow training dataset. The flow training dataset can include a plurality of image datasets descriptive of a plurality of different scenes. The flow training dataset can include a different dataset than the training dataset. The flow model may be utilized to train a plurality of different neural radiance field models being trained on a plurality of different respective scenes.
[0133] At 910, the computing system can process the one or more patch renderings with the flow model to generate a flow output. The flow output can include a gradient. In some implementations, the flow output can be descriptive of a color “smoothness” and/or a geometric “smoothness” in the one or more patch renderings.
[0134] At 912, the computing system can adjust one or more parameters of the neural radiance field model based at least in part on the flow output. The parameters may be adjusted based at least in part on the gradient via backpropagation.
Example Implementations
[0135] The systems and methods disclosed herein can include a patch-based approach. More specifically, in some implementations, the majority of artifacts may be caused by errors in scene geometry and divergent training start behavior. Therefore, the systems and methods disclosed herein can utilize a normalizing flow model to regularize the color of the reconstructed scene.
[0136] The novel-view synthesis task of trained neural radiance field models can be used to render unseen viewpoints of a scene for a given set of input images. Neural radiance field models can rely on having a large amount of training data to learn scenes. In real-world applications such as AR/VR, autonomous driving, and robotics, the input, however, may be sparse and only few views may be available.
[0137] The systems and methods may apply a patch-based regularization to the rendered depth maps of unobserved viewpoints. The patch-based regularization can have the effect of reducing floating artifacts and greatly improving the learned scene geometry.
[0138] Additionally and/or alternatively, the systems and methods can include an annealing strategy for sampling points along the ray, where the systems and methods can first sample scene content within a small range before annealing to the full scenes, thereby preventing divergent behavior at the beginning of training.
[0139] In some implementations, the systems and methods can include a normalizing flow model to regularize the color prediction of unseen viewpoints by maximizing the log-likelihood of the rendered patches. Therefore, the systems and methods may avoid shifts in color between different views.
[0140] The optimization procedure for NeRF for sparse inputs can utilize a mip-NeRF system, which can use a multi-scale radiance field-based model to represent scenes. For sparse views, however, the quality of rendered novel views may drop for mip-NeRF mainly due to incorrect scene geometry and divergent training start behavior. Therefore, the systems and methods may utilize a patch-based approach to regularize the geometry as well as the color prediction of unseen viewpoints. The approach can provide a simple annealing strategy of the sampled scene space to avoid divergent training start behavior. Additionally and/or alternatively, the systems and methods can use higher learning rates in combination with gradient clipping to further speed up the optimization process. Figure 3 depicts an example overview of one implementation of the method.
[0141] A neural radiance field can include a continuous function $f$ mapping a three-dimensional location $\mathbf{x} \in \mathbb{R}^3$ and viewing direction $\mathbf{d} \in \mathbb{S}^2$ to a volume density $\sigma \in \mathbb{R}^+$ and color value $\mathbf{c} \in \mathbb{R}^3$. The neural radiance fields can be parameterized using a multi-layer perceptron (MLP) where the weights of the MLP are optimized for given input images of a scene:

$$ f_\theta\big(\gamma(\mathbf{x}), \gamma(\mathbf{d})\big) = (\sigma, \mathbf{c}), $$

where $\theta$ can indicate the network weights and $\gamma$ a predefined positional encoding applied elementwise to $\mathbf{x}$ and $\mathbf{d}$.
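One common choice for such a predefined positional encoding is the sinusoidal encoding used by NeRF-style models. The following sketch applies it elementwise; the number of frequency bands is an illustrative assumption:

```python
import numpy as np

def positional_encoding(x, num_frequencies=10):
    """Map each coordinate to sin(2^k * pi * x) and cos(2^k * pi * x) for k = 0..num_frequencies-1."""
    x = np.asarray(x, dtype=np.float64)
    features = []
    for k in range(num_frequencies):
        features.append(np.sin((2.0 ** k) * np.pi * x))
        features.append(np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(features, axis=-1)
```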
[0142] For volume rendering, a neural radiance field $f_\theta$ may be given, such that a pixel can be rendered by casting a ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$ from the camera center $\mathbf{o}$ through the pixel along direction $\mathbf{d}$. For given near and far bounds $t_n$ and $t_f$, the pixel's predicted color value $\hat{\mathbf{c}}(\mathbf{r})$ can be obtained as

$$ \hat{\mathbf{c}}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma_\theta(\mathbf{r}(t))\,\mathbf{c}_\theta(\mathbf{r}(t), \mathbf{d})\,dt, \qquad T(t) = \exp\!\Big(-\!\int_{t_n}^{t} \sigma_\theta(\mathbf{r}(s))\,ds\Big), $$

where $\sigma_\theta$ and $\mathbf{c}_\theta$ can indicate the density and color prediction of radiance field $f_\theta$, respectively. A neural radiance field may be optimized for a set of input images together with their camera poses by minimizing the mean squared error

$$ \mathcal{L}_{\mathrm{MSE}} = \sum_{\mathbf{r} \in \mathcal{R}} \big\lVert \hat{\mathbf{c}}(\mathbf{r}) - \mathbf{c}_{\mathrm{GT}}(\mathbf{r}) \big\rVert_2^2, $$

where $\mathcal{R}$ can indicate the set of all rays and $\mathbf{c}_{\mathrm{GT}}$ the ground truth color for the pixel.
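In practice, the integral above is typically approximated by quadrature over discrete samples along the ray. A minimal NeRF-style alpha-compositing sketch of that approximation is shown below; the exact discretization details are illustrative assumptions:

```python
import numpy as np

def composite_color(densities, colors, t_values):
    """Approximate the volume rendering integral for one ray with discrete samples."""
    densities = np.asarray(densities, dtype=np.float64)   # (N,) predicted sigma values
    colors = np.asarray(colors, dtype=np.float64)         # (N, 3) predicted colors
    t_values = np.asarray(t_values, dtype=np.float64)     # (N,) sorted sample distances
    deltas = np.diff(t_values, append=t_values[-1] + 1e10)          # segment lengths
    alphas = 1.0 - np.exp(-densities * deltas)                       # per-segment opacity
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))
    weights = transmittance * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)                    # predicted pixel color
    expected_depth = float((weights * t_values).sum())               # usable for the depth regularization described below
    return rgb, expected_depth
```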
[0143] While in NeRF only a single ray may be cast per pixel, in mip-NeRF a cone may be cast instead. The positional encoding can change from representing an infinitely small point to an integration over a volume covered by a conical frustum. This can be a more adequate representation for scenes with various camera distances and can allow NeRF's coarse and fine MLPs to be reduced to a single multiscale MLP, increasing training speed and reducing model size. As a result, the systems and methods disclosed herein may implement the mip-NeRF representation. [0144] In some implementations, the systems and methods may include patch-based regularization.
[0145] A NeRF model’s performance may drop significantly if the number of input views is sparse. In some implementations, the systems and methods can regularize unseen viewpoints. More specifically, the systems and methods may define a space of unseen but relevant viewpoints and render small patches from these cameras. Using these patches, the key idea may be to regularize the geometry to be smooth as well as the color prediction to have a high likelihood.
[0146] To apply regularization techniques, the systems and methods may first need to define the space of unobserved viewpoints from which the system may sample camera poses. To this end, the systems and methods may make use of a set of target poses $\{\mathbf{P}_i\}_{i=1}^{N}$, where each pose $\mathbf{P}_i = [\mathbf{R}_i \mid \mathbf{t}_i]$ combines a camera rotation $\mathbf{R}_i$ and a camera position $\mathbf{t}_i$.
[0147] The target poses can be assumed to be given as they can be a factor for the task of novel-view synthesis. The systems and methods may then define the space of all possible camera positions as the bounding box of all given target camera positions

$$ \mathcal{S}_t = \{ \mathbf{t} \in \mathbb{R}^3 \mid \mathbf{t}_{\min} \le \mathbf{t} \le \mathbf{t}_{\max} \}, $$

where $\mathbf{t}_{\min}$ and $\mathbf{t}_{\max}$ can be the elementwise minimum and maximum values of the target camera positions $\{\mathbf{t}_i\}$, respectively.
[0148] To obtain the sample space of camera rotations, the systems and methods may first define a common “up” direction $\bar{\mathbf{u}}$ by taking the mean over the up directions of all target poses. Next, the system may calculate the mean focus point $\bar{\mathbf{p}}_f$ for all target poses. To learn more robust representations, the system may add some jittering to the focal point before calculating the camera rotation matrix. The system can define the set of all possible camera rotations as

$$ \mathcal{S}_R = \big\{ \mathbf{R}(\bar{\mathbf{u}}, \bar{\mathbf{p}}_f + \boldsymbol{\epsilon}) \big\}, $$

where $\mathbf{R}(\bar{\mathbf{u}}, \bar{\mathbf{p}}_f + \boldsymbol{\epsilon})$ indicates the resulting camera rotation matrix and $\boldsymbol{\epsilon}$ a small jitter added to the focus point.
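A sketch of drawing one unobserved camera pose from the position and rotation spaces defined above might look as follows. The look-at construction, the jitter scale, and the axis conventions are illustrative assumptions:

```python
import numpy as np

def sample_unobserved_pose(t_min, t_max, up_mean, focus_mean, jitter_scale=0.05, rng=None):
    """Sample a camera position inside the target bounding box and a look-at rotation matrix."""
    rng = np.random.default_rng() if rng is None else rng
    t_min, t_max = np.asarray(t_min, float), np.asarray(t_max, float)
    position = rng.uniform(t_min, t_max)                               # elementwise within [t_min, t_max]
    focus = np.asarray(focus_mean, float) + rng.normal(scale=jitter_scale, size=3)
    forward = focus - position
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, np.asarray(up_mean, float))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    rotation = np.stack([right, up, -forward], axis=1)                 # camera-to-world rotation
    return rotation, position
```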
[0149] In some implementations, the systems and methods can regularize unseen viewpoints such that the predicted scene geometry and color prediction take on high-probability values (e.g., smooth geometry and likely colors).
[0150] Geometry may tend to be smooth in the real world (e.g., flat surfaces can be much more likely than high-frequency variable surfaces). Therefore, the systems and methods can include geometry regularization that may enforce depth smoothness by adding a TV prior on depth map patches from unobserved viewpoints. More specifically, the systems and methods can let $S_{\mathrm{patch}}$ be the patch size of the rendered patches from unobserved viewpoints. The expected depth of a pixel can be obtained via

$$ \hat{d}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma_\theta(\mathbf{r}(t))\,t\,dt. $$

[0151] Further, the systems and methods can let $\hat{d}(\mathbf{r}_{ij})$ indicate the expected depth of the ray / pixel at position $(i, j)$ of the patch. The system can formulate a total variation loss as:

$$ \mathcal{L}_{\mathrm{TV}} = \sum_{i,j} \Big( \hat{d}(\mathbf{r}_{ij}) - \hat{d}(\mathbf{r}_{i+1,j}) \Big)^2 + \Big( \hat{d}(\mathbf{r}_{ij}) - \hat{d}(\mathbf{r}_{i,j+1}) \Big)^2, $$

where the sum runs over neighboring pixel positions within the $S_{\mathrm{patch}} \times S_{\mathrm{patch}}$ patch.
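A direct per-patch implementation of this total variation term could be sketched as follows:

```python
import numpy as np

def depth_tv_loss(depth_patch):
    """Total variation loss on an S x S patch of expected depth values."""
    d = np.asarray(depth_patch, dtype=np.float64)
    vertical = (d[:-1, :] - d[1:, :]) ** 2     # differences between vertically adjacent pixels
    horizontal = (d[:, :-1] - d[:, 1:]) ** 2   # differences between horizontally adjacent pixels
    return float(vertical.sum() + horizontal.sum())
```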
[0152] For regularizing the color, the systems and methods can estimate the likelihood of rendered patches and can maximize the estimated likelihood during optimization. The systems and methods may make use of readily-available abundant data of unstructured image collections of two-dimensional images. While datasets of multi-view images with pose information can be expensive to collect, collections of unstructured images may be more easily accessible. The systems and methods may train a normalizing flow model on the JFT dataset. In some implementations, the dataset can include natural images. As a result, the systems and methods can reuse the same flow model for any type of scene optimized. After training the flow model, the systems and methods can use the trained flow model to estimate the log-likelihood (LL) of rendered patches. More specifically, the systems and methods can let $\phi$ be the trained flow model. The system can define the color regularization loss as

$$ \mathcal{L}_{\mathrm{color}} = -\log p_\phi(P), $$

where $P$ can indicate the predicted RGB color patch from an unobserved viewpoint.
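Assuming the trained flow model exposes a log-density function, the color regularization term can be sketched as a negative log-likelihood of the rendered patch. The `log_prob` interface below is a hypothetical placeholder, not a specific library API:

```python
def color_flow_loss(flow_model, rendered_patch):
    """Negative log-likelihood of a rendered RGB patch under a pretrained flow model.

    `flow_model.log_prob` is an assumed interface returning log p(patch).
    """
    return -flow_model.log_prob(rendered_patch)
```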
[0153] For very sparse scenarios (e.g., 3 input views), the systems and methods may observe another failure mode of a mip-NeRF system: divergent training start behavior can lead to high density values at the ray starts. As a result, the input views may be correctly reconstructed, but novel views may be degenerated. To avoid the failure mode, the systems and methods can anneal the sampled scene space quickly over the first iterations. More specifically, the system can let $n$, $f$ be the near and far plane, respectively, and $m$ be the defined scene center (usually the midpoint between $n$ and $f$). The system can then define

$$ n_i = m + (n - m)\,\eta(i), \qquad f_i = m + (f - m)\,\eta(i), \qquad \eta(i) = \min\big(\max(i / N_t,\; p_{\mathrm{start}}),\; 1\big), $$

where $i$ indicates the training iteration, $N_t$ is a hyperparameter indicating after which iteration the full range should be reached, and $p_{\mathrm{start}}$ is a hyperparameter indicating the fraction of the range with which to start (e.g., 0.5).
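The annealed sampling bounds above can be computed with a small helper such as the following sketch; the default hyperparameter values are illustrative assumptions:

```python
def annealed_bounds(near, far, mid, iteration, n_full=512, p_start=0.5):
    """Shrink the sampled [near, far] range toward the scene center early in training."""
    eta = min(max(iteration / n_full, p_start), 1.0)
    near_i = mid + (near - mid) * eta
    far_i = mid + (far - mid) * eta
    return near_i, far_i
```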
[0154] In some implementations, to allow for the high learning rate, the systems and methods may clip the gradients at a maximum value of 0.1 and at a maximum norm of 0.1. Additionally and/or alternatively, the systems and methods may train with the Adam optimizer and a learning rate of 0.002 and may exponentially decay it to 0.00002 over the optimization process.
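One hedged sketch of these optimizer settings is shown below; PyTorch is used purely for illustration, and the helper structure and iteration count are assumptions:

```python
import torch

def make_optimizer(model_parameters, num_iterations=50_000):
    """Adam with an exponential learning rate decay from 2e-3 to 2e-5 over training."""
    params = list(model_parameters)
    optimizer = torch.optim.Adam(params, lr=2e-3)
    gamma = (2e-5 / 2e-3) ** (1.0 / num_iterations)   # per-step decay factor
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)
    return params, optimizer, scheduler

def training_step(params, optimizer, scheduler, loss):
    """One optimization step with gradient clipping by value and by norm."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_value_(params, clip_value=0.1)   # clip gradients at a maximum value of 0.1
    torch.nn.utils.clip_grad_norm_(params, max_norm=0.1)      # clip gradients at a maximum norm of 0.1
    optimizer.step()
    scheduler.step()
```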
[0155] The systems and methods for training NeRF models in data-limited regimes can include a method that leverages multi-view consistency constraints for the rendered depth maps to enforce the learning of correct scene geometry. In order to regularize the color predictions, the systems and methods may maximize the log-likelihood of the rendered patches relative to the input views using a normalizing flow model. Additionally and/or alternatively, the systems and methods can include an annealing-based ray sampling strategy to avoid divergent behavior at the beginning of training.
Additional Disclosure
[0156] The technology discussed herein makes reference to servers, databases, software applications, and other computer-based systems, as well as actions taken and information sent to and from such systems. The inherent flexibility of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components. For instance, processes discussed herein can be implemented using a single device or component or multiple devices or components working in combination. Databases and applications can be implemented on a single system or distributed across multiple systems. Distributed components can operate sequentially or in parallel.
[0157] While the present subject matter has been described in detail with respect to various specific example embodiments thereof, each example is provided by way of explanation, not limitation of the disclosure. Those skilled in the art, upon attaining an understanding of the foregoing, can readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the subject disclosure does not preclude inclusion of such modifications, variations and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present disclosure cover such alterations, variations, and equivalents.

Claims

WHAT IS CLAIMED IS:
1. A computing system, the system comprising: one or more processors; and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising: obtaining an image dataset, wherein the image dataset comprises a plurality of images and a plurality of respective three-dimensional locations, and wherein the plurality of images are descriptive of a scene; generating one or more ground truth patches based on the plurality of images, wherein each ground truth patch comprises a proper subset of one of the plurality of images; processing one or more of the plurality of three-dimensional locations of the image dataset with a neural radiance field model to generate one or more view synthesis renderings, wherein the one or more view synthesis renderings are descriptive of different views of the scene; evaluating a loss function that evaluates a difference between the one or more view synthesis renderings and the one or more ground truth patches; and adjusting one or more parameters of the neural radiance field model based at least in part on the loss function.
2. The system of any preceding claim, wherein generating one or more ground truth patches comprises: processing an image of the plurality of images to determine a portion of the image descriptive of an object in the scene; and generating the one or more ground truth patches by segmenting the portion of the image descriptive of the object.
3. The system of any preceding claim, wherein the operations further comprise: obtaining an input dataset, wherein the input dataset comprises one or more respective input locations; processing the input dataset with the neural radiance field model to generate a novel view rendering; and providing the novel view rendering for display.
4. The system of claim 3, wherein the plurality of images are descriptive of one or more input views of the scene; and wherein the novel view rendering is descriptive of one or more output views, wherein the one or more output views differ from the one or more input views.
5. The system of any preceding claim, wherein the one or more view synthesis renderings comprise one or more predicted patches.
6. The system of any preceding claim, wherein the loss function comprises at least one of a perceptual loss or a discriminator loss.
7. The system of any preceding claim, wherein adjusting the one or more parameters of the neural radiance field model comprises adjusting the one or more parameters to maximize a log likelihood of output renderings from the neural radiance field model.
8. The system of any preceding claim, wherein the operations further comprise: processing the one or more view synthesis renderings with a discriminator model to generate a discriminator output; and adjusting the one or more parameters of the neural radiance field model based on the discriminator output.
9. The system of claim 8, wherein the discriminator model comprises a convolutional discriminator, and wherein the discriminator model is part of a generative adversarial network.
10. The system of any preceding claim, wherein the loss function comprises an adversarial loss.
11. A computer-implemented method, the method comprising: obtaining, by a computing system comprising one or more processors, a training dataset, wherein the training dataset comprises a plurality of images and a plurality of respective three-dimensional locations, and wherein the plurality of images depict a scene; generating, by the computing system, a plurality of image patches based on the plurality of images, wherein each image patch comprises a proper subset of one of the plurality of images; processing, by the computing system, the plurality of respective three-dimensional locations with a neural radiance field model to generate one or more patch renderings, wherein the one or more patch renderings are descriptive of views of the scene; obtaining, by the computing system, a flow model, wherein the flow model comprises a pre-trained model trained on a flow training dataset; processing, by the computing system, the one or more patch renderings with the flow model to generate a flow output; and adjusting, by the computing system, one or more parameters of the neural radiance field model based at least in part on the flow output.
12. The method of any preceding claim, further comprising: evaluating, by the computing system, a ground truth loss function that evaluates a difference between the one or more patch renderings and one or more of the plurality of image patches; and adjusting, by the computing system, one or more parameters of the neural radiance field model based at least in part on the ground truth loss function.
13. The method of any preceding claim, further comprising: storing, by the computing system, the plurality of image patches in a database.
14. The method of any preceding claim, wherein the one or more patch renderings comprise one or more color predictions and one or more depth predictions.
15. The method of any preceding claim, wherein the flow output comprises a geometry regularization.
16. One or more non-transitory computer-readable media that collectively store instructions that, when executed by one or more computing devices, cause the one or more computing devices to perform operations, the operations comprising: obtaining an input dataset, wherein the input dataset comprises one or more locations, wherein the one or more locations are descriptive of a position in an environment; processing the input dataset with a neural radiance field model to generate one or more novel view renderings, wherein the novel view rendering comprises a view of at least a portion of the environment; wherein the neural radiance field model has been trained by comparing patches from a training dataset to generated predicted view renderings, wherein the patches are generated by segmenting one or more training images; and providing the one or more novel view renderings for display.
17. The one or more non-transitory computer-readable media of any preceding claim, wherein the neural radiance field model comprises: a first model configured to process an input dataset; and a second model configured to process a neural radiance field output generated by a neural radiance field model.
18. The one or more non-transitory computer-readable media of claim 17, wherein processing the input dataset with the neural radiance field model comprises: processing the input dataset with the first model to generate neural radiance field data.
19. The one or more non-transitory computer-readable media of claim 18, wherein processing the input dataset with the neural radiance field model comprises: processing the neural radiance field data with the second model to generate the one or more novel view renderings.
20. The one or more non-transitory computer-readable media of any preceding claim, wherein the input dataset comprises one or more view directions.
PCT/US2022/047539 2021-11-15 2022-10-24 Robustifying nerf model novel view synthesis to sparse data WO2023086198A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22812912.8A EP4392935A1 (en) 2021-11-15 2022-10-24 Robustifying nerf model novel view synthesis to sparse data
CN202280075411.XA CN118251698A (en) 2021-11-15 2022-10-24 Novel view synthesis of robust NERF model for sparse data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163279445P 2021-11-15 2021-11-15
US63/279,445 2021-11-15

Publications (1)

Publication Number Publication Date
WO2023086198A1 true WO2023086198A1 (en) 2023-05-19

Family

ID=84361960

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/047539 WO2023086198A1 (en) 2021-11-15 2022-10-24 Robustifying nerf model novel view synthesis to sparse data

Country Status (3)

Country Link
EP (1) EP4392935A1 (en)
CN (1) CN118251698A (en)
WO (1) WO2023086198A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883587A (en) * 2023-06-15 2023-10-13 北京百度网讯科技有限公司 Training method, 3D object generation method, device, equipment and medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
IAN J GOODFELLOW ET AL: "Generative Adversarial Nets", NIPS'14 PROCEEDINGS OF THE 27TH INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS, vol. 2, 8 December 2014 (2014-12-08), pages 1 - 9, XP055572979, DOI: http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf *
JAIN AJAY ET AL: "Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis", 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), IEEE, 10 October 2021 (2021-10-10), pages 5865 - 5874, XP034093504, DOI: 10.1109/ICCV48922.2021.00583 *
KANGLE DENG ET AL: "Depth-supervised NeRF: Fewer Views and Faster Training for Free", 6 July 2021, ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, XP091008610 *
MENG QUAN ET AL: "GNeRF: GAN-based Neural Radiance Field without Posed Camera", 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), IEEE, 10 October 2021 (2021-10-10), pages 6331 - 6341, XP034093333, DOI: 10.1109/ICCV48922.2021.00629 *
RADFORD ALEC ET AL: "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", 7 January 2016 (2016-01-07), pages 1 - 16, XP055786755, Retrieved from the Internet <URL:https://arxiv.org/pdf/1511.06434.pdf> [retrieved on 20210317] *


Also Published As

Publication number Publication date
CN118251698A (en) 2024-06-25
EP4392935A1 (en) 2024-07-03

Similar Documents

Publication Publication Date Title
US11232286B2 (en) Method and apparatus for generating face rotation image
EP4150581A1 (en) Inverting neural radiance fields for pose estimation
WO2023129190A1 (en) Generative modeling of three dimensional scenes and applications to inverse problems
EP4377898A1 (en) Neural radiance field generative modeling of object classes from single two-dimensional views
US20240119697A1 (en) Neural Semantic Fields for Generalizable Semantic Segmentation of 3D Scenes
US20240096001A1 (en) Geometry-Free Neural Scene Representations Through Novel-View Synthesis
CN115131218A (en) Image processing method, image processing device, computer readable medium and electronic equipment
CN116563682A (en) Attention scheme and strip convolution semantic line detection method based on depth Hough network
WO2023086198A1 (en) Robustifying nerf model novel view synthesis to sparse data
Abbas et al. Improving deep learning-based image super-resolution with residual learning and perceptual loss using SRGAN model
US20230360181A1 (en) Machine Learning for High Quality Image Processing
CN115066691A (en) Cyclic unit for generating or processing a sequence of images
US20230342890A1 (en) High Resolution Inpainting with a Machine-learned Augmentation Model and Texture Transfer
CN117255998A (en) Unsupervised learning of object representations from video sequences using spatial and temporal attention
CN114764746A (en) Super-resolution method and device for laser radar, electronic device and storage medium
CN114529899A (en) Method and system for training convolutional neural networks
US12026892B2 (en) Figure-ground neural radiance fields for three-dimensional object category modelling
US20230130281A1 (en) Figure-Ground Neural Radiance Fields For Three-Dimensional Object Category Modelling
Chen et al. An image denoising method of picking robot vision based on feature pyramid network
EP4350632A2 (en) Method and appratus with neural rendering based on view augmentation
CN117058472B (en) 3D target detection method, device and equipment based on self-attention mechanism
Liu et al. Stylized image resolution enhancement scheme based on an improved convolutional neural network in cyber‐physical systems
US20230177722A1 (en) Apparatus and method with object posture estimating
US20220171959A1 (en) Method and apparatus with image processing
US20230085156A1 (en) Entropy-based pre-filtering using neural networks for streaming applications

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase; Ref document number: 18012270; Country of ref document: US
121 Ep: the epo has been informed by wipo that ep was designated in this application; Ref document number: 22812912; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase; Ref document number: 2022812912; Country of ref document: EP
ENP Entry into the national phase; Ref document number: 2022812912; Country of ref document: EP; Effective date: 20240327