WO2021178919A1 - Systems and methods for depth estimation by learning triangulation and densification of sparse points for multi-view stereo
- Publication number: WO2021178919A1
- Application number: PCT/US2021/021239
- Authority: WO (WIPO/PCT)
- Prior art keywords: image, encoder, points, descriptors, depth
Classifications
- G06T7/593: Image analysis; depth or shape recovery from multiple images, from stereo images
- G06T7/596: Depth or shape recovery from multiple images, from three or more stereo images
- G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
- G06T9/002: Image coding using neural networks
- G06T2207/10012: Image acquisition modality; stereo images
- G06T2207/10021: Stereoscopic video; stereoscopic image sequence
- G06T2207/10028: Range image; depth image; 3D point clouds
- G06T2207/20081: Special algorithmic details; training, learning
- G06T2207/20084: Artificial neural networks [ANN]
- H04N13/128: Stereoscopic/multi-view video systems; adjusting depth or disparity
- H04N13/15: Processing image signals for colour aspects of image signals
- H04N13/161: Encoding, multiplexing or demultiplexing different image signal components
- H04N13/282: Image signal generators for three or more geometrical viewpoints, e.g. multi-view systems
- H04N2013/0081: Depth or disparity estimation from stereoscopic image signals
Definitions
- one embodiment is directed to a method for estimating depth of features in a scene from multi-view images.
- multi-view images are obtained, including an anchor image of the scene and a set of reference images of the scene. This may be accomplished by one or more suitable cameras, such as cameras of an XR system.
- the anchor image and reference images are passed through a shared RGB encoder and descriptor decoder which (i) outputs a respective descriptor field of descriptors for the anchor image and each reference image, (ii) detects interest points in the anchor image in conjunction with relative poses to determine a search space in the reference images from alternate view-points, and (iii) outputs intermediate feature maps.
- the respective descriptors are sampled in the search space of each reference image to determine descriptors in the search space, and the identified descriptors are matched with descriptors for the interest points in the anchor image.
- the matched descriptors are referred to as matched keypoints.
- the matched keypoints are triangulated using singular value decomposition (SVD) to output 3D points.
- the 3D points are passed through a sparse depth encoder to create a sparse depth image from the 3D points and output feature maps.
- a depth decoder then generates a dense depth image based on the output feature maps from the sparse depth encoder and the intermediate feature maps from the RGB encoder.
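- By way of illustration only, the data flow of the steps above might be composed as in the following sketch (PyTorch-style Python). The sub-networks are tiny stand-ins rather than the ResNet-50 / U-Net architectures described below, and the helper callables, module names, and channel widths are hypothetical placeholders:

```python
# Hypothetical end-to-end composition of the steps above. The sub-networks here
# are tiny stand-ins (single conv layers), NOT the ResNet-50 / U-Net
# architectures of the actual embodiments; only the data flow is illustrative.
import torch
import torch.nn as nn

class SparseToDenseMVS(nn.Module):
    def __init__(self, descriptor_dim=128):
        super().__init__()
        self.rgb_encoder = nn.Conv2d(3, 64, 3, stride=8, padding=1)        # stand-in RGB encoder (1/8 res)
        self.detector_head = nn.Conv2d(64, 1, 1)                           # stand-in interest point decoder
        self.descriptor_head = nn.Conv2d(64, descriptor_dim, 1)            # stand-in descriptor decoder
        self.sparse_depth_encoder = nn.Conv2d(1, 16, 3, stride=8, padding=1)
        self.depth_decoder = nn.Sequential(                                # stand-in dense depth decoder
            nn.Conv2d(64 + 16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, anchor, references, poses, intrinsics,
                match_and_triangulate, impute_sparse_depth):
        # (i) shared encoder plus descriptor decoder on anchor and reference images
        anchor_feat = self.rgb_encoder(anchor)
        heat = self.detector_head(anchor_feat)                    # interest points: anchor only
        descs = {"anchor": self.descriptor_head(anchor_feat)}
        for i, ref in enumerate(references):
            descs[i] = self.descriptor_head(self.rgb_encoder(ref))
        # (ii)-(iii) epipolar matching and SVD triangulation (user-supplied; see later sketches)
        points_3d, points_2d = match_and_triangulate(heat, descs, poses, intrinsics)
        # (iv) impute a sparse depth image from the 3D points and encode it
        sparse_depth = impute_sparse_depth(points_3d, points_2d, anchor.shape[-2:])
        sparse_feat = self.sparse_depth_encoder(sparse_depth)
        # (v) densify using both the RGB features and the sparse depth features
        return self.depth_decoder(torch.cat([anchor_feat, sparse_feat], dim=1))
```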
- the shared RGB encoder and descriptor decoder may comprise two encoders, including an RGB image encoder and a sparse depth image encoder, and three decoders, including an interest point detection decoder, a descriptor decoder, and a dense depth prediction decoder.
- the shared RGB encoder and descriptor decoder may be a fully-convolutional neural network configured to operate on a full resolution of the anchor image and reference images.
- the method may further comprise feeding the feature maps from the RGB encoder into a first task-specific decoder head to determine weights for the detecting of interest points in the anchor image and outputting interest point descriptions.
- the descriptor decoder may comprise a U-Net-like architecture to fuse fine and coarse level image information for matching the identified descriptors with descriptors for the interest points.
- the search space may be constrained to a respective epipolar line in the reference images plus a fixed offset on either side of the epipolar line, and within a feasible depth sensing range along the epipolar line.
- bilinear sampling may be used by the shared RGB encoder and descriptor decoder to output the respective descriptors at desired points in the descriptor field.
- the step of triangulating the matched keypoints comprises estimating respective two-dimensional (2D) positions of the interest points by computing a softmax across the spatial axes to output cross-correlation maps; performing a soft-argmax operation to calculate the 2D positions of the matched points as the center of mass of the corresponding cross-correlation maps; performing a linear algebraic triangulation from the 2D estimates; and using singular value decomposition (SVD) to output 3D points.
- another embodiment is directed to a cross reality (XR) system. The cross reality system comprises a head-mounted display device having a display system.
- the head-mounted display may have a pair of near-eye displays in an eyeglasses-like structure.
- a computing system is in operable communication with the head-mounted display.
- a plurality of camera sensors are in operable communication with the computing system.
- the computing system is configured to estimate depths of features in a scene from a plurality of multi-view images captured by the camera sensors using any of the methods described above.
- the process may include any one or more of the additional aspects of the cross reality system described above.
- the process may include obtaining multi-view images from the camera sensors, including an anchor image of the scene and a set of reference images of the scene within a field of view of the camera sensors; passing the anchor image and reference images through a shared RGB encoder and descriptor decoder which (i) outputs a respective descriptor field of descriptors for the anchor image and each reference image, (ii) detects interest points in the anchor image in conjunction with relative poses to determine a search space in the reference images from alternate view-points, and (iii) outputs intermediate feature maps; sampling the respective descriptors in the search space of each reference image and matching the identified descriptors with descriptors for the interest points in the anchor image, such matched descriptors referred to as matched keypoints; triangulating the matched keypoints using singular value decomposition (SVD) to output 3D points; passing the 3D points through a sparse depth encoder to create a sparse depth image from the 3D points and output feature maps; and generating, by a depth decoder, a dense depth image based on the output feature maps from the sparse depth encoder and the intermediate feature maps from the RGB encoder.
- FIG. 1 is a schematic diagram of an exemplary cross reality system for providing a cross reality experience, according to one embodiment.
- FIG. 2 is a schematic diagram of a method for depth estimation of a scene, according to one embodiment.
- Fig. 3 is a block diagram of the architecture of a shared RGB encoder and descriptor decoder used in the method of Fig. 2, according to one embodiment.
- Fig. 4 illustrates a process for restricting the range of the search space using epipolar sampling and depth range sampling, as used in the method of Fig. 2, according to one embodiment.
- Fig. 5 is a block diagram illustrating the architecture for a key-point network, as used in the method of Fig. 2, according to one embodiment.
- Fig. 6 illustrates a qualitative comparison between an example of the method of Fig. 2 and various other different methods.
- Fig. 7 shows sample 3D reconstructions of the scene from the estimated depth maps in the example of the method of Fig. 2, described herein.
- Fig. 8 shows a Table 1 having a comparison of the performance of different descriptors on ScanNet.
- Fig. 9 shows a Table 2 having a comparison of the performance of depth estimation on ScanNet.
- Fig. 10 shows a Table 3 having a comparison of the performance of depth estimation on ScanNet for different numbers of images.
- Fig. 11 shows a Table 4 having a comparison of depth estimation on Sun3D.
- Fig. 12 sets forth an equation for a process in which the descriptor of each interest point is convolved with the descriptor field along its corresponding epipolar line for each image view-point, as used in the method of Fig. 2, according to one embodiment.
- Figs. 13-16 set forth equations for a process for an algebraic triangulation to obtain 3D points as used in the method of Fig. 2, according to one embodiment.
- systems and methods for estimating depths of features in a scene or environment surrounding a user of a spatial computing system may also be implemented independently of XR systems, and the embodiments depicted herein are described in relation to XR systems for illustrative purposes only.
- the XR system 100 includes a head-mounted display device 2 (also referred to as a head worn viewing component 2), a hand-held controller 4 (also referred to as a hand-held controller component 4), and an interconnected auxiliary computing system or controller 6 (also referred to as an interconnected auxiliary computing system or controller component 6) which may be configured to be worn as a belt pack or the like on the user.
- the head-mounted display device 2 includes two depicted optical elements 20 through which the user may see the world around them along with video images and visual components produced by the associated system components, including a pair of image sources (e.g., micro-display panels) and viewing optics for displaying computer generated images on the optical elements 20, for an augmented reality experience.
- the head-mounted display device 2 and pair of image sources are lightweight, low-cost, have a small form-factor, have a wide virtual image field of view, and are as transparent as possible.
- the XR system 100 also includes various sensors configured to provide information pertaining to the environment around the user, including but not limited to various camera type sensors 22, 24, 26 (such as monochrome, color/RGB, and/or thermal), depth camera sensors 28, and/or sound sensors 30 (such as microphones).
- the XR system 100 is configured to present virtual image information in multiple focal planes (for example, two or more) in order to be practical for a wide variety of use-cases without exceeding an acceptable allowance for vergence- accommodation mismatch.
- a user wears an augmented reality system such as the XR system 100 depicted in Fig. 1, which may also be termed a “spatial computing” system in relation to such system’s interaction with the three dimensional world around the user when operated.
- the cameras 22, 24, 26 and computing system 6 are configured to map the environment around the user, and/or to create a “mesh” of such environment, comprising various points representative of the geometry of various objects within the environment around the user, such as walls, floors, chairs, and the like.
- the spatial computing system may be configured to map or mesh the environment around the user, and to run or operate software, such as that available from Magic Leap, Inc., of Plantation, Florida, which may be configured to utilize the map or mesh of the room to assist the user in placing, manipulating, visualizing, creating, and modifying various objects and elements in the three-dimensional space around the user.
- the XR system 100 may also be operatively coupled to additional connected resources 8, such as other computing systems, by cloud or other connectivity configurations.
- the presently disclosed systems and methods learn the sparse 3D landmarks in conjunction with the sparse-to-dense formulation in an end-to-end manner so as to (a) remove dependence on a cost volume in the MVS technique, thus significantly reducing compute, (b) complement camera pose estimation using sparse VIO or SLAM by reusing detected interest points and descriptors, (c) utilize geometry-based MVS concepts to guide the algorithm and improve the interpretability, and (d) benefit from the accuracy and efficiency of sparse-to-dense techniques.
- the network in the present systems and methods is a multitask model (see [Ref. ]) having an encoder-decoder structure composed of two encoders, one for the RGB image and one for the sparse depth image, and three decoders: one for interest point detection, one for descriptors, and one for the dense depth prediction.
- a differentiable module is also utilized that efficiently triangulates points using geometric priors and forms the critical link between the interest point decoder, descriptor decoder, and the sparse depth encoder enabling end-to-end training.
- One of the challenges in spatial computing relates to the utilization of data captured by various operatively coupled sensors (such as elements 22, 24, 26, 28 of the system of Fig. 1) of the XR system 100 in making determinations useful and/or critical to the user, such as in computer vision and/or object recognition challenges that may, for example, relate to the three-dimensional world around a user.
- Disclosed herein are methods and systems for generating a 3D reconstruction of a scene, such as the 3D environment surrounding the user of the XR system 100, using only RGB images, such as the RGB images from the cameras 22, 24, and 26, without using depth data from the depth sensors 28.
- the present disclosure introduces an approach for depth estimation by learning triangulation and densification of sparse points for multi-view stereo.
- the presently disclosed systems and methods utilize an efficient depth estimation approach by first (a) detecting and evaluating descriptors for interest points, then (b) learning to match and triangulate a small set of interest points, and finally (c) densifying this sparse set of 3D points using CNNs.
- An end-to-end network efficiently performs all three steps within a deep learning framework and is trained with intermediate 2D image and 3D geometric supervision, along with depth supervision.
- the first step of the presently disclosed method complements pose estimation using interest point detection and descriptor learning.
- the present methods are shown to produce state-of-the-art results on depth estimation with lower compute for different scene lengths. Furthermore, this method generalizes to newer environments and the descriptors output by the network compare favorably to strong baselines.
- the sparse 3D landmarks are learned in conjunction with the sparse to dense formulation in an end-to-end manner so as to (a) remove the dependence on a cost volume as in the MVS technique, thus, significantly reducing computational costs, (b) complement camera pose estimation using sparse VIO or SLAM by reusing detected interest points and descriptors, (c) utilize geometry-based MVS concepts to guide the algorithm and improve the interpretability, and (d) benefit from the accuracy and efficiency of sparse-to-dense techniques.
- the network used in the method is a multitask model (e.g., see [Ref. ]) having an encoder-decoder structure composed of two encoders, one for the RGB image and one for the sparse depth image, and three decoders: one for interest point detection, one for descriptors, and one for the dense depth prediction.
- the method also utilizes a differentiable module that efficiently triangulates points using geometric priors and forms the critical link between the interest point decoder, descriptor decoder, and the sparse depth encoder enabling end-to-end training.
- One embodiment of a method 110, as well as a system 110, for depth estimation of a scene can be broadly sub-divided into three steps, as illustrated in the schematic diagram of Fig. 2.
- the target or anchor image 114 and the multi-view images 116 are passed through a shared RGB encoder and descriptor decoder 118 (including an RGB image encoder 119, a detector decoder 121, and a descriptor decoder 123) to output a descriptor field 120 for each image 114, 116.
- Interest points 122 are also detected for the target or the anchor image 114.
- the interest points 122 in the anchor image 114 in conjunction with the relative poses 126 are used to determine the search space in the reference images 116 from alternate view-points.
- Descriptors 132 are sampled in the search space using an epipolar sampler 127 and a point sampler 129 to output sampled descriptors 128, which are matched by a soft matcher 130 with descriptors 128 for the interest points 122.
- the matched keypoints 134 are triangulated using SVD by a triangulation module 136 to output 3D points 138.
- the output 3D points 138 are used by a sparse depth encoder 140 to create a sparse depth image.
- the output feature maps from the sparse depth encoder 140 and the intermediate feature maps from the RGB encoder 119 are collectively used to inform the depth decoder 144 and output a dense depth image 146.
- the shared RGB encoder and descriptor decoder 118 is composed of two encoders, the RGB image encoder 119 and the sparse depth image encoder 140, and three decoders, the detector decoder 121 (also referred to as the interest point detector decoder 121), the descriptor decoder 123, and the dense depth decoder 144 (also referred to as dense depth predictor decoder 144).
- the shared RGB encoder and descriptor decoder 118 may comprise a SuperPoint-like (see [Ref. 9]) formulation of a fully-convolutional neural network architecture which operates on a full-resolution image and produces interest point detections accompanied by fixed-length descriptors.
- the model has a single, shared encoder to process and reduce the input image dimensionality.
- the feature maps from the RGB encoder 119 feed into two task-specific decoder "heads", which learn weights for interest point detection and interest point description. This joint formulation of interest point detection and description in SuperPoint enables sharing compute for the detection and description tasks, as well as the downstream task of depth estimation.
- SuperPoint was trained on grayscale images with focus on interest point detection and description for continuous pose estimation on high frame rate video streams, and hence, has a relatively shallow encoder.
- the present method is interested in image sequences with sufficient baseline, and consequently longer intervals between subsequent frames.
- SuperPoint's shallow backbone, while suitable for sparse point analysis, has limited capacity for the downstream task of dense depth estimation.
- the shallow backbone is replaced with a ResNet-50 (see [Ref. 16]) encoder which balances efficiency and performance.
- the output resolution of the interest point detector decoder 121 is identical to that of SuperPoint.
- the method 110 may utilize a U-Net-like (see [Ref. ]) architecture for the descriptor decoder 123 to fuse fine and coarse level image information. The descriptor decoder 123 outputs an N-dimensional descriptor tensor 120 at 1/8th the image resolution, similar to SuperPoint. This architecture is illustrated in Fig. 3.
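- As a non-limiting illustration of such an encoder with task-specific heads, the sketch below builds a ResNet-50 backbone with a detector head and a coarse-to-fine descriptor head at 1/8 resolution. The channel widths, the 65-way cell output (borrowed from SuperPoint), and the fusion scheme are assumptions for illustration, not the exact layers of the disclosed network:

```python
# A rough sketch of the shared encoder with interest point and descriptor heads.
# The channel sizes, the 65-way cell output (borrowed from SuperPoint), and the
# coarse/fine fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class SharedEncoderHeads(nn.Module):
    def __init__(self, descriptor_dim=128):
        super().__init__()
        r = torchvision.models.resnet50(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)           # 1/4 resolution
        self.layer1, self.layer2, self.layer3 = r.layer1, r.layer2, r.layer3   # 1/4, 1/8, 1/16
        self.detector = nn.Conv2d(512, 65, 1)           # interest point logits per 8x8 cell (+ "no point" bin)
        self.desc = nn.Sequential(                      # fuse coarse (1/16) and fine (1/8) features
            nn.Conv2d(512 + 1024, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, descriptor_dim, 1))

    def forward(self, image):                            # image: (B, 3, H, W), H and W divisible by 16
        f4 = self.layer1(self.stem(image))               # (B, 256, H/4, W/4)
        f8 = self.layer2(f4)                             # (B, 512, H/8, W/8)
        f16 = self.layer3(f8)                            # (B, 1024, H/16, W/16)
        logits = self.detector(f8)                       # detector output at SuperPoint's resolution
        up = F.interpolate(f16, size=f8.shape[-2:], mode="bilinear", align_corners=False)
        descriptors = F.normalize(self.desc(torch.cat([f8, up], dim=1)), dim=1)  # (B, D, H/8, W/8)
        return logits, descriptors, f8                   # f8 can be reused by the dense depth decoder
```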
- the interest point detector network is trained by distilling the output of the original SuperPoint network and the descriptors are trained by the matching formulation described below.
- the previous step provides interest points for the anchor image and descriptors for all images, i.e., the anchor image and full set of reference images.
- the next step 124 of the method 110 includes point matching and triangulation.
- a naive approach would be to match descriptors of the interest points 122 sampled from the descriptor field 120 of the anchor image 114 to all possible positions in each reference image 116.
- this is computationally prohibitive.
- the method 110 invokes geometrical constraints to restrict the search space and improve efficiency.
- the method 110 only searches along the epipolar line in the reference images (see [Ref. 14]).
- the matched point is guaranteed to lie on the epipolar line in an ideal scenario.
- however, practical limitations in obtaining perfect pose lead to searching along the epipolar line with a small fixed offset on either side.
- the epipolar line stretches for depth values from −∞ to +∞.
- thus, the search space is constrained to lie within a feasible depth sensing range along the epipolar line, and the sampling rate is varied within this restricted range in order to obtain descriptor fields with the same output shape for implementation purposes, as illustrated in Fig. 4.
- Bilinear sampling is used to obtain the descriptors at the desired points in the descriptor field 120.
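- For illustration, a minimal sketch of this depth-bounded epipolar sampling and bilinear descriptor lookup is given below (PyTorch-style Python). The depth range, the number of samples, and the coordinate conventions are assumptions; the actual search-space construction is as described above and in Fig. 4:

```python
# A minimal sketch of sampling descriptors along a depth-bounded segment of the
# epipolar line. Variable names, the depth range, and the number of samples are
# assumptions for illustration.
import torch
import torch.nn.functional as F

def sample_epipolar_descriptors(desc_field, kp_xy, K_a, K_r, R, t, image_hw,
                                d_min=0.5, d_max=10.0, num_samples=100):
    """desc_field: (1, D, Hd, Wd) descriptor field of one reference image (any
    resolution, assumed to span the full image). kp_xy: (N, 2) float pixel
    coordinates of interest points in the anchor image. R (3, 3), t (3,): pose
    mapping anchor-frame points into the reference frame. image_hw: (H, W).
    Returns (N, S, 2) candidate pixel locations and (N, D, S) sampled descriptors."""
    N = kp_xy.shape[0]
    depths = torch.linspace(d_min, d_max, num_samples)                         # feasible depth range
    rays = (torch.inverse(K_a) @ torch.cat([kp_xy, torch.ones(N, 1)], 1).T).T  # (N, 3) unit-depth rays
    X = rays[:, None, :] * depths[None, :, None]                               # (N, S, 3) in anchor frame
    Xr = torch.einsum("ij,nsj->nsi", R, X) + t                                 # (N, S, 3) in reference frame
    proj = torch.einsum("ij,nsj->nsi", K_r, Xr)
    px = proj[..., :2] / proj[..., 2:3].clamp(min=1e-6)                        # (N, S, 2) points on the epipolar line
    # (A small offset perpendicular to the line could be added here to tolerate pose error.)
    H, W = image_hw
    grid = torch.stack([px[..., 0] / (W - 1), px[..., 1] / (H - 1)], dim=-1) * 2 - 1
    sampled = F.grid_sample(desc_field, grid[None], mode="bilinear", align_corners=True)  # (1, D, N, S)
    return px, sampled[0].permute(1, 0, 2)
```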
- the descriptor of each interest point 122 is convolved with the descriptor field 120 along its corresponding epipolar line for each image view-point, as illustrated in Equation (1) of Fig. 12.
- the advantage of the soft-argmax is that, rather than returning the index of the maximum, it allows gradients to flow back to the cross-correlation maps C_j,k from the output 2D positions of the matched points x_j,k.
- the soft-argmax operator is differentiable.
- to triangulate the matched keypoints into 3D points, a linear algebraic triangulation approach is used, as set forth in the equations of Figs. 13-16.
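- The following sketch illustrates the soft matching along the sampled epipolar segment and a standard DLT triangulation solved with SVD, in the spirit of Equation (1) and Figs. 13-16. The tensor layout and the optional per-view weights are illustrative assumptions, not the exact formulation of the disclosure:

```python
# A sketch of soft matching along the sampled epipolar segment followed by a
# standard DLT triangulation solved with SVD. The tensor layout and the optional
# per-view weights are illustrative assumptions.
import torch

def soft_match(anchor_desc, sampled_desc, sampled_px):
    """anchor_desc: (D,), sampled_desc: (D, S), sampled_px: (S, 2).
    Returns the soft-argmax 2D match position (2,) and the score weights (S,)."""
    scores = anchor_desc @ sampled_desc                  # cross-correlation along the segment
    weights = torch.softmax(scores, dim=0)               # differentiable "soft" maximum
    return (weights[:, None] * sampled_px).sum(0), weights

def triangulate_dlt(points_2d, proj_mats, weights=None):
    """points_2d: (V, 2) matched points in V views, proj_mats: (V, 3, 4) = K [R | t].
    Returns a 3D point (3,) from the SVD of the stacked DLT constraints."""
    V = points_2d.shape[0]
    weights = torch.ones(V) if weights is None else weights
    rows = []
    for j in range(V):
        x, y = points_2d[j]
        P = proj_mats[j]
        rows.append(weights[j] * (x * P[2] - P[0]))
        rows.append(weights[j] * (y * P[2] - P[1]))
    A = torch.stack(rows)                                 # (2V, 4)
    _, _, Vh = torch.linalg.svd(A)
    X = Vh[-1]                                            # right singular vector of the smallest singular value
    return X[:3] / X[3]
```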
- step 142 of method 110 including the densification of sparse depth points will be described.
- a key-point detector network provides the position of the points.
- the z coordinate of the triangulated points provides the depth.
- a sparse depth image of the same resolution as the input image is imputed with the depths of these sparse points. Note that the gradients can propagate from the sparse depth image back to the 3D key-points, all the way to the input image. This is akin to switch unpooling in SegNet (see [Ref. 1]).
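- A minimal sketch of this sparse depth imputation is shown below; the rounding of pixel locations and the tensor shapes are illustrative simplifications:

```python
# A small sketch of imputing a full-resolution sparse depth image from the
# triangulated points. Rounding the pixel locations is an illustrative
# simplification; gradients still flow back through the depth values.
import torch

def impute_sparse_depth(points_3d, points_2d, image_hw):
    """points_3d: (N, 3) triangulated points in the anchor camera frame,
    points_2d: (N, 2) their pixel locations in the anchor image, image_hw: (H, W).
    Returns a (1, 1, H, W) sparse depth image."""
    H, W = image_hw
    cols = points_2d[:, 0].round().long().clamp(0, W - 1)
    rows = points_2d[:, 1].round().long().clamp(0, H - 1)
    flat = rows * W + cols
    depth = points_3d.new_zeros(H * W)
    # index_put keeps the graph differentiable with respect to the z coordinates
    depth = depth.index_put((flat,), points_3d[:, 2])
    return depth.view(1, 1, H, W)
```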
- the sparse depth image is passed through an encoder network which is a narrower version of the image encoder network 119. More specifically, a ResNet-50 encoder is used with the channel width after each layer set to one fourth of that of the image encoder. These features are concatenated with the features obtained from the image encoder 119.
- a U-Net style decoder is used in which intermediate feature maps from both the image encoder and the sparse depth encoder are concatenated with the intermediate feature maps of the same resolution in the decoder, similar to [Ref. 6]. Deep supervision over 4 scales is provided. (See [Ref. 25]).
- a spatial pyramid pooling block is also included to encourage feature mixing at different receptive field sizes. (See [Refs. 15, 4]). The details of this architecture are shown in Fig. 5.
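- For illustration, a generic spatial pyramid pooling block of this kind might look as follows; the pooling scales and channel widths are assumptions, not the exact block used here:

```python
# An illustrative spatial pyramid pooling block of the kind referenced above;
# the pooling scales and channel widths are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPBlock(nn.Module):
    def __init__(self, channels, scales=(1, 2, 4, 8)):
        super().__init__()
        self.scales = scales
        self.convs = nn.ModuleList([nn.Conv2d(channels, channels // len(scales), 1) for _ in scales])
        self.fuse = nn.Conv2d(channels + (channels // len(scales)) * len(scales), channels, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        pooled = [F.interpolate(conv(F.adaptive_avg_pool2d(x, s)), size=(h, w),
                                mode="bilinear", align_corners=False)
                  for conv, s in zip(self.convs, self.scales)]
        return self.fuse(torch.cat([x] + pooled, dim=1))   # mixes several receptive field sizes
```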
- the overall training objective will now be described.
- the entire network is trained with a combination of (a) a cross entropy loss between the output tensor of the interest point detector decoder and ground truth interest point locations obtained from SuperPoint, (b) a smooth-L1 loss between the 2D points output after the soft-argmax and ground truth 2D point matches, (c) a smooth-L1 loss between the 3D points output after SVD triangulation and ground truth 3D points, (d) an edge-aware smoothness loss on the output dense depth map, and (e) a smooth-L1 loss over multiple scales between the predicted dense depth map output and the ground truth depth map.
- the overall training objective is a combination of the five loss terms above.
- the target frame is passed through SuperPoint in order to detect interest points, which are then distilled using the loss L_i while training our network.
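- A hedged sketch of combining these five supervision signals is given below; the loss weights, the use of generic cross-entropy and smooth-L1 functions, and the assumption that the first depth prediction is at full image resolution are all illustrative choices, not values fixed by this disclosure:

```python
# A hedged sketch of combining the five supervision signals (a)-(e). The loss
# weights are placeholders, generic cross-entropy / smooth-L1 functions stand in
# for the exact formulations, and depth_preds[0] is assumed to be at full image
# resolution (so the edge-aware term can use the image gradients directly).
import torch
import torch.nn.functional as F

def total_loss(det_logits, det_target,        # (a) interest point distillation (target: class indices)
               pts2d_pred, pts2d_gt,          # (b) 2D matches after soft-argmax
               pts3d_pred, pts3d_gt,          # (c) 3D points after SVD triangulation
               depth_preds, depth_gt, image,  # (d)-(e) dense depth over multiple scales
               lambdas=(1.0, 1.0, 1.0, 0.1, 1.0)):
    l_det = F.cross_entropy(det_logits, det_target)
    l_2d = F.smooth_l1_loss(pts2d_pred, pts2d_gt)
    l_3d = F.smooth_l1_loss(pts3d_pred, pts3d_gt)
    # (d) edge-aware smoothness: penalize depth gradients less where image gradients are large
    d = depth_preds[0]
    dx, dy = d[..., :, 1:] - d[..., :, :-1], d[..., 1:, :] - d[..., :-1, :]
    ix = (image[..., :, 1:] - image[..., :, :-1]).abs().mean(1, keepdim=True)
    iy = (image[..., 1:, :] - image[..., :-1, :]).abs().mean(1, keepdim=True)
    l_smooth = (dx.abs() * torch.exp(-ix)).mean() + (dy.abs() * torch.exp(-iy)).mean()
    # (e) multi-scale depth supervision (deep supervision over the decoder scales)
    l_depth = sum(F.smooth_l1_loss(p, F.interpolate(depth_gt, size=p.shape[-2:])) for p in depth_preds)
    w = lambdas
    return w[0] * l_det + w[1] * l_2d + w[2] * l_3d + w[3] * l_smooth + w[4] * l_depth
```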
- We set the length of the sampled descriptors along the epipolar line to be 100, although we found that the matching is robust even for lengths as small as 25.
- the ScanNet test set consists of 100 scans of unique scenes different from the 707 scenes in the training dataset.
- We use the evaluation protocol and metrics proposed in SuperPoint, namely the mean localization error (MLE), the matching score (MScore), repeatability (Rep), and the fraction of correct poses estimated using descriptor matches and the PnP algorithm at a 5° (5 degree) threshold for rotation and 5 cm for translation.
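- As an illustration of the combined pose metric, the sketch below recovers a pose from 2D-3D matches with OpenCV's PnP solver and applies the 5 degree / 5 cm thresholds. The OpenCV calls are real; the surrounding bookkeeping and variable names are assumptions:

```python
# A sketch of the combined pose metric: recover the pose from 2D-3D matches with
# PnP and count it as correct when rotation and translation errors fall under
# 5 degrees and 5 cm. The OpenCV calls are real; the bookkeeping is illustrative.
import numpy as np
import cv2

def pose_correct(pts3d, pts2d, K, R_gt, t_gt, rot_thresh_deg=5.0, trans_thresh_m=0.05):
    """pts3d: (N, 3) points in the first camera frame, pts2d: (N, 2) matched
    pixels in the second image, K: (3, 3) intrinsics, R_gt/t_gt: ground truth pose."""
    ok, rvec, tvec, _ = cv2.solvePnPRansac(pts3d.astype(np.float64),
                                           pts2d.astype(np.float64),
                                           K.astype(np.float64), None)
    if not ok:
        return False
    R_est, _ = cv2.Rodrigues(rvec)
    cos = np.clip((np.trace(R_est @ R_gt.T) - 1.0) / 2.0, -1.0, 1.0)   # angle of R_est * R_gt^T
    rot_err_deg = np.degrees(np.arccos(cos))
    trans_err_m = np.linalg.norm(tvec.ravel() - t_gt.ravel())
    return rot_err_deg < rot_thresh_deg and trans_err_m < trans_thresh_m
```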
- Table 1 in Fig. 8 shows the results of our detector and descriptor evaluation. Note that MLE and repeatability are detector metrics, MScore is a descriptor metric, and rotation@5 and translation@5 are combined metrics.
- We set the threshold for our detector at 0.0005, the same as that used during training. This results in a large number of interest points being detected (Num), which artificially inflates the repeatability score (Rep) in our favor, but has poor localization performance as indicated by the MLE metric.
- our MScore is comparable to SuperPoint although we trained our network to only match along the epipolar line, and not for the full image.
- Table 3 of Fig. 10 shows the performance for different numbers of images.
- We set the frame gap to be 20, 15, 12 and 10 for 2, 4, 5 and 7 frames, respectively. These gaps ensure that each set spans approximately similar volumes in 3D space, and that any performance improvement emerges from the network better using the available information as opposed to acquiring new information.
- the method disclosed herein outperforms all other methods on all three metrics for different sequence lengths. Closer inspection of the values indicates that DPSNet and GPMVSNet do not benefit from additional views, whereas MVDepthNet benefits from a small number of additional views but stagnates for more than 4 frames.
- the presently disclosed method shows steady improvement in all three metrics with additional views. This can be attributed to our point matcher and triangulation module, which naturally benefit from additional views.
- the presently disclosed methods for depth estimation provide an efficient depth estimation algorithm by learning to triangulate and densify sparse points in a multi-view stereo scenario.
- the methods disclosed herein have exceeded state-of-the-art results and demonstrated significantly better computational efficiency than competing methods. It is anticipated that these methods can be expanded on by incorporating more effective attention mechanisms for interest point matching, and better selection of supporting views for the anchor image.
- the methods may also incorporate deeper integration with the SLAM problem as depth estimation and SLAM are duals of each other.
- Appendix 1 The references listed below correspond to the references in brackets (“[Ref. ##]”), above; each of these references is incorporated by reference in its entirety herein.
- the invention includes methods that may be performed using the subject devices.
- the methods may comprise the act of providing such a suitable device. Such provision may be performed by the end user.
- the "providing" act merely requires the end user obtain, access, approach, position, set-up, activate, power-up or otherwise act to provide the requisite device in the subject method.
- Methods recited herein may be carried out in any order of the recited events which is logically possible, as well as in the recited order of events.
- any optional feature of the inventive variations described may be set forth and claimed independently, or in combination with any one or more of the features described herein.
- Reference to a singular item includes the possibility that there are plural of the same items present. More specifically, as used herein and in claims associated hereto, the singular forms "a," "an," "said," and "the" include plural referents unless specifically stated otherwise.
- use of the articles allows for "at least one" of the subject item in the description above as well as in claims associated with this disclosure.
- claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as “solely,” “only” and the like in connection with the recitation of claim elements, or use of a “negative” limitation.
Abstract
Systems and methods for estimating depths of features in a scene or environment surrounding a user of a spatial computing system, such as a virtual reality, augmented reality or mixed reality (collectively, cross reality) system, in an end-to-end process. The estimated depths can be utilized by a spatial computing system, for example, to provide an accurate and effective 3D cross reality experience.
Description
SYSTEMS AND METHODS FOR DEPTH ESTIMATION BY LEARNING TRIANGULATION AND DENSIFICATION OF SPARSE POINTS FOR MULTI-VIEW STEREO
FIELD OF THE INVENTION
[0001] The present invention is related to computing, learning network configurations, and connected mobile computing systems, methods, and configurations, and more specifically to systems and methods for estimating depths of features in a scene from multi-view images, which estimated depths may be used in mobile computing systems, methods, and configurations featuring at least one wearable component configured for virtual and/or augmented reality operation.
BACKGROUND:
[0002] Modern computing and display technologies have facilitated the development of systems for so called "virtual reality" ("VR"), "augmented reality" ("AR"), and/or "mixed reality" ("MR") environments or experiences, referred to collectively as "cross-reality" ("XR") environments or experiences. This can be done by presenting computer-generated imagery to a user through a head-mounted display. This imagery creates a sensory experience which immerses the user in a simulated environment. This data may describe, for example, virtual objects that may be rendered in a way that users sense or perceive as a part of a physical world and can interact with the virtual objects. The user may experience these virtual objects as a result of the data being rendered and presented through a user interface device, such as, for example, a head-mounted display device. The data may be displayed to the user to see, or may control audio that is played for the user to hear, or may control a tactile (or haptic) interface, enabling the user to experience touch sensations that the user senses or perceives as feeling the virtual object.
[0003] XR systems may be useful for many applications, spanning the fields of scientific visualization, medical training, engineering design and prototyping, tele-manipulation and tele-presence, and personal entertainment. VR systems typically involve presentation of digital or virtual image information without transparency to actual real-world visual input.
[0004] AR systems generally supplement a real-world environment with simulated elements. For example, AR systems may provide a user with a view of a surrounding real-world environment via a head-mounted display. Computer-generated imagery can also be presented on the head-mounted display to enhance the surrounding real-world environment. This computer-generated imagery can include elements which are contextually related to the surrounding real-world environment. Such elements can include simulated text, images, objects, and the like. MR systems also introduce simulated objects into a real-world environment, but these objects typically feature a greater degree of interactivity than in AR systems.
[0005] AR/MR scenarios often include presentation of virtual image elements in relationship to real-world objects. For example, an AR/MR scene is depicted wherein a user of an AR/MR technology sees a real-world scene featuring the environment surrounding the user, including structures, objects, etc. In addition to these features, the user of the AR/MR technology perceives that they "see" computer generated features (i.e., virtual objects), even though such features do not exist in the real-world environment. Accordingly, AR and MR, in contrast to VR, include one or more virtual objects in relation to real objects of the physical world. The virtual objects also interact with the real world objects, such that the AR/MR system may also be termed a "spatial computing" system in relation to the system's interaction with the 3D world surrounding the user. The experience of virtual objects interacting with real objects greatly enhances the user's enjoyment in using the XR system, and also opens the door for a variety of applications that present realistic and readily understandable information about how the physical world might be altered.
[0006] The visualization center of the brain gains valuable perception information from the motion of both eyes and components thereof relative to each other. Vergence movements (i.e., rolling movements of the pupils toward or away from each other to converge the lines of sight of the eyes to fixate upon an object) of the two eyes relative to each other are closely associated with accommodation (or focusing) of the lenses of the eyes. Under normal conditions, accommodating the eyes, or changing the focus of the lenses of the eyes, to focus upon an object at a different distance will automatically cause a matching change in vergence to the same distance, under a relationship known as the “accommodation-vergence reflex.” Likewise, a change in vergence will trigger a matching change in accommodation, under normal conditions. Working against this reflex, as do most conventional stereoscopic VR/AR/MR configurations, is known to produce eye fatigue, headaches, or other forms of discomfort in users.
[0007] Stereoscopic wearable glasses generally feature two displays - one for the left eye and one for the right eye - that are configured to display images with slightly different element presentation such that a three-dimensional perspective is perceived by the human visual system. Such configurations have been found to be uncomfortable for many users due to a mismatch between vergence and accommodation ("vergence-accommodation conflict") which must be overcome to perceive the images in three dimensions. Indeed, some users are not able to tolerate stereoscopic configurations. These limitations apply to VR, AR, and MR systems. Accordingly, most conventional VR/AR/MR systems are not optimally suited for presenting a rich, binocular, three-dimensional experience in a manner that will be comfortable and maximally useful to the user, in part because prior systems fail to address some of the fundamental aspects of the human perception system, including the vergence-accommodation conflict.
[0008] Various systems and methods have been disclosed for addressing the vergence-accommodation conflict. For example, U.S. Utility Patent Application Serial No. 14/555,585 discloses VR/AR/MR systems and methods that address the vergence-accommodation conflict by projecting light at the eyes of a user using one or more light-guiding optical elements such that the light and images rendered by the light appear to originate from multiple depth planes. The light-guiding optical elements are designed to in-couple virtual light corresponding to digital or virtual objects, propagate it by total internal reflection (TIR), and then out-couple the virtual light to display the virtual objects to the user's eyes. In AR/MR systems, the light-guiding optical elements are also designed to be transparent to light from (e.g., reflecting off of) actual real-world objects. Therefore, portions of the light-guiding optical elements are designed to reflect virtual light for propagation via TIR while being transparent to real-world light from real-world objects in AR/MR systems.
[0009] AR/MR scenarios often include interactions between virtual objects and a real-world physical environment. Similarly, some VR scenarios include interactions between completely virtual objects and other virtual objects. Delineating objects in the physical environment facilitates interactions with virtual objects by defining the metes and bounds of those interactions (e.g., by defining the extent of a particular structure or object in the physical environment). For instance, if an AR/MR scenario includes a virtual object (e.g., a tentacle or a fist) extending from a particular object in the physical environment, defining the extent of the object in three dimensions allows the AR/MR system to present a more realistic AR/MR scenario. Conversely, if the extent of objects is not defined or inaccurately defined, artifacts and errors will occur in the displayed images. For instance, a virtual object may appear to extend partially or entirely from midair adjacent an object instead of from the surface of the object. As another example, if an AR/MR scenario includes a virtual character walking on a particular horizontal surface in a physical environment, inaccurately defining the extent of the surface may result in the virtual character appearing to walk off of the surface without falling, and instead floating in midair.
[0010] Accordingly, depth sensing of scenes, such as a surrounding environment, is useful in a wide range of applications, ranging from cross reality systems to autonomous driving. Estimating depth of scenes can be broadly divided into two classes: active and passive sensing. Active sensing techniques include LiDAR, structured-light and time-of-flight (ToF) cameras, whereas depth estimation using a monocular camera or stereopsis of an array of cameras is termed passive sensing. Active sensors are currently the de-facto standard of applications requiring depth sensing due to good accuracy and low latency in varied environments (see [Ref. 44]). Numbered references in brackets ("[Ref. ##]") refer to the reference list appended below; each of these references is incorporated by reference in its entirety herein.
[0011] However, active sensors have their own set of limitations. LiDARs are prohibitively expensive and provide sparse measurements. Structured-light and ToF depth cameras have limited range and completeness due to the physics of light transport. Furthermore, they are power hungry and inhibit mobility critical for AR/VR applications on wearables. Consequently, computer vision researchers have pursued passive sensing techniques as a ubiquitous, cost-effective and energy-efficient alternative to active sensors. (See [Ref. 30]).
[0012] Passive depth sensing using stereo cameras requires a large baseline and careful calibration for accurate depth estimation. (See [Ref. 3]). A large baseline is infeasible for mobile devices like phones and wearables. An alternative is to use multi-view stereo (MVS) techniques for a moving monocular camera to estimate depth. MVS generally refers to the problem of reconstructing 3D scene structure from multiple images with known camera poses and intrinsics. (See [Ref. 14]). The unconstrained nature of camera motion alleviates the baseline limitation of stereo-rigs, and the algorithm benefits from multiple observations of the same scene from continuously varying viewpoints. (See [Ref. 17]). However, camera motion also makes depth estimation more challenging relative to rigid stereo-rigs due to pose uncertainty and added complexity of motion artifacts. Most MVS approaches involve building a 3D cost volume, usually with a plane sweep stereo approach. (See [Refs. 41, 18]). Accurate depth estimation using MVS relies on 3D convolutions on the cost volume, which is both memory as well as computationally expensive, scaling cubically with the resolution. Furthermore, redundant compute is added by ignoring useful image-level properties such as interest points and their descriptors, which are a necessary precursor to camera pose estimation, and hence, any MVS technique. This increases the overall cost and energy requirements for passive sensing.
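As a rough, illustrative calculation (the feature channel count, number of depth planes, and downsampling factor below are assumptions, not figures from this disclosure), the memory footprint of such a cost volume grows rapidly with resolution:

```python
# Back-of-the-envelope cost volume size: channels x depth planes x spatial
# resolution in float32. All specific numbers here are illustrative.
def cost_volume_mib(height, width, depth_planes=64, channels=32, downsample=4):
    voxels = (height // downsample) * (width // downsample) * depth_planes * channels
    return voxels * 4 / 2**20   # bytes -> MiB

for hw in [(480, 640), (960, 1280)]:
    print(hw, f"{cost_volume_mib(*hw):.0f} MiB")   # roughly 150 MiB and 600 MiB
# Doubling the image resolution alone quadruples the volume; refining the depth
# sampling in proportion gives the roughly cubic growth noted above.
```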
[0013] Passive sensing using a single image is fundamentally unreliable due to scale ambiguity in 2D images. Deep learning based monocular depth estimation approaches formulate the problem as depth regression (see [Refs. 10, 11]) and have reduced the performance gap to those of active sensors (see [Refs. 26, 24]), but are still far from being practical. Recently, sparse-to-dense depth estimation approaches have been proposed to remove the scale ambiguity and improve the robustness of monocular depth estimation. (See [Ref. 30]). Indeed, recent sparse-to-dense approaches with less than 0.5% depth samples have accuracy comparable to active sensors, with higher range and completeness. (See [Ref. 6]). However, these approaches assume accurate or seed depth samples from an active sensor, which is limiting. The alternative is to use the sparse 3D landmarks output from the best performing algorithms for Simultaneous Localization and Mapping (SLAM) (see [Ref. 31]) or Visual Inertial Odometry (VIO) (see [Ref. 33]). However, using depth evaluated from these sparse landmarks in lieu of depth from active sensors significantly degrades performance. (See [Ref. 43]). This is not surprising, as the learnt sparse-to-dense network ignores potentially useful cues, structured noise and biases present in the SLAM or VIO algorithm.
[0014] Sparse feature based methods are standard for SLAM or VIO techniques due to their high speed and accuracy. The detect-then-describe approach is the most common approach to sparse feature extraction, wherein interest points are detected and then described for a patch around the point. The descriptor encapsulates higher level information, which is missed by typical low-level interest points such as corners, blobs, etc. Prior to the deep learning revolution, classical systems like SIFT (see [Ref. 28]) and ORB (see [Ref. 37]) were ubiquitously used as descriptors for feature matching for low level vision tasks. Deep neural networks directly optimizing for the objective at hand have now replaced these hand engineered features across a wide array of applications. However, such an end-to-end network has remained elusive for SLAM (see [Ref. 32]) due to the components being non-differentiable. General purpose descriptors learned by methods such as SuperPoint (see [Ref. 9]), LIFT (see [Ref. 42]), and GIFT (see [Ref. 27]) aim to bridge the gap towards differentiable SLAM.
[0015] MVS approaches either directly reconstruct a 3D volume or output a depth map which can be flexibly used for 3D reconstruction or other applications. Methods of reconstructing 3D volumes (see [Refs. 41, 5]) are restricted to small spaces or isolated objects, either due to the high memory load of operating in a 3D voxelized space (see [Refs. 35, 39]), or due to the difficulty of learning point representations in complex environments (see [Ref. 34]). The use of multi-view images captured in indoor environments has progressed lately, starting with DeepMVS (see [Ref. 18]) which proposed a learned patch matching approach. MVDepthNet (see [Ref. 40]) and DPSNet (see [Ref. 19]) build a cost volume for depth estimation.
Recently, GP-MVSNet (see [Ref. 17]) built upon MVDepthNet to coherently fuse temporal information using Gaussian processes. All these methods utilize the plane sweep algorithm during some stage of depth estimation, resulting in an accuracy vs efficiency trade-off.
[0016] Sparse-to-dense depth estimation has also recently emerged as a way to supplement active depth sensors due to their range limitations when operating on a power budget, and to fill in depth in hard to detect regions such as dark or reflective objects. One approach was proposed by Ma et al. (see [Ref. 30]), which was followed by Chen et al. (see [Refs. 6, 43]), which introduced innovations in the representation and network architecture. A convolutional spatial propagation module is proposed in [Ref. 7] to in-fill the missing depth values. Recently, self-supervised approaches (see [Refs. 13, 12]) have been explored for the sparse-to-dense problem. (See [Ref. 29]).
[0017] It can be seen that multi-view stereo (MVS) represents an advantageous middle ground between the accuracy of active depth sensing and the practicality of monocular depth estimation. Cost volume based approaches employing 3D convolutional neural networks (CNNs) have considerably improved the accuracy of MVS systems. However, this accuracy comes at a high computational cost which impedes practical adoption.
[0018] Accordingly, there is a need for improved systems and methods for depth estimation of a scene which do not depend on costly and ineffective active depth sensing, and which improve upon the efficiency and/or accuracy of prior passive depth sensing techniques. In addition, the systems and methods for depth estimation should be implementable in XR systems having displays which are lightweight, low-cost, have a small form-factor, have a wide virtual image field of view, and are as transparent as possible.
SUMMARY
[0019] The embodiments disclosed herein are directed to systems and methods for estimating depths of features in a scene or environment surrounding a user of a spatial computing system, such as an XR system, in an end-to-end process. The estimated depths can be utilized by a spatial computing system, for example, to provide an accurate and effective 3D XR experience. The resulting 3D XR experience is displayable in a rich, binocular, three-
dimensional experience that is comfortable and maximally useful to the user, in part because it can present images in a manner which addresses some of the fundamental aspects of the human perception system, such as the vergence-accommodation mismatch. For instance, the estimated depths may be used to generate a 3D reconstruction having accurate depth data enabling the 3D images to be displayed in multiple focal planes. The 3D reconstruction may also enable accurate management of interactions between virtual objects, other virtual objects, and/or real world objects.
[0020] Accordingly, one embodiment is directed to a method for estimating depth of features in a scene from multi-view images. First, multi-view images are obtained, including an anchor image of the scene and a set of reference images of the scene. This may be accomplished by one or more suitable cameras, such as cameras of an XR system. The anchor image and reference images are passed through a shared RGB encoder and descriptor decoder which (i) outputs a respective descriptor field of descriptors for the anchor image and each reference image, (ii) detects interest points in the anchor image in conjunction with relative poses to determine a search space in the reference images from alternate view-points, and (iii) outputs intermediate feature maps. The respective descriptors are sampled in the search space of each reference image to determine descriptors in the search space, and the identified descriptors are matched with descriptors for the interest points in the anchor image. The matched descriptors are referred to as matched keypoints. The matched keypoints are triangulated using singular value decomposition (SVD) to output 3D points. The 3D points are passed through a sparse depth encoder to create a sparse depth image from the 3D points and output feature maps. A depth decoder then generates a dense depth image based on the output feature maps from the sparse depth encoder and the intermediate feature maps from the RGB encoder.
[0021] In another aspect of the method, the shared RGB encoder and descriptor decoder may comprise two encoders including an RGB image encoder and a sparse depth image encoder, and three decoders including an interest point detection decoder, a descriptor decoder, and a dense depth prediction decoder.
[0022] In still another aspect, the shared RGB encoder and descriptor decoder may be a fully-convolutional neural network configured to operate on a full resolution of the anchor image and reference images.
[0023] In yet another aspect, the method may further comprise feeding the feature maps from the RGB encoder into a first task-specific decoder head to determine weights for the detecting of interest points in the anchor image and outputting interest point descriptions.
[0024] In yet another aspect of the method, the descriptor decoder may comprise a U-Net like architecture to fuse fine and coarse level image information for matching the identified descriptors with descriptors for the interest points.
[0025] In another aspect of the method, the search space may be constrained to a respective epipolar line in the reference images plus a fixed offset on either side of the epipolar line, and within a feasible depth sensing range along the epipolar line.
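For illustration, the following sketch (not taken from the specification; the function name, uniform depth sampling, and numeric defaults are assumptions) shows one way such a depth-bounded epipolar search segment could be computed for a single interest point, by projecting the anchor-camera ray at sampled depths into a reference view:

```python
import numpy as np

def epipolar_search_points(x_anchor, K_a, K_r, T_ar, d_min=0.3, d_max=10.0, n_samples=100):
    """Sample candidate match locations in a reference image for one anchor pixel.

    x_anchor : (2,) pixel coordinates of the interest point in the anchor image.
    K_a, K_r : (3, 3) intrinsics of the anchor and reference cameras.
    T_ar     : (4, 4) rigid transform taking anchor-camera coordinates to
               reference-camera coordinates (relative pose).
    Returns an (n_samples, 2) array of pixel locations on the segment of the
    epipolar line corresponding to depths in [d_min, d_max].
    """
    # Back-project the anchor pixel to a unit-depth ray in anchor camera coordinates.
    ray = np.linalg.inv(K_a) @ np.array([x_anchor[0], x_anchor[1], 1.0])

    # Sample candidate depths within the feasible sensing range.
    depths = np.linspace(d_min, d_max, n_samples)

    # 3D points along the ray, in homogeneous coordinates.
    pts_a = np.concatenate([ray[None, :] * depths[:, None],
                            np.ones((n_samples, 1))], axis=1)      # (N, 4)

    # Transform into the reference camera frame and project with its intrinsics.
    pts_r = (T_ar @ pts_a.T).T[:, :3]                              # (N, 3)
    proj = (K_r @ pts_r.T).T
    return proj[:, :2] / proj[:, 2:3]                              # (N, 2) pixels
```

Descriptors would then be sampled (e.g., bilinearly) from the reference descriptor field at the returned pixel locations.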
[0026] In still another aspect of the method, bilinear sampling may be used by the shared RGB encoder and descriptor decoder to output the respective descriptors at desired points in the descriptor field.
[0027] In another aspect of the method, the step of triangulating the matched keypoints comprises estimating respective two dimensional (2D) positions of the interest points by computing a softmax across spatial axes to output cross-correlation maps; performing a soft-argmax operation to calculate the 2D position of joints as a center of mass of corresponding cross-correlation maps; performing a linear algebraic triangulation from the 2D estimates; and using a singular value decomposition (SVD) to output 3D points.
[0028] Another disclosed embodiment is directed to a cross reality (XR) system which is configured to estimate depths, and to utilize such depths as described herein. The cross reality system comprises a head-mounted display device having a display system. For example, the head-mounted display may have a pair of near-eye displays in an eyeglasses-like structure. A computing system is in operable communication with the head-mounted display. A plurality of camera sensors are in operable communication with the computing system. The computing system is configured to estimate depths of features in a scene from a plurality of multi-view images captured by the camera sensors using any of the methods described above. In additional aspects of the cross reality system, the process may include any one or more of the additional aspects described above. For instance, the process may include obtaining multi-view images, including an anchor image of the scene and a set of reference images of a scene within a field of view of the camera sensors, from the camera sensors; passing the anchor image and reference images through a shared RGB encoder and descriptor decoder which (i) outputs a respective descriptor field of descriptors for the anchor image and each reference image, (ii) detects interest points in the anchor image in conjunction with relative poses to determine a search space in the reference images from alternate view-points, and (iii) outputs intermediate feature maps; sampling the respective descriptors in the search space of each reference image to determine descriptors in the search space and matching the identified descriptors with descriptors for the interest points in the anchor image, such matched descriptors referred to as matched keypoints; triangulating the matched keypoints using singular value decomposition (SVD) to output 3D points; passing the 3D points through a sparse depth encoder to create a sparse depth image from the 3D points and output feature maps; and a depth decoder generating a dense depth image based on the output feature maps from the sparse depth encoder and the intermediate feature maps from the RGB encoder.
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] The drawings illustrate the design and utility of preferred embodiments of the present disclosure, in which similar elements are referred to by common reference numerals. In order to better appreciate how the above-recited and other advantages and objects of the present disclosure are obtained, a more particular description of the present disclosure briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the accompanying drawings. Understanding that these drawings depict only typical embodiments of the disclosure and are not therefore to be considered limiting of its scope, the disclosure will be described and explained with additional specificity and detail through the use of the accompanying drawings.
[0030] Fig. 1 is a schematic diagram of an exemplary cross reality system for providing a cross reality experience, according to one embodiment.
[0031] Fig. 2 is a schematic diagram of a method for depth estimation of a scene, according to one embodiment.
[0032] Fig. 3 is a block diagram of the architecture of a shared RGB encoder and descriptor decoder used in the method of Fig. 2, according to one embodiment.
[0033] Fig. 4 illustrates a process for restricting the range of the search space using epipolar sampling and depth range sampling, as used in the method of Fig. 2, according to one embodiment.
[0034] Fig. 5 is a block diagram illustrating the architecture for a key-point network, as used in the method of Fig. 2, according to one embodiment.
[0035] Fig. 6 illustrates a qualitative comparison between an example of the method of Fig. 2 and various other methods.
[0036] Fig. 7 shows sample 3D reconstructions of the scene from the estimated depth maps in the example of the method of Fig. 2, described herein.
[0037] Fig. 8 shows a Table 1 having a comparison of the performance of different descriptors on ScanNet.
[0038] Fig. 9 shows a Table 2 having a comparison of the performance of depth estimation on ScanNet.
[0039] Fig. 10 shows a Table 3 having a comparison of the performance of depth estimation on ScanNet for different numbers of images.
[0040] Fig. 11 shows a Table 4 having a comparison of depth estimation on Sun3D.
[0041] Fig. 12 sets forth an equation for a process for the descriptor of each interest point being convolved with the descriptor field along its corresponding epipolar line for each image view-point as used in the method of Fig. 2, according to one embodiment.
[0042] Figs. 13-16 set forth equations for a process for an algebraic triangulation to obtain 3D points as used in the method of Fig. 2, according to one embodiment.
DETAILED DESCRIPTION
[0043] The following describes various embodiments of systems and methods for estimating depths of features in a scene or environment surrounding a user of a spatial computing system, such as an XR system, in an end-to-end process. The various embodiments are described in detail with reference to the drawings, which are provided as illustrative examples of the disclosure to enable those skilled in the art to practice the disclosure. Notably, the figures and the examples below are not meant to limit the scope of the present disclosure. Where certain elements of the present disclosure may be partially or fully implemented using known components (or methods or processes), only those portions of such known components (or methods or processes) that are necessary for an understanding of the present disclosure will be described, and the detailed descriptions of other portions of such known components (or methods or processes) will be omitted so as not to obscure the disclosure.
Further, various embodiments encompass present and future known equivalents to the components referred to herein by way of illustration.
[0044] Furthermore, the systems and methods for estimating depths of features in a scene or environment surrounding a user of a spatial computing system may also be implemented independently of XR systems, and the embodiments depicted herein are described in relation to XR systems for illustrative purposes only.
[0045] Referring to Fig. 1, an exemplary XR system 100 according to one embodiment is illustrated. The XR system 100 includes a head-mounted display device 2 (also referred to as a head worn viewing component 2), a hand-held controller 4 (also referred to as a hand-held controller component 4), and an interconnected auxiliary computing system or controller 6 (also referred to as an interconnected auxiliary computing system or controller component 6) which may be configured to be worn as a belt pack or the like on the user. Each of these components is in operable communication (i.e., operatively coupled) with the others and with other connected resources 8 (such as cloud computing or cloud storage resources) via wired or wireless communication connections 10, 12, 14, 16, 17, 18, such as those specified by IEEE 802.11, Bluetooth (RTM), and other connectivity standards and configurations. The head-mounted display device 2 includes two depicted optical elements 20 through which the user may see the world around them along with video images and visual components produced by the associated system components, including a pair of image sources (e.g., micro-display panels) and viewing optics for displaying computer generated images on the optical elements 20, for an augmented reality experience. In the illustrated embodiment, the head-mounted display device 2 and pair of image sources are lightweight, low-cost, have a small form-factor, have a wide virtual image field of view, and are as transparent as possible. As illustrated in Fig. 1, the XR system 100 also includes various sensors configured to provide information pertaining to the environment around the user, including but not limited
to various camera type sensors 22, 24, 26 (such as monochrome, color/RGB, and/or thermal), depth camera sensors 28, and/or sound sensors 30 (such as microphones).
[0046] In addition, it is desirable that the XR system 100 is configured to present virtual image information in multiple focal planes (for example, two or more) in order to be practical for a wide variety of use-cases without exceeding an acceptable allowance for vergence-accommodation mismatch. U.S. Patent Application Serial Numbers 14/555,585, 14/690,401, 14/331,218, 15/481,255, 62/627,155, 62/518,539, 16/229,532, 16/155,564, 15/413,284, 16/020,541, 62/702,322, 62/206,765, 15/597,694, 16/221,065, 15/968,673, and 62/682,788, each of which is incorporated by reference herein in its entirety, describe various aspects of the XR system 100 and its components in more detail.
[0047] In various embodiments a user wears an augmented reality system such as the XR system 100 depicted in Fig. 1, which may also be termed a “spatial computing” system in relation to such system’s interaction with the three dimensional world around the user when operated. The cameras 22, 24, 26 and computing system 6 are configured to map the environment around the user, and/or to create a “mesh” of such environment, comprising various points representative of the geometry of various objects within the environment around the user, such as walls, floors, chairs, and the like. The spatial computing system may be configured to map or mesh the environment around the user, and to run or operate software, such as that available from Magic Leap, Inc., of Plantation, Florida, which may be configured to utilize the map or mesh of the room to assist the user in placing, manipulating, visualizing, creating, and modifying various objects and elements in the three-dimensional space around the user. As shown in Fig. 1, the XR system 100 may also be operatively coupled to additional connected resources 8, such as other computing systems, by cloud or other connectivity configurations.
[0048] It is understood that the methods, systems and configurations described herein are broadly applicable to various scenarios outside of the realm of wearable spatial computing such as the XR system 100, subject to the appropriate sensors and associated data being available.
[0049] In contrast to prior systems and methods for depth estimation of scenes, the presently disclosed systems and methods learn the sparse 3D landmarks in conjunction with the sparse-to-dense formulation in an end-to-end manner so as to (a) remove dependence on a cost volume in the MVS technique, thus significantly reducing compute, (b) complement camera pose estimation using sparse VIO or SLAM by reusing detected interest points and descriptors, (c) utilize geometry-based MVS concepts to guide the algorithm and improve the interpretability, and (d) benefit from the accuracy and efficiency of sparse-to-dense techniques. The network in the present systems and methods is a multitask model (see [Ref. 22]), comprised of an encoder-decoder structure composed of two encoders, one for the RGB image and one for the sparse depth image, and three decoders: one for interest point detection, one for descriptors and one for the dense depth prediction. A differentiable module is also utilized that efficiently triangulates points using geometric priors and forms the critical link between the interest point decoder, descriptor decoder, and the sparse depth encoder, enabling end-to-end training.
[0050] These methods and configurations are broadly applicable to various scenarios outside of the realm of wearable spatial computing, subject to the appropriate sensors and associated data being available.
[0051] One of the challenges in spatial computing relates to the utilization of data captured by various operatively coupled sensors (such as elements 22, 24, 26, 28 of the system of Fig. 1) of the XR system 100 in making determinations useful and/or critical to the user, such as in computer vision and/or object recognition challenges that may, for example, relate to the
three-dimensional world around a user. Disclosed herein are methods and systems for generating a 3D reconstruction of a scene, such as the 3D environment surrounding the user of the XR system 100, using only RGB images, such as the RGB images from the cameras 22, 24, and 26, without using depth data from the depth sensors 28.
[0052] In contrast to previous methods of depth estimation of scenes, such as indoor environments, the present disclosure introduces an approach for depth estimation by learning triangulation and densification of sparse points for multi-view stereo. Distinct from cost volume approaches, the presently disclosed systems and methods utilize an efficient depth estimation approach by first (a) detecting and evaluating descriptors for interest points, then (b) learning to match and triangulate a small set of interest points, and finally (c) densifying this sparse set of 3D points using CNNs. An end-to-end network efficiently performs all three steps within a deep learning framework and is trained with intermediate 2D image and 3D geometric supervision, along with depth supervision. Crucially, the first step of the presently disclosed method complements pose estimation using interest point detection and descriptor learning. The present methods are shown to produce state-of-the-art results on depth estimation with lower compute for different scene lengths. Furthermore, this method generalizes to newer environments, and the descriptors output by the network compare favorably to strong baselines.
[0053] In the presently disclosed method, the sparse 3D landmarks are learned in conjunction with the sparse-to-dense formulation in an end-to-end manner so as to (a) remove the dependence on a cost volume as in the MVS technique, thus significantly reducing computational costs, (b) complement camera pose estimation using sparse VIO or SLAM by reusing detected interest points and descriptors, (c) utilize geometry-based MVS concepts to guide the algorithm and improve the interpretability, and (d) benefit from the accuracy and efficiency of sparse-to-dense techniques. The network used in the method is a multitask
model (e.g., see [Ref. 22]), comprised of an encoder-decoder structure composed of two encoders, one for the RGB image and one for the sparse depth image, and three decoders: one for interest point detection, one for descriptors and one for the dense depth prediction. The method also utilizes a differentiable module that efficiently triangulates points using geometric priors and forms the critical link between the interest point decoder, descriptor decoder, and the sparse depth encoder, enabling end-to-end training.
[0054] One embodiment of a method 110, as well as a system 110, for depth estimation of a scene can be broadly sub-divided into three steps, as illustrated in the schematic diagram of Fig. 2. In the first step 112, the target or anchor image 114 and the multi-view images 116 are passed through a shared RGB encoder and descriptor decoder 118 (including an RGB image encoder 119, a detector decoder 121, and a descriptor decoder 123) to output a descriptor field 120 for each image 114, 116. Interest points 122 are also detected for the target or anchor image 114. In the second step 124, the interest points 122 in the anchor image 114 in conjunction with the relative poses 126 are used to determine the search space in the reference images 116 from alternate view-points. Descriptors 132 are sampled in the search space using an epipolar sampler 127 and point sampler 129, respectively, to output sampled descriptors 128, which are matched by a soft matcher 130 with the descriptors for the interest points 122. Then, the matched keypoints 134 are triangulated using SVD by a triangulation module 136 to output 3D points 138. The output 3D points 138 are used by a sparse depth encoder 140 to create a sparse depth image. In the third and final step 142, the output feature maps from the sparse depth encoder 140 and intermediate feature maps from the RGB encoder 119 are collectively used to inform the depth decoder 144 and output a dense depth image 146. Each of the three steps is described in greater detail below.
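As a rough illustration of how these three steps could be wired together, consider the following sketch; the module names and interfaces are placeholders for the components described above, not the actual implementation:

```python
import torch

def estimate_depth(anchor, references, rel_poses, nets):
    """High-level wiring of the three steps (module names are placeholders).

    anchor     : (1, 3, H, W) RGB anchor image tensor.
    references : list of (1, 3, H, W) RGB reference image tensors.
    rel_poses  : relative poses of the reference views w.r.t. the anchor.
    nets       : dict of sub-networks (rgb_encoder, detector, descriptor,
                 matcher, triangulator, sparse_encoder, depth_decoder).
    """
    # Step 1: shared RGB encoder + detector/descriptor decoders on every view.
    feats_a = nets["rgb_encoder"](anchor)
    keypoints = nets["detector"](feats_a)                 # interest points in anchor
    desc_a = nets["descriptor"](feats_a)                  # anchor descriptor field
    desc_refs = [nets["descriptor"](nets["rgb_encoder"](r)) for r in references]

    # Step 2: sample descriptors along depth-bounded epipolar lines, soft-match,
    # then triangulate the matches with a differentiable (SVD-based) module.
    matches, confidences = nets["matcher"](keypoints, desc_a, desc_refs, rel_poses)
    points_3d = nets["triangulator"](matches, confidences, rel_poses)

    # Step 3: impute a sparse depth image from the 3D points and densify it,
    # reusing intermediate feature maps from the RGB encoder via skip connections.
    sparse_feats = nets["sparse_encoder"](points_3d, anchor.shape[-2:])
    return nets["depth_decoder"](sparse_feats, feats_a)   # dense depth map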
[0055] As described above, the shared RGB encoder and descriptor decoder 118 is composed of two encoders, the RGB image encoder 119 and the sparse depth image encoder 140, and three decoders, the detector decoder 121 (also referred to as the interest point detector decoder 121), the descriptor decoder 123, and the dense depth decoder 144 (also referred to as dense depth predictor decoder 144). In one embodiment, the shared RGB encoder and descriptor decoder 118 may comprise a SuperPoint-like (see [Ref. 9]) formulation of a fully-convolutional neural network architecture which operates on a full-resolution image and produces interest point detection accompanied by fixed length descriptors. The model has a single, shared encoder to process and reduce the input image dimensionality. The feature maps from the RGB encoder 119 feed into two task-specific decoder “heads”, which learn weights for interest point detection and interest point description. This joint formulation of interest point detection and description in SuperPoint enables sharing compute for the detection and description tasks, as well as the downstream task of depth estimation.
However, SuperPoint was trained on grayscale images with focus on interest point detection and description for continuous pose estimation on high frame rate video streams, and hence has a relatively shallow encoder. On the contrary, the present method is interested in image sequences with sufficient baseline, and consequently longer intervals between subsequent frames. Furthermore, SuperPoint’s shallow backbone, suitable for sparse point analysis, has limited capacity for the downstream task of dense depth estimation. Hence, the shallow backbone is replaced with a ResNet-50 (see [Ref. 16]) encoder which balances efficiency and performance. The output resolution of the interest point detector decoder 121 is identical to that of SuperPoint. In order to fuse fine and coarse level image information critical for point matching, the method 110 may utilize a U-Net (see [Ref. 36]) like architecture for the descriptor decoder 123. The descriptor decoder 123 outputs an N-dimensional descriptor tensor 120 at 1/8th the image resolution, similar to SuperPoint. This architecture is illustrated
in Fig. 3. The interest point detector network is trained by distilling the output of the original SuperPoint network and the descriptors are trained by the matching formulation described below.
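A minimal sketch of such a shared encoder with detector and descriptor heads is shown below, assuming torchvision's ResNet-50, a SuperPoint-style 65-channel detector head, and a 256-dimensional descriptor field at 1/8 resolution; the exact channel widths and layer choices are assumptions, not the architecture of Fig. 3:

```python
import torch
import torch.nn as nn
import torchvision

class SharedEncoderDetectorDescriptor(nn.Module):
    """Sketch of a shared RGB encoder with detector and descriptor heads."""

    def __init__(self, desc_dim=256):
        super().__init__()
        resnet = torchvision.models.resnet50()
        self.stem = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu,
                                  resnet.maxpool)          # 1/4 resolution
        self.layer1 = resnet.layer1                        # 1/4 res, 256 channels
        self.layer2 = resnet.layer2                        # 1/8 res, 512 channels
        self.layer3 = resnet.layer3                        # 1/16 res, 1024 channels

        # Detector head: 64 "cell" classes + 1 dustbin, as in SuperPoint.
        self.detector = nn.Conv2d(512, 65, kernel_size=1)

        # U-Net-like descriptor decoder fusing coarse (1/16) and fine (1/8) features.
        self.up = nn.Sequential(nn.Conv2d(1024, 512, 3, padding=1), nn.ReLU(),
                                nn.Upsample(scale_factor=2, mode="bilinear",
                                            align_corners=False))
        self.desc_head = nn.Conv2d(512 + 512, desc_dim, kernel_size=1)

    def forward(self, image):
        f4 = self.layer1(self.stem(image))
        f8 = self.layer2(f4)
        f16 = self.layer3(f8)
        logits = self.detector(f8)                         # interest point logits
        desc = self.desc_head(torch.cat([f8, self.up(f16)], dim=1))
        desc = nn.functional.normalize(desc, dim=1)        # descriptor field, 1/8 res
        return logits, desc, (f4, f8, f16)                 # features reused downstream
```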
[0056] The previous step provides interest points for the anchor image and descriptors for all images, i.e., the anchor image and full set of reference images. The next step 124 of the method 110 includes point matching and triangulation. A naive approach would be to match descriptors of the interest points 122 sampled from the descriptor field 120 of the anchor image 114 to all possible positions in each reference image 116. However, this is computationally prohibitive. Hence, the method 110 invokes geometrical constraints to restrict the search space and improve efficiency. Using concepts from multi-view geometry, the method 110 only searches along the epipolar line in the reference images (see Fig. 4). The epipolar line is determined using the fundamental matrix F, via the relation x′ᵀFx = 0, where x and x′ are corresponding points in the two images. The matched point is guaranteed to lie on the epipolar line in an ideal scenario. However, practical limitations in obtaining perfect pose lead us to search along the epipolar line with a small fixed offset on either side. Furthermore, the epipolar line stretches for depth values from −∞ to +∞. The search space is constrained to lie within a feasible depth sensing range along the epipolar line, and the sampling rate is varied within this restricted range in order to obtain descriptor fields with the same output shape for implementation purposes, as illustrated in Fig. 4. Bilinear sampling is used to obtain the descriptors at the desired points in the descriptor field 120. The descriptor of each interest point 122 is convolved with the descriptor field 120 along its corresponding epipolar line for each image view-point, as illustrated in Equation (1) of Fig. 12, and also reproduced below:
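A plausible form of this cross-correlation, with $d_j$ the descriptor of interest point $j$, $D_k$ the descriptor field of reference view $k$, and $l_{j,k}$ the corresponding depth-bounded epipolar line (notation assumed), is:

```latex
C_{j,k}(x) \;=\; d_j^{\top}\, D_k(x), \qquad x \in l_{j,k}
```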
[0057] To obtain the 3D points, the algebraic triangulation approach proposed in [Ref. 21] is followed. Each interest point j is processed independently of the others. The approach is built upon triangulating the 2D interest points along with the 2D positions obtained from the peak value in each cross-correlation map. To estimate the 2D positions, the softmax across the spatial axes is first computed, as illustrated in Equation (2) of Fig. 13, and also reproduced below:
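A standard spatial softmax consistent with this description (notation assumed) is:

```latex
C'_{j,k}(x) \;=\; \frac{\exp\!\big(C_{j,k}(x)\big)}{\sum_{x' \in l_{j,k}} \exp\!\big(C_{j,k}(x')\big)}
```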
[0058] Then, using Equation (3) of Fig. 14 (also reproduced below), the 2D positions of the joints are calculated as the center of mass of the corresponding cross-correlation maps, also termed a soft-argmax operation:
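The corresponding soft-argmax, written as the expectation of the pixel location under the normalized cross-correlation map (notation assumed), is:

```latex
x_{j,k} \;=\; \sum_{x \in l_{j,k}} x \cdot C'_{j,k}(x)
```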
[0059] An important feature of the soft-argmax is that rather than getting the index of the maximum, it allows the gradients to flow back to the cross-correlation maps Cj,k from the output 2D position of the matched points xj,k. In other words, unlike argmax, the soft-argmax operator is differentiable. To infer the 3D positions of the joints from their 2D estimates xj,k, a linear algebraic triangulation approach is used. This method reduces the finding of the 3D coordinates of a point zj to solving an over-determined system of equations on the homogeneous 3D coordinate vector of the point, as illustrated in Equation (4) of Fig. 15, and also
reproduced below:
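Following the standard direct linear transform used in [Ref. 21], with $P_k$ the full projection matrix of view $k$ and $p_{k,i}^{\top}$ its $i$-th row, a plausible form of this system (notation assumed) is:

```latex
A_j \tilde{z}_j = 0,
\qquad
A_j =
\begin{bmatrix}
\vdots \\
x_{j,k}\, p_{k,3}^{\top} - p_{k,1}^{\top} \\
y_{j,k}\, p_{k,3}^{\top} - p_{k,2}^{\top} \\
\vdots
\end{bmatrix}
```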
[0060] A naive triangulation algorithm assumes that the point coordinates from each view are independent of each other and thus all make comparable contributions to the triangulation. However, in some views the 2D point locations cannot be estimated reliably (e.g., due to occlusions, artifacts, etc.), leading to unnecessary degradation of the final triangulation result. This greatly exacerbates the tendency of methods that optimize algebraic reprojection error to pay uneven attention to different views. The problem can be solved by applying Random Sample Consensus (RANSAC) together with the Huber loss (used to score reprojection errors corresponding to inliers). However, this has its own drawbacks. For example, using RANSAC may completely cut off the gradient flow to the excluded views. To address the aforementioned problems, weights wk are added to the coefficients of the matrix corresponding to different views, as illustrated in Equation (5) of Fig. 16. The weights wk are set to be the maximum value in each cross-correlation map. This allows less reliable camera views to contribute less while triangulating the interest point. Note that the confidence values of the interest points are set to 1. Equation (5) of Fig. 16, reproduced below, is solved via differentiable Singular Value Decomposition (SVD) of the matrix B = UDVᵀ, from which z is set as the last column of V.
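With $w_{j,k}$ the per-view weight for interest point $j$, a plausible form of the weighted system (notation assumed) is:

```latex
\big(w_j \circ A_j\big)\, \tilde{z}_j = 0,
\qquad
w_{j,k} = \max_{x} C_{j,k}(x)
```

Here the weight $w_{j,k}$ scales the pair of rows of $A_j$ contributed by view $k$.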
[0061] The final non-homogeneous value of z is obtained by dividing the homogeneous 3D coordinate vector z̃ by its fourth coordinate.
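A minimal differentiable sketch of this weighted triangulation for a single point, assuming standard 3x4 projection matrices (the function name and interface are assumptions, not the actual triangulation module 136), could look like:

```python
import torch

def triangulate_point(proj_mats, points_2d, weights):
    """Weighted linear triangulation of one point, solved with differentiable SVD.

    proj_mats : (K, 3, 4) projection matrices of the K views.
    points_2d : (K, 2) soft-argmax 2D positions of the matched point per view.
    weights   : (K,) per-view confidences (e.g. peak cross-correlation values).
    Returns the (3,) non-homogeneous 3D point.
    """
    rows = []
    for P, (x, y), w in zip(proj_mats, points_2d, weights):
        rows.append(w * (x * P[2] - P[0]))   # equation from the x coordinate
        rows.append(w * (y * P[2] - P[1]))   # equation from the y coordinate
    B = torch.stack(rows)                    # (2K, 4) weighted system matrix

    # Differentiable SVD; the solution is the right singular vector associated
    # with the smallest singular value, i.e. the last column of V.
    _, _, Vh = torch.linalg.svd(B)
    z_h = Vh[-1]                             # homogeneous 3D coordinates
    return z_h[:3] / z_h[3]                  # divide by the fourth coordinate
```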
[0062] Next, step 142 of the method 110, the densification of sparse depth points, will be described. A key-point detector network provides the position of the points. The z coordinate of the triangulated points provides the depth. A sparse depth image of the same resolution as the input image is imputed with the depth of these sparse points. Note that the gradients can propagate from the sparse depth image back to the 3D key-points, all the way to the input image. This is akin to switch unpooling in SegNet (see [Ref. 1]). The sparse depth image is passed through an encoder network which is a narrower version of the image encoder network 119. More specifically, a ResNet-50 encoder is used with the channel width after each layer set to one fourth of that of the image encoder. These features are concatenated with the features obtained from the image encoder 119. A U-Net style decoder is used, with intermediate feature maps from both the image and sparse depth encoders concatenated with the intermediate feature maps of the same resolution in the decoder, similar to [Ref. 6]. Deep supervision over 4 scales is provided. (See [Ref. 25]). A spatial pyramid pooling block is also included to encourage feature mixing at different receptive field sizes. (See [Refs. 15, 4]). The details of this architecture are shown in Fig. 5.
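A minimal sketch of the sparse depth imputation described above (the function name and interface are assumptions) is shown below; because the depths are scattered into the image by differentiable indexing, gradients can flow from the sparse depth image back to the triangulated 3D points:

```python
import torch

def impute_sparse_depth(keypoints, points_3d, image_hw):
    """Create a sparse depth image from triangulated keypoints (sketch).

    keypoints : (N, 2) pixel coordinates (x, y) of the interest points in the
                anchor image.
    points_3d : (N, 3) triangulated 3D points in the anchor camera frame; the
                z coordinate provides the depth at each keypoint.
    image_hw  : (H, W) resolution of the input image.
    """
    H, W = image_hw
    sparse = torch.zeros(1, 1, H, W, dtype=points_3d.dtype, device=points_3d.device)
    x = keypoints[:, 0].long().clamp(0, W - 1)
    y = keypoints[:, 1].long().clamp(0, H - 1)
    sparse[0, 0, y, x] = points_3d[:, 2]     # scatter depths at keypoint pixels
    return sparse                            # input to the narrower sparse depth encoder
```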
[0063] The overall training objective will now be described. The entire network is trained with a combination of (a) a cross entropy loss between the output tensor of the interest point detector decoder and ground truth interest point locations obtained from SuperPoint, (b) a smooth-L1 loss between the 2D points output after soft-argmax and ground truth 2D point matches, (c) a smooth-L1 loss between the 3D points output after SVD triangulation and ground truth 3D points, (d) an edge aware smoothness loss on the output dense depth map, and (e) a smooth-L1 loss over multiple scales between the predicted dense depth map output and the ground truth 3D depth map. The overall training objective is:
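Denoting the five terms (a)-(e) above by $\mathcal{L}_{\text{det}}$, $\mathcal{L}_{\text{2D}}$, $\mathcal{L}_{\text{3D}}$, $\mathcal{L}_{\text{smooth}}$ and $\mathcal{L}_{\text{depth}}$, with relative weights $\lambda_1,\dots,\lambda_5$ (notation assumed), the objective can be written as:

```latex
\mathcal{L} \;=\; \lambda_1 \mathcal{L}_{\text{det}}
          + \lambda_2 \mathcal{L}_{\text{2D}}
          + \lambda_3 \mathcal{L}_{\text{3D}}
          + \lambda_4 \mathcal{L}_{\text{smooth}}
          + \lambda_5 \mathcal{L}_{\text{depth}}
```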
[0064] Examples:
[0065] Implementation Details:
[0066] Training: Most MVS approaches are trained on the DEMON dataset. However, the DEMON dataset mostly contains pairs of images with the associated depth and pose information. Relative confidence estimation is crucial to accurate triangulation in our algorithm, and needs sequences of length three or greater in order to estimate the confidence accurately and holistically triangulate an interest point. Hence, we diverge from traditional datasets for MVS depth estimation, and instead use ScanNet (see [Ref. 8]). ScanNet is an RGB-D video dataset containing 2.5 million views in more than 1500 scans, annotated with 3D camera poses, surface reconstructions, and instance-level semantic segmentations. Three views from a scan at a fixed interval of 20 frames along with the pose and depth information form a training data point in our method. The target frame is passed through SuperPoint in order to detect interest points, which are then distilled using the loss L1 while training our network. We use the depth images to determine ground truth 2D matches, and unproject the depth to determine the ground truth 3D points. We train our model for 100K iterations using the PyTorch framework with a batch-size of 24 and the ADAM optimizer with learning rate 0.0001 (β1 = 0.9, β2 = 0.999). We fix the resolution of the image to be qVGA (240x320) and the number of interest points to be 512 in each image, with at most half the interest points chosen from the interest point detector thresholded at 5e-4, and the rest of the points chosen randomly from the image. Choosing random points ensures uniform distribution of sparse points in the image and helps the densification process. We set the length of the sampled
descriptors along the epipolar line to be 100, albeit we found that the matching is robust even for lengths as small as 25. We empirically set the weights to be [0.1, 1.0, 2.0, 1.0, 2.0].
[0067] Evaluation: The ScanNet test set consists of 100 scans of unique scenes different from the 707 scenes in the training dataset. We first evaluate the performance of our detector and descriptor decoder for the purpose of pose estimation on ScanNet. We use the evaluation protocol and metrics proposed in SuperPoint, namely the mean localization error (MLE), the matching score (MScore), repeatability (Rep) and the fraction of correct poses estimated using descriptor matches and the PnP algorithm at a 5° (5 degree) threshold for rotation and 5 cm for translation. We compare against SuperPoint, SIFT, ORB and SURF at an NMS threshold of 3 pixels for Rep, MLE, and MScore as suggested in the SuperPoint paper. Next, we use standard metrics to quantitatively measure the quality of our estimated depth: absolute relative error (Abs Rel), absolute difference error (Abs Diff), square relative error (Sq Rel), root mean square error and its log scale (RMSE and RMSE log) and inlier ratios (δ < 1.25^i where i ∈ {1, 2, 3}).
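For reference, a sketch of these standard depth metrics computed over valid ground-truth pixels (a generic implementation, not the evaluation code used here) is:

```python
import torch

def depth_metrics(pred, gt, eps=1e-6):
    """Standard depth-estimation metrics over valid ground-truth pixels (sketch)."""
    valid = gt > eps
    pred, gt = pred[valid], gt[valid]
    abs_rel = torch.mean(torch.abs(pred - gt) / gt)
    abs_diff = torch.mean(torch.abs(pred - gt))
    sq_rel = torch.mean((pred - gt) ** 2 / gt)
    rmse = torch.sqrt(torch.mean((pred - gt) ** 2))
    rmse_log = torch.sqrt(torch.mean((torch.log(pred + eps) - torch.log(gt)) ** 2))
    ratio = torch.max(pred / gt, gt / pred)
    inliers = [torch.mean((ratio < 1.25 ** i).float()) for i in (1, 2, 3)]
    return {"abs_rel": abs_rel, "abs_diff": abs_diff, "sq_rel": sq_rel,
            "rmse": rmse, "rmse_log": rmse_log,
            "d1": inliers[0], "d2": inliers[1], "d3": inliers[2]}
```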
[0068] We compare our method to recent deep learning approaches for MVS: (a) DPSNet: deep plane sweep approach, (b) MVDepthNet: multi-view depth net, and (c) GP-MVSNet: temporal non-parametric fusion approach using Gaussian processes. Note that these methods perform much better than traditional geometry based stereo algorithms. Our primary results are on sequences of length 3, but we also report numbers on sequences of length 2, 4, 5 and 7 in order to understand the performance as a function of scene length. We evaluate the methods on the Sun3D dataset, in order to understand the generalization of our approach to other indoor scenes. We also discuss the multiply-accumulate operations (MACs) for the different methods to understand the operating efficiency at run-time.
[0069] Descriptor Quality:
[0070] Table 1 in Fig. 8 shows the results of our detector and descriptor evaluation. Note that MLE and repeatability are detector metrics, MScore is a descriptor metric, and rotation@5 and translation@5 are combined metrics. We set the threshold for our detector at 0.0005, the same as that used during training. This results in a large number of interest points being detected (Num), which artificially inflates the repeatability score (Rep) in our favor, but has poor localization performance as indicated by the MLE metric. However, our MScore is comparable to SuperPoint although we trained our network to only match along the epipolar line, and not for the full image. Furthermore, we have the best rotation@5 and translation@5 metrics, indicating that the matches found using our descriptors help accurately determine rotation and translation, i.e., pose. These results indicate that our training procedure can complement the homographic adaptation technique of SuperPoint and boost the overall performance.
[0071] Depth Results:
[0072] We set the same hyper-parameters for evaluating our network for all scenarios and across all datasets, i.e., we fix the number of points detected to be 512, the length of the sampled descriptors to be 100, and the detector threshold to be 5e-4. In order to ensure uniform distribution of the interest points and avoid clusters, we set a high NMS value of 9, as suggested in [Ref. 9]. The supplement has an ablation study over different choices of hyper-parameters. Table 2 of Fig. 9 shows the performance of depth estimation on sequences of length 3 and gap 20 as used in the training set. For fair comparison, we evaluate two versions of the competing approaches: (1) the provided open source trained model, (2) the trained model fine-tuned on ScanNet for 100K iterations with the default training parameters as suggested in the manuscript or made available by the authors. We use a gap of 20 frames to train each network, similar to ours. The fine-tuned models are indicated by the suffix '-FT' in Table 2 of Fig. 9. Unsurprisingly, the fine-tuned models fare much better than the original
models on ScanNet evaluation. MVDepthNet has the least improvement after fine-tuning, which can be attributed to the heavy geometric and photometric augmentation used during training, hence making it generalize well. DPSNet benefits maximally from fine-tuning with over 25% drop in absolute error. However, our network according to the presently disclosed methods outperforms all methods across all metrics. Fig. 6 shows a qualitative comparison between the different methods and Fig. 7 shows sample 3D reconstructions of the scene from the estimated depth maps.
[0073] An important feature of any multi-view stereo method is the ability to improve with more views. Table 3 of Fig. 10 shows the performance for different numbers of images. We set the frame gap to be 20, 15, 12 and 10 for 2, 4, 5 and 7 frames respectively. These gaps ensure that each set spans approximately similar volumes in 3D space, and that any performance improvement emerges from the network better using the available information as opposed to acquiring new information. We again see that the method disclosed herein outperforms all other methods on all three metrics for different sequence lengths. Closer inspection of the values indicates that DPSNet and GP-MVSNet do not benefit from additional views, whereas MVDepthNet benefits from a small number of additional views but stagnates for more than 4 frames. On the contrary, the presently disclosed method shows steady improvement in all three metrics with additional views. This can be attributed to our point matcher and triangulation module, which naturally benefits from additional views.
[0074] As a final experiment, we test our network on the Sun3D test dataset consisting of 80 pairs of images. Sun3D also captures indoor environments, albeit at a much smaller scale compared to ScanNet. Table 4 of Fig. 11 shows the performance from the two versions of DPSNet and MVDepthNet discussed previously, as compared to our network according to the disclosed embodiments. Note that DPSNet and MVDepthNet were originally trained on the Sun3D training database. The fine-tuned version of DPSNet performs better than the original
network on the Sun3D test set owing to the greater diversity in the ScanNet training database. MVDepthNet on the contrary performs worse, indicating that it overfit to ScanNet and that the original network was sufficiently trained and generalized well. Remarkably, our method according to the embodiments disclosed herein again outperforms both methods although our trained network has never seen any image from the Sun3D database. This indicates that our principled way of determining sparse depth, and then densifying, has good generalizability.

[0075] Next, we evaluate the total number of multiply-accumulate operations (MACs) needed for our approach according to the disclosed embodiments. For a 2 image sequence, we perform 16.57 Giga MACs (GMacs) for the point detector and descriptor module, less than 0.002 GMacs for the matcher and triangulation module, and 67.90 GMacs for the sparse-to-dense module. A large fraction of this is due to the U-Net style feature tensors connecting the image and sparse depth encoders to the decoder. We perform a total of 84.48 GMacs to estimate the depth for a 2 image sequence. This is considerably lower than DPSNet, which performs 295.63 GMacs for a 2 image sequence, and also less than the real-time MVDepthNet, which performs 134.8 GMacs for a pair of images to estimate depth. It takes 90 milliseconds to estimate depth on an NVidia Titan RTX GPU, which we evaluated to be 2.5 times faster than DPSNet. We believe our presently disclosed method can be further sped up by replacing PyTorch's native SVD with a custom implementation for the triangulation. Furthermore, as we do not depend on a cost volume, compound scaling laws such as those derived for image recognition and object detection can be straightforwardly extended to make our method more efficient.
[0076] The presently disclosed methods for depth estimation provide an efficient depth estimation algorithm by learning to triangulate and densify sparse points in a multi-view stereo scenario. On all of the existing benchmarks, the methods disclosed herein have exceeded the state-of-the-art results, and have demonstrated significantly better computational efficiency than competing methods. It is anticipated that these methods can be expanded on by incorporating more effective attention mechanisms for interest point matching, and improved selection of anchor and supporting views. The methods may also incorporate deeper integration with the SLAM problem, as depth estimation and SLAM are duals of each other.
[0077] Appendix 1: The references listed below correspond to the references in brackets ("[Ref. ##]") above; each of these references is incorporated by reference in its entirety herein.
[0078] Various example embodiments of the invention are described herein. Reference is made to these examples in a non-limiting sense. They are provided to illustrate more broadly applicable aspects of the invention. Various changes may be made to the invention described and equivalents may be substituted without departing from the true spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process act(s) or step(s) to the objective(s), spirit or scope of the present invention. Further, as will be appreciated by those with skill in the art, each of the individual variations described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the present inventions. All such modifications are intended to be within the scope of claims associated with this disclosure.
[0079] The invention includes methods that may be performed using the subject devices. The methods may comprise the act of providing such a suitable device. Such provision may be performed by the end user. In other words, the "providing" act merely requires the end user obtain, access, approach, position, set-up, activate, power-up or otherwise act to provide the requisite device in the subject method. Methods recited herein may be carried out in any order of the recited events which is logically possible, as well as in the recited order of events.
[0080] Example aspects of the invention, together with details regarding material selection and manufacture have been set forth above. As for other details of the present invention, these may be appreciated in connection with the above-referenced patents and publications as well as generally known or appreciated by those with skill in the art. The same may hold true with respect to method-based aspects of the invention in terms of additional acts as commonly or logically employed.
[0081] In addition, though the invention has been described in reference to several examples optionally incorporating various features, the invention is not to be limited to that which is described or indicated as contemplated with respect to each variation of the invention. Various changes may be made to the invention described and equivalents (whether recited herein or not included for the sake of some brevity) may be substituted without departing from the true spirit and scope of the invention. In addition, where a range of values is provided, it is understood that every intervening value, between the upper and lower limit of that range and any other stated or intervening value in that stated range, is encompassed within the invention.
[0082] Also, it is contemplated that any optional feature of the inventive variations described may be set forth and claimed independently, or in combination with any one or more of the features described herein. Reference to a singular item includes the possibility that there are
plural of the same items present. More specifically, as used herein and in claims associated hereto, the singular forms "a," "an," "said," and "the" include plural referents unless specifically stated otherwise. In other words, use of the articles allows for "at least one" of the subject item in the description above as well as claims associated with this disclosure. It is further noted that such claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as "solely," "only" and the like in connection with the recitation of claim elements, or use of a "negative" limitation.
[0083] Without the use of such exclusive terminology, the term "comprising" in claims associated with this disclosure shall allow for the inclusion of any additional element- irrespective of whether a given number of elements are enumerated in such claims, or the addition of a feature could be regarded as transforming the nature of an element set forth in such claims. Except as specifically defined herein, all technical and scientific terms used herein are to be given as broad a commonly understood meaning as possible while maintaining claim validity.
[0084] The breadth of the present invention is not to be limited to the examples provided and/or the subject specification, but rather only by the scope of claim language associated with this disclosure.
Claims
1. A method for estimating depth of features in a scene from multi-view images, the method comprising: obtaining multi-view images, including an anchor image of the scene and a set of reference images of the scene; passing the anchor image and reference images through a shared RGB encoder and descriptor decoder which (i) outputs a respective descriptor field of descriptors for the anchor image and each reference image, (ii) detects interest points in the anchor image in conjunction with relative poses to determine a search space in the reference images from alternate view-points, and (iii) outputs intermediate feature maps; sampling the respective descriptors in the search space of each reference image to determine descriptors in the search space and matching the identified descriptors with descriptors for the interest points in the anchor image, such matched descriptors referred to as matched keypoints; triangulating the matched keypoints using singular value decomposition (SVD) to output 3D points; passing the 3D points through a sparse depth encoder to create a sparse depth image from the 3D points and output feature maps; and a depth decoder generating a dense depth image based on the output feature maps for the sparse depth encoder and the intermediate feature maps from the RGB encoder.
2. The method of claim 1, wherein the shared RGB encoder and descriptor decoder comprises two encoders including an RGB image encoder and a sparse depth image encoder, and three decoders including an interest point detection decoder, a descriptor decoder, and a dense depth prediction decoder.
3. The method of claim 1, wherein the shared RGB encoder and descriptor decoder is a fully-convolutional neural network configured to operate on a full resolution of the anchor image and reference images.
4. The method of claim 1, further comprising: feeding the feature maps from the RGB encoder into a first task-specific decoder head to determine weights for the detecting of interest points in the anchor image and outputting interest point descriptions.
5. The method of claim 1, wherein the descriptor decoder comprises a U-Net like architecture to fuse fine and coarse level image information for matching the identified descriptors with descriptors for the interest points.
6. The method of claim 1, wherein the search space is constrained to a respective epipolar line in the reference images plus a fixed offset on either side of the epipolar line, and within a feasible depth sensing range along the epipolar line.
7. The method of claim 1, wherein bilinear sampling is used by the shared RGB encoder and descriptor decoder to output the respective descriptors at desired points in the descriptor field.
8. The method of claim 1, wherein the step of triangulating the matched keypoints comprises:
estimating respective two dimensional (2D) positions of the interest points by computing a softmax across spatial axes to output cross-correlation maps; performing a soft-argmax operation to calculate the 2D position of joints as a center of mass of corresponding cross-correlation maps; performing a linear algebraic triangulation from the 2D estimates; and using a singular value decomposition (SVD) to output 3D points.
9. A cross reality system, comprising: a head-mounted display device having a display system; a computing system in operable communication with the head-mounted display; a plurality of camera sensors in operable communication with the computing system; wherein the computing system is configured to estimate depths of features in a scene from a plurality of multi-view images captured by the camera sensors by a process comprising: obtaining multi-view images, including an anchor image of the scene and a set of reference images of a scene within a field of view of the camera sensors, from the camera sensors; passing the anchor image and reference images through a shared RGB encoder and descriptor decoder which (i) outputs a respective descriptor field of descriptors for the anchor image and each reference image, (ii) detects interest points in the anchor image in conjunction with relative poses to determine a search space in the reference images from alternate view-points, and (iii) outputs intermediate feature maps; sampling the respective descriptors in the search space of each reference image to determine descriptors in the search space and matching the identified descriptors with
descriptors for the interest points in the anchor image, such matched descriptors referred to as matched keypoints; triangulating the matched keypoints using singular value decomposition (SVD) to output 3D points; passing the 3D points through a sparse depth encoder to create a sparse depth image from the 3D points and output feature maps; and a depth decoder generating a dense depth image based on the output feature maps for the sparse depth encoder and the intermediate feature maps from the RGB encoder.
10. The cross reality system of claim 9, wherein the shared RGB encoder and descriptor decoder comprises two encoders including an RGB image encoder and a sparse depth image encoder, and three decoders including an interest point detection decoder, a descriptor decoder, and a dense depth prediction decoder.
11. The cross reality system of claim 9, wherein the shared RGB encoder and descriptor decoder is a fully-convolutional neural network configured to operate on a full resolution of the anchor image and reference images.
12. The cross reality system of claim 9, wherein the process for estimating depths of features in a scene from a plurality of multi-view images captured by the camera sensors further comprises: feeding the feature maps from the RGB encoder into a first task-specific decoder head to determine weights for the detecting of interest points in the anchor image and outputting interest point descriptions.
13. The cross reality system of claim 9, wherein the descriptor decoder comprises a U-Net like architecture to fuse fine and coarse level image information for matching the identified descriptors with descriptors for the interest points.
14. The cross reality system of claim 9, wherein the search space is constrained to a respective epipolar line in the reference images plus a fixed offset on either side of the epipolar line, and within a feasible depth sensing range along the epipolar line.
15. The cross reality system of claim 9, wherein bilinear sampling is used by the shared RGB encoder and descriptor decoder to output the respective descriptors at desired points in the descriptor field.
16. The cross reality system of claim 9, wherein the step of triangulating the matched keypoints comprises: estimating respective two dimensional (2D) positions of the interest points by computing a softmax across spatial axes to output cross-correlation maps; performing a soft-argmax operation to calculate the 2D position of joints as a center of mass of corresponding cross-correlation maps; performing a linear algebraic triangulation from the 2D estimates; and using a singular value decomposition (SVD) to output 3D points.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180017832.2A CN115210532A (en) | 2020-03-05 | 2021-03-05 | System and method for depth estimation by learning triangulation and densification of sparse points for multi-view stereo |
JP2022552548A JP2023515669A (en) | 2020-03-05 | 2021-03-05 | Systems and Methods for Depth Estimation by Learning Sparse Point Triangulation and Densification for Multiview Stereo |
EP21764896.3A EP4115145A4 (en) | 2020-03-05 | 2021-03-05 | Systems and methods for depth estimation by learning triangulation and densification of sparse points for multi-view stereo |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202062985773P | 2020-03-05 | 2020-03-05 | |
US62/985,773 | 2020-03-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021178919A1 true WO2021178919A1 (en) | 2021-09-10 |
Family
ID=77554899
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2021/021239 WO2021178919A1 (en) | 2020-03-05 | 2021-03-05 | Systems and methods for depth estimation by learning triangulation and densification of sparse points for multi-view stereo |
Country Status (5)
Country | Link |
---|---|
US (1) | US11948320B2 (en) |
EP (1) | EP4115145A4 (en) |
JP (1) | JP2023515669A (en) |
CN (1) | CN115210532A (en) |
WO (1) | WO2021178919A1 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200137380A1 (en) * | 2018-10-31 | 2020-04-30 | Intel Corporation | Multi-plane display image synthesis mechanism |
US11481871B2 (en) * | 2021-03-12 | 2022-10-25 | Samsung Electronics Co., Ltd. | Image-guided depth propagation for space-warping images |
US20230111306A1 (en) * | 2021-10-13 | 2023-04-13 | GE Precision Healthcare LLC | Self-supervised representation learning paradigm for medical images |
US12086965B2 (en) * | 2021-11-05 | 2024-09-10 | Adobe Inc. | Image reprojection and multi-image inpainting based on geometric depth parameters |
CN114820745B (en) * | 2021-12-13 | 2024-09-13 | 南瑞集团有限公司 | Monocular vision depth estimation system, monocular vision depth estimation method, computer device, and computer-readable storage medium |
CN114332510B (en) * | 2022-01-04 | 2024-03-22 | 安徽大学 | Hierarchical image matching method |
CN114742794B (en) * | 2022-04-02 | 2024-08-27 | 北京信息科技大学 | Temporary road detection method and system based on triangulation |
CN114913287B (en) * | 2022-04-07 | 2023-08-22 | 北京拙河科技有限公司 | Three-dimensional human body model reconstruction method and system |
WO2023225235A1 (en) * | 2022-05-19 | 2023-11-23 | Innopeak Technology, Inc. | Method for predicting depth map via multi-view stereo system, electronic apparatus and storage medium |
CN116071504B (en) * | 2023-03-06 | 2023-06-09 | 安徽大学 | Multi-view three-dimensional reconstruction method for high-resolution image |
CN116934829B (en) * | 2023-09-15 | 2023-12-12 | 天津云圣智能科技有限责任公司 | Unmanned aerial vehicle target depth estimation method and device, storage medium and electronic equipment |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7152051B1 (en) | 2002-09-30 | 2006-12-19 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US9483843B2 (en) * | 2014-01-13 | 2016-11-01 | Transgaming Inc. | Method and system for expediting bilinear filtering |
US9704232B2 (en) * | 2014-03-18 | 2017-07-11 | Arizona Board of Regents of behalf of Arizona State University | Stereo vision measurement system and method |
US9418319B2 (en) | 2014-11-21 | 2016-08-16 | Adobe Systems Incorporated | Object detection using cascaded convolutional neural networks |
US10783394B2 (en) | 2017-06-20 | 2020-09-22 | Nvidia Corporation | Equivariant landmark transformation for landmark localization |
US11676296B2 (en) * | 2017-08-11 | 2023-06-13 | Sri International | Augmenting reality using semantic segmentation |
CA3078530A1 (en) | 2017-10-26 | 2019-05-02 | Magic Leap, Inc. | Gradient normalization systems and methods for adaptive loss balancing in deep multitask networks |
US10929654B2 (en) * | 2018-03-12 | 2021-02-23 | Nvidia Corporation | Three-dimensional (3D) pose estimation from a monocular camera |
US10304193B1 (en) * | 2018-08-17 | 2019-05-28 | 12 Sigma Technologies | Image segmentation and object detection using fully convolutional neural network |
US10682108B1 (en) * | 2019-07-16 | 2020-06-16 | The University Of North Carolina At Chapel Hill | Methods, systems, and computer readable media for three-dimensional (3D) reconstruction of colonoscopic surfaces for determining missing regions |
US11727587B2 (en) * | 2019-11-12 | 2023-08-15 | Geomagical Labs, Inc. | Method and system for scene image modification |
US11763433B2 (en) * | 2019-11-14 | 2023-09-19 | Samsung Electronics Co., Ltd. | Depth image generation method and device |
US11900626B2 (en) * | 2020-01-31 | 2024-02-13 | Toyota Research Institute, Inc. | Self-supervised 3D keypoint learning for ego-motion estimation |
2021
- 2021-03-05 CN CN202180017832.2A patent/CN115210532A/en active Pending
- 2021-03-05 EP EP21764896.3A patent/EP4115145A4/en active Pending
- 2021-03-05 US US17/194,117 patent/US11948320B2/en active Active
- 2021-03-05 WO PCT/US2021/021239 patent/WO2021178919A1/en unknown
- 2021-03-05 JP JP2022552548A patent/JP2023515669A/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120163672A1 (en) * | 2010-12-22 | 2012-06-28 | David Mckinnon | Depth Estimate Determination, Systems and Methods |
US20140192154A1 (en) * | 2011-08-09 | 2014-07-10 | Samsung Electronics Co., Ltd. | Method and device for encoding a depth map of multi viewpoint video data, and method and device for decoding the encoded depth map |
US20140049612A1 (en) * | 2011-10-11 | 2014-02-20 | Panasonic Corporation | Image processing device, imaging device, and image processing method |
US20150237329A1 (en) * | 2013-03-15 | 2015-08-20 | Pelican Imaging Corporation | Systems and Methods for Estimating Depth Using Ad Hoc Stereo Array Cameras |
US20150262412A1 (en) * | 2014-03-17 | 2015-09-17 | Qualcomm Incorporated | Augmented reality lighting with dynamic geometry |
US20190108683A1 (en) * | 2016-04-01 | 2019-04-11 | Pcms Holdings, Inc. | Apparatus and method for supporting interactive augmented reality functionalities |
US20180176545A1 (en) * | 2016-11-25 | 2018-06-21 | Nokia Technologies Oy | Virtual reality display |
Non-Patent Citations (1)
Title |
---|
See also references of EP4115145A4 * |
Also Published As
Publication number | Publication date |
---|---|
US11948320B2 (en) | 2024-04-02 |
CN115210532A (en) | 2022-10-18 |
JP2023515669A (en) | 2023-04-13 |
EP4115145A1 (en) | 2023-01-11 |
US20210279904A1 (en) | 2021-09-09 |
EP4115145A4 (en) | 2023-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11948320B2 (en) | Systems and methods for depth estimation by learning triangulation and densification of sparse points for multi-view stereo | |
US12100093B2 (en) | Systems and methods for end to end scene reconstruction from multiview images | |
US11838518B2 (en) | Reprojecting holographic video to enhance streaming bandwidth/quality | |
JP7569439B2 (en) | Scalable 3D Object Recognition in Cross-Reality Systems | |
Zioulis et al. | Omnidepth: Dense depth estimation for indoors spherical panoramas | |
US10546429B2 (en) | Augmented reality mirror system | |
US11328481B2 (en) | Multi-resolution voxel meshing | |
US20210011288A1 (en) | Systems and methods for distributing a neural network across multiple computing devices | |
KR20150080003A (en) | Using motion parallax to create 3d perception from 2d images | |
US11989900B2 (en) | Object recognition neural network for amodal center prediction | |
US20230290132A1 (en) | Object recognition neural network training using multiple data sources | |
Jin et al. | From capture to display: A survey on volumetric video | |
WO2019213392A1 (en) | System and method for generating combined embedded multi-view interactive digital media representations | |
Baričević et al. | User-perspective AR magic lens from gradient-based IBR and semi-dense stereo | |
Tenze et al. | altiro3d: scene representation from single image and novel view synthesis | |
Sinha et al. | Depth estimation by learning triangulation and densification of sparse points for multi-view stereo | |
CN116643648B (en) | Three-dimensional scene matching interaction method, device, equipment and storage medium | |
Hall et al. | Networked and multimodal 3d modeling of cities for collaborative virtual environments | |
Liu et al. | Template-Based 3D Reconstruction of Non-rigid Deformable Object from Monocular Video | |
WO2021231261A1 (en) | Computationally efficient method for computing a composite representation of a 3d environment | |
CN116912533A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN117392228A (en) | Visual mileage calculation method and device, electronic equipment and storage medium |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21764896; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2022552548; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 2021764896; Country of ref document: EP; Effective date: 20221005 |