US20200118281A1 - Three dimensional model generation using heterogeneous 2d and 3d sensor fusion - Google Patents
- Publication number
- US20200118281A1 (application Ser. No. 16/157,012)
- Authority
- US
- United States
- Prior art keywords
- point cloud
- image
- points
- model
- upsampled
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G06K9/00208—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/0068—Geometric image transformation in the plane of the image for image registration, e.g. elastic snapping
-
- G06T3/14—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4007—Interpolation-based scaling, e.g. bilinear interpolation
-
- G06T5/77—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/653—Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/56—Particle system, point based geometry or rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2004—Aligning objects, relative positioning of parts
Definitions
- the present disclosure relates to three dimensional (3D) model generation for performance of an operation with respect to an object and more particularly to three dimensional model generation using heterogeneous two dimensional (2D) and three dimensional (3D) sensor fusion.
- a three-dimensional (3D) model may be developed by acquiring a 3D image of an object using a 3D scanning technology such as, for example, light detection and ranging (LiDAR).
- the 3D image generated provides a point cloud that is a collection of points in a 3D coordinate system. Each point in the point cloud represents XYZ coordinates within the 3D coordinate system.
- the points within the point cloud represent the location of points on an exterior surface of the object and define a 3D model of the object.
- the 3D model may be used in performing an operation on the object, for instance, by a robot or other device.
- the 3D model represented by the point cloud often suffers from insufficient features on objects or feature detection discrepancy among images because of illumination changes.
- Such conditions may be particularly present with objects, such as satellites or other spacecraft, in space where illumination levels of the object may vary widely from one viewpoint to another.
- a method for generating a three dimensional (3D) model of an object includes capturing, by a two dimensional (2D) imaging sensor, a 2D image of the object.
- the 2D image includes a 2D image plane.
- the method also includes capturing, by a 3D imaging sensor, a 3D image of the object.
- the 3D image of the object includes a 3D point cloud.
- the 3D point cloud includes a multiplicity of points, and the 3D point cloud includes a plurality of missing points or holes in the 3D point cloud.
- the method additionally includes generating, by a processor, an upsampled 3D point cloud from the 3D image using local entropy data of the 2D image to fill at least some missing points or holes in the 3D point cloud and merging, by the processor, a 3D model point cloud from a previous viewpoint or location of a sensor platform and the upsampled 3D point cloud to create a new 3D model point cloud.
- the method further includes quantizing the new 3D model point cloud to generate an updated 3D model point cloud.
- a method for generating a three dimensional (3D) model of an object includes capturing, by a two dimensional (2D) imaging sensor, a 2D image of the object.
- the 2D image includes a 2D image plane.
- the method also includes capturing, by a 3D imaging sensor, a 3D image of the object.
- the 3D image of the object includes a 3D point cloud.
- the 3D point cloud includes a multiplicity of points, and the 3D point cloud includes a plurality of missing points or holes in the 3D point cloud.
- the 3D point cloud also includes a 3D depth map, wherein each of the points of the 3D point cloud includes depth information of a corresponding location on the object.
- the method also includes upsampling, by a processor, to generate an upsampled 3D point cloud from the 3D image using local entropy data of pixels within a predefined upsampling window.
- the upsampled 3D point cloud includes filled points for at least selected missing points or holes in the 3D point cloud.
- the method additionally includes generating, by the processor, multiple upsampled 3D point clouds from different viewpoints or locations of a sensor platform including the 2D imaging sensor and the 3D imaging sensor.
- the method further includes merging, by the processor, the multiple upsampled 3D point clouds to generate a 3D model point cloud of the object.
- a system for generating a three dimensional (3D) model of an object includes a two dimensional (2D) imaging sensor for capturing a 2D image of the object.
- the 2D image includes a 2D image plane.
- the system also includes a 3D imaging sensor for capturing a 3D image of the object.
- the 3D image of the object includes a 3D point cloud.
- the 3D point cloud includes a multiplicity of points, and the 3D point cloud includes a plurality of missing points or holes in the 3D point cloud.
- the 3D point cloud also includes a 3D depth map, wherein each of the points of the 3D point cloud includes depth information of a corresponding location on the object.
- the system also includes a processor configured to perform a set of functions including upsampling to generate an upsampled 3D point cloud from the 3D image using local entropy data of pixels within a predefined upsampling window.
- the upsampled 3D point cloud includes filled points for at least selected missing points or holes in the 3D point cloud.
- the set of functions also includes generating multiple upsampled 3D point clouds from different viewpoints or locations of a sensor platform that includes the 2D imaging sensor and the 3D imaging sensor.
- the set of functions further includes merging the multiple upsampled 3D point clouds to generate a 3D model point cloud of the object.
- the method, system or set of functions further includes performing a process including moving the sensor platform to a next viewpoint or location relative to the object.
- the sensor platform includes the 2D imaging sensor and the 3D imaging sensor.
- the process also includes capturing, by the 2D imaging sensor, a subsequent 2D image of the object at a current viewpoint or location of the sensor platform and capturing, by the 3D imaging sensor, a subsequent 3D image of the object at the current viewpoint or location of the sensor platform.
- the subsequent 3D image of the object includes a subsequent 3D point cloud including a plurality of missing points or holes.
- the process also includes generating a current upsampled 3D point cloud for the current viewpoint or location of the sensor platform from the subsequent 3D image and using local entropy data of the subsequent 2D image to fill at least some of the plurality of missing points or holes in the subsequent 3D point cloud.
- the process additionally includes registering the updated 3D model point cloud from the previous viewpoint or location of the sensor platform with original points of the subsequent 3D point cloud without entropy based upsampling and merging the updated 3D model point cloud from the previous viewpoint or location of the sensor platform and the current upsampled 3D point cloud to create a current new 3D model point cloud.
- the process further includes quantizing the current new 3D model point cloud to generate a current updated 3D model point cloud for the current viewpoint or location of the sensor platform.
- the method or system additionally includes repeating the process for each of a set of viewpoints or locations of the sensor platform.
- the method, process or system further includes determining a homogeneous transform from the updated 3D model point cloud from the previous viewpoint or location of the sensor platform and the original points of a current 3D point cloud at the current viewpoint or location of the sensor platform without entropy-based upsampling using an iterative closest point process.
- the method, process or system also includes adjusting the current upsampled 3D point cloud to align or coordinate with the updated 3D model point cloud from the previous viewpoint or location of the sensor platform before merging using the homogeneous transform.
- the method, process or system further includes performing an operation with respect to the object.
- performing an operation with respect to the object includes one of performing an autonomous space rendezvous; performing a proximity maneuver; performing a docking maneuver; or generating a 3D model of the object, wherein the object is a space object.
- the method, process or system additionally includes aligning the 2D image and the 3D image using pre-acquired calibration information.
- the method, process or system additionally includes assigning a depth value from each of a predetermined number of points in the 3D point cloud of the 3D image to respective matching pixels on the 2D image plane.
- the method, process or system also includes interpolating depth values for other pixels on the 2D image plane from the 3D point cloud of the 3D image.
- the method, process or system wherein the interpolating includes using a predefined upsampling window around a currently processed pixel, and performing upsampling using the local entropy data of the pixels of the 2D image within the predefined upsampling window.
- the method, process or system further includes aligning the 2D image and the 3D image; assigning a depth value from selected points of the 3D point cloud or 3D depth map to a matching pixel on the 2D image plane; and interpolating depth values for other pixels on the 2D image plane within the predefined upsampling window.
- merging the multiple upsampled 3D point clouds includes performing point cloud registration and quantization to generate the 3D model point cloud of the object.
- merging the multiple upsampled 3D point clouds includes using an iterative closest point process to generate the 3D model point cloud of the object.
- the 2D imaging sensor includes an electro-optical camera to capture a 2D electro-optical image and wherein the 3D imaging sensor includes a 3D Light Detection and Ranging (LiDAR) imaging sensor.
- FIG. 1A is a block schematic diagram of an example of a system for generating a 3D model point cloud of an object in accordance with an embodiment of the present disclosure.
- FIG. 1B is an illustration of the exemplary system of FIG. 1A showing an example of an image plane and using heterogeneous 2D and 3D sensor fusion in accordance with an embodiment of the present disclosure.
- FIG. 2 is a flow chart of an example of a method for generating a 3D model point cloud of an object in accordance with an embodiment of the present disclosure.
- FIG. 3A is a diagram of an example of a 2D image or a 2D electro-optical (EO) image of the object in accordance with an embodiment of the present disclosure.
- FIG. 3B is a diagram of an example of a 3D image of the object including a point cloud in accordance with an embodiment of the present disclosure.
- FIG. 4A is a diagram illustrating an example of resolution of the 2D image in accordance with an embodiment of the present disclosure.
- FIG. 4B is a diagram illustrating an example of resolution of the 3D image or 3D point cloud compared to the 2D image including pixels in FIG. 4A in accordance with an embodiment of the present disclosure.
- FIG. 4C is a diagram illustrating an example of generating an upsampled 3D point cloud in accordance with an embodiment of the present disclosure.
- FIG. 5 is a diagram illustrating an example of an entropy image of the 2D image in FIG. 3A in accordance with an embodiment of the present disclosure.
- FIGS. 6A and 6B are a flow chart of an example of a method for generating a 3D model point cloud of an object in accordance with another embodiment of the present disclosure.
- FIG. 7 is a block schematic diagram illustrating portions of the exemplary method in FIGS. 6A and 6B .
- FIG. 1A is a block schematic diagram of an example of a system 100 for generating a 3D model point cloud 102 of an object 104 in accordance with an embodiment of the present disclosure.
- the 3D model point cloud 102 may also be referred to herein as simply the 3D model.
- the system 100 includes a two dimensional (2D) imaging sensor 106 for capturing a 2D image 108 of the object 104 .
- An example of the 2D imaging sensor 106 is an electro-optical camera 109 that generates a high resolution 2D electro-optical image.
- Other examples of the 2D imaging sensor include any type of device capable of generating a high resolution 2D image.
- the 2D image 108 includes a plurality of pixels 112 that provide a predetermined high resolution 114 .
- the 2D image 108 also includes a 2D image plane 110 or is referenced in the 2D image plane 110 as illustrated in FIG. 1B .
- the system 100 also includes a 3D imaging sensor 116 for capturing a 3D image 118 of the object 104 .
- the 3D image 118 of the object 104 includes a 3D point cloud 120 .
- the 3D point cloud 120 includes a multiplicity of points 122 , and the 3D point cloud 120 includes a plurality of missing points 124 or holes that render the 3D point cloud 120 unusable as a 3D model point cloud for performing an operation with respect to the object 104 .
- Each of the points 122 of the 3D point cloud 120 includes corresponding location information or XYZ coordinate information of a corresponding location 126 on a surface 128 of the object 104 .
- the 3D point cloud 120 includes or defines a 3D depth map 130 that includes the location or coordinate information of locations 126 or points on the object 104 . Therefore, each of the points 122 of the 3D point cloud 120 includes depth information 132 or location information of the corresponding location 126 or point on the object 104 .
- the depth information 132 includes depth values 133 .
- the 3D image 118 or 3D point cloud 120 includes a resolution 134 that is less than the resolution 114 of the 2D image 108 as also illustrated and described with reference to FIGS. 4A and 4B .
- the 3D imaging sensor 116 is a Light Detection and Ranging (LiDAR) imaging sensor 135 or similar imaging sensor capable of generating a 3D image or 3D point cloud as described herein.
- the 2D imaging sensor 106 and the 3D imaging sensor 116 are associated with or mounted to a sensor platform 136 .
- the sensor platform 136 is configured to move to different viewpoints 138 or locations to capture 2D images 108 and corresponding 3D images 118 at the different viewpoints 138 or locations of the sensor platform 136 .
- the multiple 2D images 108 and multiple 3D images 118 captured at different viewpoints 138 or locations of the sensor platform 136 are stored in a memory device 140 .
- the 2D image 108 data and the 3D image 118 data are combined or fused for each viewpoint 138 or location of the sensor platform 136 and the data for each of the viewpoints are combined or fused to generate the 3D model point cloud 102 or updated 3D model point cloud.
- the sensor platform is a vehicle, such as a spacecraft or other type of vehicle.
- the system 100 also includes an image processing system 142 .
- the memory device 140 is a component of the image processing system 142 .
- the image processing system 142 includes a processor 144 .
- the image processing system 142 or processor 144 is configured for generating a 3D model point cloud 102 of the object 104 .
- the 3D model point cloud 102 is used to perform an operation on the object 104 .
- the 3D model point cloud 102 is actually an updated 3D model point cloud that is generated by sensor data fusion or combining the 2D image 108 and the 3D image 118 at the same viewpoint 138 using entropy-based upsampling as described in more detail with reference to FIG. 4C .
- the image processing system 142 or processor 144 is configured to perform a set of functions 145 including in block 146 sensor data fusion from the 2D imaging sensor 106 and the 3D imaging sensor 116 or combining the 2D image 108 and a corresponding 3D image 118 at the same viewpoint 138 .
- An upsampled 3D point cloud 148 is generated from the 3D image by upsampling using local entropy data 149 of pixels 112 of the 2D image 108 within a predefined upsampling window 402 in FIG. 4C .
- An example of using a predefined upsampling window 402 for upsampling using local entropy data 149 of the pixels 112 will be described in more detail with reference to FIG. 4C .
- the upsampled 3D point cloud 148 includes filled points 150 for at least selected missing points 124 or holes in the 3D point cloud 120 of the 3D image 118 .
- Multiple upsampled 3D point clouds 148 are generated from different viewpoints 138 or locations of the sensor platform 136 .
- registration of multiple point clouds 148 is performed with only selected points from the point clouds 148 and using entropy-based upsampling.
- the points 122 that have lower entropies compared to other points 122 are selected because the lower-entropy points 122 are more certain, or there is more confidence in the depth information 132 or location information of these points 122 with the lower entropies compared to points 122 with higher entropies.
- An example of multiple point cloud registration is described in more detail in U.S. Pat. No. 9,972,067 and will be briefly described with reference to FIG. 1B .
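- For illustration only, the sketch below shows one way such an entropy-based point selection could look, assuming each 3D point carries the local entropy of the 2D pixel it was fused with; the function name, array layout, and keep fraction are hypothetical and not taken from the patent.

```python
import numpy as np

def select_low_entropy_points(points_xyz, point_entropies, keep_fraction=0.5):
    """Keep the fraction of points whose local entropy is lowest.

    Lower-entropy points are treated as more reliable, so they are
    preferred when registering multiple point clouds.
    """
    threshold = np.quantile(point_entropies, keep_fraction)
    mask = point_entropies <= threshold
    return points_xyz[mask]
```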
- point quantization or subsampling is performed using color information from the 2D image 108 for filtering the registered upsampled 3D point clouds 148 to generate an updated 3D model point cloud or 3D model point cloud 102 of the object 104 .
- point quantization or subsampling is performed by selecting every Nth point in the entire set of points 122 and removing other points. The total number of points 122 in the 3D point cloud 120 will be reduced.
- Another (more computationally expensive) way of quantization is a grid-based method, where a 3D space is divided into a 3D cubic grid and a centroid point is calculated from all the points 122 within every unit grid cell.
- centroid points are the quantized version of the original 3D point cloud 120 . Accordingly, the multiple upsampled 3D point clouds 148 are merged to generate the updated 3D model point cloud or 3D model point cloud 102 of the object 104 .
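- The two quantization approaches described above can be sketched as follows; this is a minimal illustration, and the function names, cell size, and N are placeholders rather than values from the patent.

```python
import numpy as np

def subsample_every_nth(points, n=4):
    """Keep every Nth point and remove the rest."""
    return points[::n]

def grid_centroid_quantize(points, cell_size=0.05):
    """Divide 3D space into a cubic grid and replace the points in each
    occupied cell with their centroid."""
    cells = np.floor(points / cell_size).astype(np.int64)
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    centroids = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(centroids, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return centroids / counts[:, None]
```

- The every-Nth variant is cheaper, while the grid-based variant produces a more evenly distributed model, which matches the trade-off noted above.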
- the updated 3D model point cloud 102 is used in block 152 for registration with other 3D point clouds.
- the updated 3D model point cloud 102 is used for sensor platform pose estimation.
- the sensor platform pose estimation means the relative distance and orientation of the sensor platform 136 with respect to the object 104 that can be used for approaching, manipulation, or rendezvous.
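- As a purely illustrative aside, the relative distance and orientation can be read directly off a 4x4 homogeneous transform between the sensor platform frame and the object model frame; the decomposition below is generic and is not a procedure recited by the patent.

```python
import numpy as np

def pose_from_homogeneous(transform_4x4):
    """Split a 4x4 homogeneous transform into relative range and orientation."""
    rotation = transform_4x4[:3, :3]       # relative orientation (3x3 rotation)
    translation = transform_4x4[:3, 3]     # relative position (XYZ offset)
    return translation, float(np.linalg.norm(translation)), rotation
```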
- the sensor platform 136 is controlled using the sensor platform pose estimation 156 and/or 3D model point cloud 102 of the object 104 .
- the 3D model point cloud 102 is used to perform an operation with respect to the object 104 .
- Examples of controlling the sensor platform 136 and/or performing an operation with respect to the object 104 include, but are not necessarily limited to, performing an autonomous rendezvous between the sensor platform 136 or space vehicle and the object 104 , performing a proximity maneuver by the sensor platform 136 relative to the object 104 , performing a docking maneuver between the sensor platform 136 or space vehicle and the object 104 which is another space vehicle, or generating a 3D image of the object 104 which is a space object such as an asteroid or other space object.
- FIG. 1B is an illustration of the exemplary system of FIG. 1A showing an example of an image plane and using heterogeneous 2D and 3D sensor fusion in accordance with an embodiment of the present disclosure.
- a calibration procedure between the 3D point cloud 120 and the 2D image 108 is performed by the processor 144 or image processing system 142 to determine the relative poses of the 3D point cloud 120 and the 2D image 108 .
- Synchronizing or aligning the 3D image 118 or 3D point cloud 120 with the 2D image involves fusing or combining the data from the 2D imaging sensor 106 and the 3D imaging sensor 116 for the same viewpoint 138 or location of the sensor platform 136 , which is also referred to as heterogeneous 2D and 3D sensor fusion.
- the image processing system 142 is configured to determine a feature point 160 A of the object 104 within the 3D point cloud 120 as well as a corresponding pixel location 162 in the image plane 110 of the 2D image 108 which corresponds to the feature point 160 A as shown in FIG. 1A .
- the feature point 160 A corresponds to the pixel location 162 on the image plane 110 (e.g., the two dimensional projection of a 3D scene onto a two dimensional image captured by the 2D imaging sensor 106 ).
- the image processing system 142 or processor 144 is also configured to determine a predetermined number of feature points 160 A-C (or common points) in the 3D point cloud 120 and the pixel locations corresponding to each respective feature point 160 A-C on the image plane 110 of the 2D image 108 captured by the 2D imaging sensor 106 .
- two or more pairs of feature points 160 A-C in the 3D point cloud 120 and corresponding pixel locations on the image plane 110 are determined by the image processing system 142 or processor 144 .
- each of the feature points 160 A-C in the 3D point cloud 120 (e.g., the 3D depth map 130 ) provides a 3D location (e.g., a depth value of the point)
- the 3D point cloud 120 provides the image processing system 142 or processor 144 with the depth value or 3D location information for each of the corresponding pixels 112 to the feature points 160 A-C on the image plane 110 .
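- A minimal sketch of the underlying geometry, assuming a standard pinhole projection: a 3D feature point is mapped to its pixel location on the image plane through the 2D sensor's intrinsic matrix and the relative pose obtained from calibration. The matrix names and pose representation are assumptions for illustration, not the patent's notation.

```python
import numpy as np

def project_feature_point(point_xyz, camera_matrix, rotation, translation):
    """Project one 3D feature point onto the 2D image plane.

    point_xyz     : (3,) feature point expressed in the 3D point cloud frame.
    camera_matrix : (3, 3) intrinsics of the 2D imaging sensor (assumed known).
    rotation, translation : relative pose of the 2D sensor with respect to the
                            3D sensor, e.g. from the calibration procedure.
    Returns the (u, v) pixel location and the depth of the point.
    """
    p_cam = rotation @ point_xyz + translation    # move into the camera frame
    uvw = camera_matrix @ p_cam                   # pinhole projection
    return uvw[:2] / uvw[2], p_cam[2]
```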
- FIG. 2 is a flow chart of an example of a method 200 for generating a 3D model point cloud of an object in accordance with an embodiment of the present disclosure.
- the method 200 is embodied in and performed by the system 100 in FIGS. 1A and 1B .
- the set of functions 145 includes the method 200 .
- generating the 3D model point cloud includes generating the 3D model point cloud using heterogeneous 2D and 3D sensor fusion in that data from the 2D imaging sensor 106 is combined or fused with data from the 3D imaging sensor 116 for the same viewpoint 138 or location of the sensor platform 136 .
- a 2D image of the object is captured by a 2D imaging sensor, such as 2D imaging sensor 106 in FIG. 1A .
- the 2D imaging sensor is an electro-optical camera and the 2D image is a 2D electro-optical (EO) image of the object.
- the 2D image includes a 2D image plane, for example, 2D image plane 110 in FIG. 1B .
- FIG. 3A is a diagram of an example of a 2D image 108 or 2D electro-optical (EO) image of the object 104 in accordance with an embodiment of the present disclosure.
- a 3D image of the object is captured by a 3D imaging sensor, such as 3D imaging sensor 116 in FIG. 1A .
- the 3D imaging sensor is a Light Detection and Ranging (LiDAR) imaging sensor or similar device for capturing the 3D image of the object.
- FIG. 3B is a diagram of an example of a 3D image 118 of the object 104 including a 3D point cloud 120 in accordance with an embodiment of the present disclosure.
- the 3D point cloud 120 includes a multiplicity of points 122 ( FIG. 1A ) and includes a plurality of missing points 124 or holes ( FIG. 1A ).
- each of the points 122 in the 3D point cloud 120 includes 3D location information or XYZ coordinate information for a corresponding location 126 or point on the surface 128 of the object 104 .
- each point 122 in the 3D point cloud 120 includes depth information 132 associated with the corresponding location 126 or point on the object 104 and the 3D point cloud 120 includes or defines a 3D depth map 130 , wherein each of the points 122 of the 3D point cloud 120 includes depth information 132 of a corresponding location 126 on the object 104 .
- the 2D image and the 3D image are aligned.
- the 2D image and the 3D image are aligned using pre-acquired calibration information.
- the pre-acquired calibration information includes parameters of scale difference, translation offset, and rotation offset.
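- A minimal sketch of applying such pre-acquired calibration parameters, assuming they are expressed as a 2D similarity transform (scale, rotation offset, translation offset) that maps projected 3D point locations onto the 2D image plane; this parameterization is an assumption for illustration.

```python
import numpy as np

def apply_calibration_offsets(points_uv, scale, rotation_rad, translation_uv):
    """Apply scale, rotation, and translation offsets to projected point
    locations so they line up with pixels of the 2D image."""
    c, s = np.cos(rotation_rad), np.sin(rotation_rad)
    rot = np.array([[c, -s], [s, c]])
    return scale * (points_uv @ rot.T) + np.asarray(translation_uv)
```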
- a depth value is assigned from selected points 122 of the 3D point cloud 120 or 3D depth map 130 to a matching or corresponding pixel 112 on the 2D image plane 110 .
- a depth value is assigned from each of a predetermined number of points 122 in the 3D point cloud 120 of the 3D image 118 to respective matching or corresponding pixels 112 on the 2D image plane 110 .
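- One plausible way to carry out this assignment, sketched below, is to rasterize the aligned 3D points into a sparse depth map on the 2D image plane, keeping the nearest depth when several points land on the same pixel; the nearest-depth rule and the NaN convention for holes are assumptions, not details from the patent.

```python
import numpy as np

def build_sparse_depth_map(points_uv, depths, height, width):
    """Assign each 3D point's depth value to its matching pixel on the 2D
    image plane; pixels with no matching point remain NaN (holes)."""
    depth_map = np.full((height, width), np.nan)
    cols = np.round(points_uv[:, 0]).astype(int)
    rows = np.round(points_uv[:, 1]).astype(int)
    valid = (rows >= 0) & (rows < height) & (cols >= 0) & (cols < width)
    for r, c, d in zip(rows[valid], cols[valid], depths[valid]):
        if np.isnan(depth_map[r, c]) or d < depth_map[r, c]:
            depth_map[r, c] = d  # keep the nearest surface point
    return depth_map
```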
- FIG. 4A is a diagram illustrating an example of resolution of the 2D image 108 in accordance with an embodiment.
- FIG. 4B is a diagram illustrating an example of resolution of the 3D image 118 or 3D point cloud 120 compared to the resolution of the 2D image 108 in FIG. 4A in accordance with an embodiment of the present disclosure.
- the 2D image 108 or electro-optic image has a much higher resolution compared to the 3D image 118 or LiDAR image.
- Interpolating depth values for other pixels 113 of the 2D image 108 or the 2D image plane 110 with no depth values includes using a predefined upsampling window 402 around a currently processed pixel 113 , enclosed in circle 404 , and performing upsampling using the entropy data of the pixels 112 of the 2D image 108 within the upsampling window 402 that have assigned depth values from the 3D image 118 or 3D depth map 130 .
- the entropy data of the pixels 112 within the upsampling window 402 define the local entropies of the 2D image 108 .
- the entropy data of the 2D image 108 is the measure of variance in pixel levels of an electro-optical pixel within the 2D image 108 relative to its neighboring pixels.
- the entropy of an image can be represented as the degree of change or noise between one pixel and its neighboring pixels.
- regions with relatively low entropy represent regions of substantially uniform surfaces or smooth features.
- Regions of an image with high entropy represent regions of substantial variation between neighboring pixels within an image, which represents high noise and/or high variability in the surface (e.g., resulting in an irregular surface).
- An entropy image 502 of the 2D image 108 or electro-optic (EO) image ( FIG. 3A ) is shown in FIG. 5 .
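- A minimal sketch of how such an entropy image can be computed: per-pixel Shannon entropy of the intensity histogram in a small neighborhood. The window size and bin count are illustrative choices, not values specified by the patent.

```python
import numpy as np

def local_entropy_image(gray, window=9, bins=16):
    """Return an entropy image: the Shannon entropy of the intensity
    histogram in a (window x window) neighborhood of each pixel.

    gray is assumed to be an 8-bit grayscale image (values 0-255).
    """
    half = window // 2
    padded = np.pad(gray, half, mode="reflect")
    out = np.zeros(gray.shape, dtype=float)
    for r in range(gray.shape[0]):
        for c in range(gray.shape[1]):
            patch = padded[r:r + window, c:c + window]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist[hist > 0] / patch.size
            out[r, c] = -np.sum(p * np.log2(p))
    return out
```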
- Interpolating the depth values includes using the assigned depth values of pixels 112 neighboring the currently processed pixel 113 to determine a probable depth value for the currently processed pixel 113 .
- An example of interpolating depth values for pixels 113 without an assigned depth value using neighboring pixels 112 that have an assigned depth value is described in U.S. Pat. No. 9,972,067 which is incorporated herein by reference.
- an upsampled 3D point cloud 148 is generated, by the processor, from the 3D image 118 by upsampling the 3D point cloud 120 using local entropy data 149 of pixels 112 within the upsampling window 402 .
- the upsampled 3D point cloud 148 includes filled points 150 for at least selected missing points 124 or holes in the 3D point cloud 120 .
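- The exact interpolation rule is given in U.S. Pat. No. 9,972,067 and is not reproduced here; the sketch below shows one plausible entropy-guided scheme in which known depths inside the upsampling window are averaged with weights inversely proportional to their local entropy, so that smoother (lower-entropy, higher-confidence) neighbors dominate the filled value.

```python
import numpy as np

def fill_holes_entropy_weighted(depth_map, entropy_image, window=7, eps=1e-6):
    """Fill NaN pixels of a sparse depth map using an entropy-weighted
    average of the known depths inside the upsampling window."""
    half = window // 2
    filled = depth_map.copy()
    holes = np.argwhere(np.isnan(depth_map))
    for r, c in holes:
        r0, r1 = max(r - half, 0), min(r + half + 1, depth_map.shape[0])
        c0, c1 = max(c - half, 0), min(c + half + 1, depth_map.shape[1])
        d = depth_map[r0:r1, c0:c1]
        e = entropy_image[r0:r1, c0:c1]
        known = ~np.isnan(d)
        if known.any():
            weights = 1.0 / (e[known] + eps)   # low entropy -> high confidence
            filled[r, c] = np.sum(weights * d[known]) / np.sum(weights)
    return filled
```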
- multiple upsampled 3D point clouds 148 are generated from different viewpoints 138 or locations of the sensor platform 136 using entropy based upsampling.
- the multiple upsampled 3D point clouds 148 are matched or registered and merged to generate the 3D model point cloud 102 of the object 104 .
- matching and merging the multiple upsampled 3D point clouds 148 includes performing registration and quantization or subsampling of the multiple upsampled 3D point clouds 148 using an iterative closest point process to generate the 3D model point cloud 102 or updated 3D model point cloud of the object 104 .
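- For reference, a compact, generic iterative closest point loop of the kind mentioned above, using nearest-neighbor correspondences and a closed-form SVD alignment step; this is textbook ICP shown for illustration and is not the patent's specific registration implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def iterative_closest_point(source, target, iterations=30):
    """Estimate the 4x4 homogeneous transform that aligns source to target."""
    transform = np.eye(4)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                  # nearest-neighbor matches
        matched = target[idx]
        src_mean, tgt_mean = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_mean).T @ (matched - tgt_mean)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_mean - R @ src_mean
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        transform = step @ transform
    return transform
```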
- an operation is performed on the object 104 or with respect to the object 104 using the 3D model point cloud 102 or final updated 3D model point cloud if all 2D images and corresponding 3D images at all viewpoints 138 or locations of the sensor platform 136 have been captured and merged as described herein.
- Examples of the operation on the object 104 or with respect to the object 104 include, but are not necessarily limited to, performing an autonomous space rendezvous with another object or spacecraft; performing a proximity maneuver with respect to another object or spacecraft; performing a docking maneuver with respect to another object or spacecraft; or generating the 3D model point cloud of the object, wherein the object is a space object, such as an asteroid or other object in space.
- FIGS. 6A and 6B are a flow chart of an example of a method 600 for generating a 3D model point cloud of an object in accordance with another embodiment of the present disclosure.
- FIG. 7 is a block schematic diagram illustrating portions of the exemplary method 600 in FIGS. 6A and 6B .
- the method 600 is embodied in and performed by the system 100 in FIGS. 1A and 1B .
- the set of functions 145 includes the method 600 .
- generating the 3D model point cloud includes generating the 3D model point cloud using heterogeneous 2D and 3D sensor fusion in that data from the 2D imaging sensor 106 is combined or fused with data from the 3D imaging sensor 116 for the same viewpoint 138 or location of the sensor platform 136 for each viewpoint 138 or location of the sensor platform 136 .
- a 2D image of the object is captured by a 2D imaging sensor and a 3D image of the object is captured by a 3D imaging sensor for a current sensor platform viewpoint or location relative to the object.
- the 2D image and the 3D image may be stored in a memory device, such as memory device 140 in FIG. 1A .
- the 2D image includes a 2D image plane.
- the 3D image of the object includes a 3D point cloud.
- the 3D point cloud includes a multiplicity of points and includes a plurality of missing points or holes in the 3D point cloud that render the 3D point cloud unusable for performing an operation with respect to the object.
- an upsampled 3D point cloud 702 ( FIG. 7 ) is generated from the 3D image or 3D point cloud 700 using local entropy data of the 2D image, similar to that previously described, for the current viewpoint or location of the sensor platform to fill-in missing points or holes in the 3D point cloud.
- the current 3D model point cloud 704 ( FIG. 7 ) at the first viewpoint or first iteration is empty. As described in more detail herein, for subsequent viewpoints or iterations, a current 3D model point cloud 704 , which is the updated 3D model point cloud 712 from a previous viewpoint of the sensor platform, and the upsampled 3D point cloud 702 are merged 706 ( FIG. 7 ) to create a new 3D model point cloud 708 .
- the new 3D model point cloud 708 is quantized 710 or subsampled to generate an updated 3D model point cloud 712 (M K ).
- a subsequent 2D image of the object is captured by the 2D imaging sensor at a current viewpoint or location of the sensor platform and a subsequent 3D image of the object is captured by the 3D imaging sensor at the current viewpoint or location.
- the subsequent 3D image of the object includes a subsequent 3D point cloud including missing points or holes.
- a current upsampled 3D point cloud 702 ( FIG. 7 ) for the current viewpoint or location is generated from the subsequent 3D image and using local entropy data from the subsequent 2D image to fill at least some of the plurality of missing points or holes in the subsequent 3D point cloud.
- the method 600 advances to block 608 .
- the current 3D model point cloud 704 ( FIG. 7 ), which is the updated 3D model point cloud 712 from a previous viewpoint or location (Frame K−1) of the sensor platform, is registered 714 with original points of the subsequent 3D point cloud (3D depth map) 716 at the current viewpoint or location (Frame K) of the sensor platform without entropy based upsampling.
- a homogeneous transform (H K ) 718 ( FIG. 7 ) is determined from the current 3D model point cloud 704 , which is the updated 3D model point cloud 712 from the previous viewpoint (M K−1 ) or location of the sensor platform, and the original points of a current 3D point cloud 716 at the current viewpoint or location without entropy-based upsampling (PC ORG K ), using an iterative closest point process according to equation 1:
- the current (Frame K) upsampled 3D point cloud 702 is transformed into a new 3D model point cloud 708 (aligned coordinate frame) using the homogeneous transform according to Equation 2:
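- Equations 1 and 2 themselves did not survive extraction; based on the surrounding description, a plausible reconstruction (shown only as an assumed form, not the patent's exact notation) is:

```latex
% Assumed form of Eq. 1: ICP between the previous model and the current raw cloud
H_K = \operatorname{ICP}\left( M_{K-1},\ PC^{\mathrm{ORG}}_{K} \right)

% Assumed form of Eq. 2: map the current upsampled cloud into the model frame
PC^{\mathrm{new}}_{K} = H_K \, PC^{\mathrm{UP}}_{K}
```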
- the current upsampled 3D point cloud 702 is adjusted to align or coordinate with the current 3D model point cloud 704 which is the updated 3D model point cloud 712 from the previous viewpoint or location of the sensor platform before merging 706 using the homogeneous transform 718 .
- the updated 3D model point cloud 712 created at the previous viewpoint or location of the sensor platform, which is the current 3D model point cloud 704 , and the current upsampled 3D point cloud 702 are merged 706 to create a current new 3D model point cloud 708 .
- the current new 3D model point cloud 708 is quantized 710 to generate a current updated 3D model point cloud 712 for the current viewpoint or location of the sensor platform similar to that previously described.
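- Pulling the per-viewpoint steps described above into one place, the sketch below shows the overall loop; every callable passed in is a placeholder for an operation described in the text (entropy-based upsampling, ICP registration, transform application, merging, quantization), not a routine named by the patent.

```python
def build_model_point_cloud(viewpoints, capture_2d, capture_3d,
                            entropy_upsample, register_icp,
                            apply_transform, merge, quantize):
    """Illustrative per-viewpoint loop: capture, upsample, register, merge,
    and quantize to maintain an updated 3D model point cloud."""
    model = None                               # empty at the first viewpoint
    for viewpoint in viewpoints:
        image_2d = capture_2d(viewpoint)       # 2D EO image
        cloud_3d = capture_3d(viewpoint)       # sparse 3D point cloud with holes
        upsampled = entropy_upsample(cloud_3d, image_2d)
        if model is None:
            model = upsampled
        else:
            # Register against the original (non-upsampled) points, then bring
            # the current upsampled cloud into the model's coordinate frame.
            transform = register_icp(model, cloud_3d)
            upsampled = apply_transform(transform, upsampled)
            model = merge(model, upsampled)
        model = quantize(model)                # keep the model compact
    return model
```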
- an operation is performed with respect to the object or an operation is performed on the object using a final updated 3D model point cloud 712 determined at the last viewpoint or location of the sensor platform. Examples of the operations include but are not necessarily limited to performing an autonomous space rendezvous with another object or spacecraft; performing a proximity maneuver relative to another object or spacecraft; performing a docking maneuver with another object or spacecraft; or generating the 3D model of the object.
- the object is a space object, such as an asteroid, spacecraft or other space object.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Abstract
Description
- The present application is related to U.S. patent application Ser. No. 15/290,429, filed Oct. 11, 2016, entitled "System and Method for Upsampling of Sparse Point Cloud for 3D Registration," now U.S. Pat. No. 9,972,067, which is assigned to the same assignee as the present application and is incorporated herein by reference.
- The present disclosure relates to three dimensional (3D) model generation for performance of an operation with respect to an object and more particularly to three dimensional model generation using heterogeneous two dimensional (2D) and three dimensional (3D) sensor fusion.
- A three-dimensional (3D) model may be developed by acquiring a 3D image of an object using a 3D scanning technology such as, for example, light detection and ranging (LiDAR). The 3D image generated provides a point cloud that is a collection of points in a 3D coordinate system. Each point in the point cloud represents XYZ coordinates within the 3D coordinate system. Typically, the points within the point cloud represent the location of points on an exterior surface of the object and define a 3D model of the object. The 3D model may be used in performing an operation on the object, for instance, by a robot or other device. However, the 3D model represented by the point cloud often suffers from insufficient features on objects or feature detection discrepancy among images because of illumination changes. Such conditions may be particularly present with objects, such as satellites or other spacecraft, in space where illumination levels of the object may vary widely from one viewpoint to another. This results in incomplete 3D models with holes or missing information or data on surfaces. Accordingly, there is a need to fill-in as many holes as possible and generate a denser or more complete 3D model that is usable for performing an operation with respect to the object or for performing an operation on the object.
- In accordance with an embodiment, a method for generating a three dimensional (3D) model of an object includes capturing, by a two dimensional (2D) imaging sensor, a 2D image of the object. The 2D image includes a 2D image plane. The method also includes capturing, by a 3D imaging sensor, a 3D image of the object. The 3D image of the object includes a 3D point cloud. The 3D point cloud includes a multiplicity of points, and the 3D point cloud includes a plurality of missing points or holes in the 3D point cloud. The method additionally includes generating, by a processor, an upsampled 3D point cloud from the 3D image using local entropy data of the 2D image to fill at least some missing points or holes in the 3D point cloud and merging, by the processor, a 3D model point cloud from a previous viewpoint or location of a sensor platform and the upsampled 3D point cloud to create a new 3D model point cloud. The method further includes quantizing the new 3D model point cloud to generate an updated 3D model point cloud.
- In accordance with another embodiment, a method for generating a three dimensional (3D) model of an object includes capturing, by a two dimensional (2D) imaging sensor, a 2D image of the object. The 2D image includes a 2D image plane. The method also includes capturing, by a 3D imaging sensor, a 3D image of the object. The 3D image of the object includes a 3D point cloud. The 3D point cloud includes a multiplicity of points, and the 3D point cloud includes a plurality of missing points or holes in the 3D point cloud. The 3D point cloud also includes a 3D depth map, wherein each of the points of the 3D point cloud includes depth information of a corresponding location on the object. The method also includes upsampling, by a processor, to generate an upsampled 3D point cloud from the 3D image using local entropy data of pixels within a predefined upsampling window. The upsampled 3D point cloud includes filled points for at least selected missing points or holes in the 3D point cloud. The method additionally includes generating, by the processor, multiple upsampled 3D point clouds from different viewpoints or locations of a sensor platform including the 2D imaging sensor and the 3D imaging sensor. The method further includes merging, by the processor, the multiple upsampled 3D point clouds to generate a 3D model point cloud of the object.
- In accordance with a further embodiment, a system for generating a three dimensional (3D) model of an object includes a two dimensional (2D) imaging sensor for capturing a 2D image of the object. The 2D image includes a 2D image plane. The system also includes a 3D imaging sensor for capturing a 3D image of the object. The 3D image of the object includes a 3D point cloud. The 3D point cloud includes a multiplicity of points, and the 3D point cloud includes a plurality of missing points or holes in the 3D point cloud. The 3D point cloud also includes a 3D depth map, wherein each of the points of the 3D point cloud includes depth information of a corresponding location on the object. The system also includes a processor configured to perform a set of functions including upsampling to generate an upsampled 3D point cloud from the 3D image using local entropy data of pixels within a predefined upsampling window. The upsampled 3D point cloud includes filled points for at least selected missing points or holes in the 3D point cloud. The set of functions also includes generating multiple upsampled 3D point clouds from different viewpoints or locations of a sensor platform that includes the 2D imaging sensor and the 3D imaging sensor. The set of functions further includes merging the multiple upsampled 3D point clouds to generate a 3D model point cloud of the object.
- In accordance with an embodiment and any of the previous embodiments, the method, system or set of functions further includes performing a process including moving the sensor platform to a next viewpoint or location relative to the object. The sensor platform includes the 2D imaging sensor and the 3D imaging sensor. The process also includes capturing, by the 2D imaging sensor, a subsequent 2D image of the object at a current viewpoint or location of the sensor platform and capturing, by the 3D imaging sensor, a subsequent 3D image of the object at the current viewpoint or location of the sensor platform. The subsequent 3D image of the object includes a subsequent 3D point cloud including a plurality of missing points or holes. The process also includes generating a current upsampled 3D point cloud for the current viewpoint or location of the sensor platform from the subsequent 3D image and using local entropy data of the subsequent 2D image to fill at least some of the plurality of missing points or holes in the subsequent 3D point cloud. The process additionally includes registering the updated 3D model point cloud from the previous viewpoint or location of the sensor platform with original points of the subsequent 3D point cloud without entropy based upsampling and merging the updated 3D model point cloud from the previous viewpoint or location of the sensor platform and the current upsampled 3D point cloud to create a current new 3D model point cloud. The process further includes quantizing the current new 3D model point cloud to generate a current updated 3D model point cloud for the current viewpoint or location of the sensor platform.
- In accordance with an embodiment and any of the previous embodiment, the method or system additionally includes repeating the process for each of a set of viewpoints or locations of the sensor platform.
- In accordance with an embodiment and any of the previous embodiments, the method, process or system further includes determining a homogeneous transform from the updated 3D model point cloud from the previous viewpoint or location of the sensor platform and the original points of a current 3D point cloud at the current viewpoint or location of the sensor platform without entropy-based upsampling using an iterative closest point process.
- In accordance with an embodiment and any of the previous embodiments, the method, process or system also includes adjusting the current upsampled 3D point cloud to align or coordinate with the updated 3D model point cloud from the previous viewpoint or location of the sensor platform before merging using the homogeneous transform.
- In accordance with an embodiment and any of the previous embodiments, the method, process or system further includes performing an operation with respect to the object.
- In accordance with an embodiment and any of the previous embodiments, wherein performing an operation with respect to the object includes one of performing an autonomous space rendezvous; performing a proximity maneuver; performing a docking maneuver; or generating a 3D model of the object, wherein the object is a space object.
- In accordance with an embodiment and any of the previous embodiments, the method, process or system additionally includes aligning the 2D image and the 3D image using pre-acquired calibration information.
- In accordance with an embodiment and any of the previous embodiments, the method, process or system additionally includes assigning a depth value from each of a predetermined number of points in the 3D point cloud of the 3D image to respective matching pixels on the 2D image plane.
- In accordance with an embodiment and any of the previous embodiments, the method, process or system also includes interpolating depth values for other pixels on the 2D image plane from the 3D point cloud of the 3D image.
- In accordance with an embodiment and any of the previous embodiments, the method, process or system, wherein the interpolating includes using a predefined upsampling window around a currently processed pixel, and performing upsampling using the local entropy data of the pixels of the 2D image within the predefined upsampling window.
- In accordance with an embodiment and any of the previous embodiments, the method, process or system further includes aligning the 2D image and the 3D image; assigning a depth value from selected points of the 3D point cloud or 3D depth map to a matching pixel on the 2D image plane; and interpolating depth values for other pixels on the 2D image plane within the predefined upsampling window.
- In accordance with an embodiment and any of the previous embodiments, wherein merging the multiple upsampled 3D point clouds includes performing point cloud registration and quantization to generate the 3D model point cloud of the object.
- In accordance with an embodiment and any of the previous embodiments, wherein merging the multiple upsampled 3D point clouds includes using an iterative closest point process to generate the 3D model point cloud of the object.
- In accordance with an embodiment and any of the previous embodiments, wherein the 2D imaging sensor includes an electro-optical camera to capture a 2D electro-optical image and wherein the 3D imaging sensor includes a 3D Light Detection and Ranging (LiDAR) imaging sensor.
- The features, functions, and advantages that have been discussed can be achieved independently in various embodiments or may be combined in yet other embodiments further details of which can be seen with reference to the following description and drawings.
- FIG. 1A is a block schematic diagram of an example of a system for generating a 3D model point cloud of an object in accordance with an embodiment of the present disclosure.
- FIG. 1B is an illustration of the exemplary system of FIG. 1A showing an example of an image plane and using heterogeneous 2D and 3D sensor fusion in accordance with an embodiment of the present disclosure.
- FIG. 2 is a flow chart of an example of a method for generating a 3D model point cloud of an object in accordance with an embodiment of the present disclosure.
- FIG. 3A is a diagram of an example of a 2D image or a 2D electro-optical (EO) image of the object in accordance with an embodiment of the present disclosure.
- FIG. 3B is a diagram of an example of a 3D image of the object including a point cloud in accordance with an embodiment of the present disclosure.
- FIG. 4A is a diagram illustrating an example of resolution of the 2D image in accordance with an embodiment of the present disclosure.
- FIG. 4B is a diagram illustrating an example of resolution of the 3D image or 3D point cloud compared to the 2D image including pixels in FIG. 4A in accordance with an embodiment of the present disclosure.
- FIG. 4C is a diagram illustrating an example of generating an upsampled 3D point cloud in accordance with an embodiment of the present disclosure.
- FIG. 5 is a diagram illustrating an example of an entropy image of the 2D image in FIG. 3A in accordance with an embodiment of the present disclosure.
- FIGS. 6A and 6B are a flow chart of an example of a method for generating a 3D model point cloud of an object in accordance with another embodiment of the present disclosure.
- FIG. 7 is a block schematic diagram illustrating portions of the exemplary method in FIGS. 6A and 6B.
- The following detailed description of embodiments refers to the accompanying drawings, which illustrate specific embodiments of the disclosure. Other embodiments having different structures and operations do not depart from the scope of the present disclosure. Like reference numerals may refer to the same element or component in the different drawings.
- FIG. 1A is a block schematic diagram of an example of a system 100 for generating a 3D model point cloud 102 of an object 104 in accordance with an embodiment of the present disclosure. The 3D model point cloud 102 may also be referred to herein as simply the 3D model. The system 100 includes a two dimensional (2D) imaging sensor 106 for capturing a 2D image 108 of the object 104. An example of the 2D imaging sensor 106 is an electro-optical camera 109 that generates a high resolution 2D electro-optical image. Other examples of the 2D imaging sensor include any type of device capable of generating a high resolution 2D image. The 2D image 108 includes a plurality of pixels 112 that provide a predetermined high resolution 114. The 2D image 108 also includes a 2D image plane 110 or is referenced in the 2D image plane 110 as illustrated in FIG. 1B.
- The system 100 also includes a 3D imaging sensor 116 for capturing a 3D image 118 of the object 104. The 3D image 118 of the object 104 includes a 3D point cloud 120. The 3D point cloud 120 includes a multiplicity of points 122, and the 3D point cloud 120 includes a plurality of missing points 124 or holes that render the 3D point cloud 120 unusable as a 3D model point cloud for performing an operation with respect to the object 104. Each of the points 122 of the 3D point cloud 120 includes corresponding location information or XYZ coordinate information of a corresponding location 126 on a surface 128 of the object 104. Accordingly, the 3D point cloud 120 includes or defines a 3D depth map 130 that includes the location or coordinate information of locations 126 or points on the object 104. Therefore, each of the points 122 of the 3D point cloud 120 includes depth information 132 or location information of the corresponding location 126 or point on the object 104. The depth information 132 includes depth values 133. The 3D image 118 or 3D point cloud 120 includes a resolution 134 that is less than the resolution 114 of the 2D image 108 as also illustrated and described with reference to FIGS. 4A and 4B. In accordance with an example, the 3D imaging sensor 116 is a Light Detection and Ranging (LiDAR) imaging sensor 135 or similar imaging sensor capable of generating a 3D image or 3D point cloud as described herein.
- In accordance with an embodiment, the 2D imaging sensor 106 and the 3D imaging sensor 116 are associated with or mounted to a sensor platform 136. The sensor platform 136 is configured to move to different viewpoints 138 or locations to capture 2D images 108 and corresponding 3D images 118 at the different viewpoints 138 or locations of the sensor platform 136. The multiple 2D images 108 and multiple 3D images 118 captured at different viewpoints 138 or locations of the sensor platform 136 are stored in a memory device 140. As described in more detail herein, the 2D image 108 data and the 3D image 118 data are combined or fused for each viewpoint 138 or location of the sensor platform 136, and the data for each of the viewpoints are combined or fused to generate the 3D model point cloud 102 or updated 3D model point cloud. In accordance with an embodiment, the sensor platform is a vehicle, such as a spacecraft or other type of vehicle.
- The system 100 also includes an image processing system 142. In at least one example, the memory device 140 is a component of the image processing system 142. The image processing system 142 includes a processor 144. The image processing system 142 or processor 144 is configured for generating a 3D model point cloud 102 of the object 104. The 3D model point cloud 102 is used to perform an operation on the object 104. The 3D model point cloud 102 is actually an updated 3D model point cloud that is generated by sensor data fusion or combining the 2D image 108 and the 3D image 118 at the same viewpoint 138 using entropy-based upsampling as described in more detail with reference to FIG. 4C. Accordingly, the image processing system 142 or processor 144 is configured to perform a set of functions 145 including, in block 146, sensor data fusion from the 2D imaging sensor 106 and the 3D imaging sensor 116 or combining the 2D image 108 and a corresponding 3D image 118 at the same viewpoint 138. An upsampled 3D point cloud 148 is generated from the 3D image by upsampling using local entropy data 149 of pixels 112 of the 2D image 108 within a predefined upsampling window 402 in FIG. 4C. An example of using a predefined upsampling window 402 for upsampling using local entropy data 149 of the pixels 112 will be described in more detail with reference to FIG. 4C. The upsampled 3D point cloud 148 includes filled points 150 for at least selected missing points 124 or holes in the 3D point cloud 120 of the 3D image 118. Multiple upsampled 3D point clouds 148 are generated from different viewpoints 138 or locations of the sensor platform 136.
- In block 152, registration of multiple point clouds 148 is performed using only selected points from the point clouds 148 and using entropy-based upsampling. Points 122 that have lower entropies compared to other points 122 are selected because there is more certainty or confidence in the depth information 132 or location information of the points 122 with lower entropies than in that of points 122 with higher entropies. An example of multiple point cloud registration is described in more detail in U.S. Pat. No. 9,972,067 and is briefly described with reference to FIG. 1B.
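By way of a non-limiting sketch, selecting the lower-entropy (higher-confidence) points of an upsampled cloud before registration might look like the following; the keep fraction and all names are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def select_low_entropy_points(points, entropies, keep_fraction=0.5):
    """Keep the fraction of points whose local entropy is lowest.

    points        : (N, 3) array of XYZ coordinates.
    entropies     : (N,) array of per-point local entropy values.
    keep_fraction : fraction of points to retain (hypothetical default).
    """
    threshold = np.quantile(entropies, keep_fraction)
    mask = entropies <= threshold
    return points[mask]

# Example: keep the most confident half of an upsampled cloud before registration.
# reliable_points = select_low_entropy_points(cloud_xyz, cloud_entropy, 0.5)
```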
- In block 154, point quantization or subsampling is performed using color information from the 2D image 108 to filter the registered upsampled 3D point clouds 148 and generate an updated 3D model point cloud or 3D model point cloud 102 of the object 104. In accordance with an embodiment, point quantization or subsampling is performed by selecting every Nth point in the entire set of points 122 and removing the other points, which reduces the total number of points 122 in the 3D point cloud 120. Another (more computationally expensive) way of quantization is a grid-based method, in which the 3D space is divided into a 3D cubic grid and a centroid point is calculated from all the points 122 within every unit grid cell. These centroid points are the quantized version of the original 3D point cloud 120. Accordingly, the multiple upsampled 3D point clouds 148 are merged to generate the updated 3D model point cloud or 3D model point cloud 102 of the object 104. The updated 3D model point cloud 102 is used in block 152 for registration with other 3D point clouds.
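Both quantization approaches described above can be sketched as follows; this is illustration only, and the decimation factor and grid cell size are arbitrary values rather than values taken from the disclosure.

```python
import numpy as np

def subsample_every_nth(points, n):
    """Every-Nth-point quantization: keep points 0, n, 2n, ... and drop the rest."""
    return points[::n]

def voxel_grid_centroids(points, cell_size):
    """Grid-based quantization: one centroid per occupied cubic grid cell."""
    # Integer cell index for each point along x, y, z.
    cells = np.floor(points / cell_size).astype(np.int64)
    # Group points that fall into the same cell and average them.
    _, inverse, counts = np.unique(cells, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# points = np.random.rand(10000, 3)            # stand-in for a merged point cloud
# coarse = subsample_every_nth(points, 10)     # cheap: keeps every 10th point
# grid   = voxel_grid_centroids(points, 0.05)  # costlier: one centroid per 5 cm cell
```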
- In block 156, the updated 3D model point cloud 102 is used for sensor platform pose estimation. The sensor platform pose estimation provides the relative distance and orientation of the sensor platform 136 with respect to the object 104, which can be used for approaching, manipulation, or rendezvous.
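For illustration only, the relative distance and orientation can be recovered from a 4x4 platform-to-object homogeneous transform as sketched below; the ZYX Euler-angle convention and the function name are assumptions and are not part of the disclosure.

```python
import numpy as np

def relative_pose_from_transform(H):
    """Split a 4x4 platform-to-object transform into range and orientation.

    Returns the relative distance to the object and roll/pitch/yaw angles
    (ZYX convention, radians) for use in approach or rendezvous guidance.
    """
    R, t = H[:3, :3], H[:3, 3]
    distance = float(np.linalg.norm(t))
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return distance, roll, pitch, yaw
```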
- In block 158, the sensor platform 136 is controlled using the sensor platform pose estimation 156 and/or the 3D model point cloud 102 of the object 104. The 3D model point cloud 102 is used to perform an operation with respect to the object 104. Examples of controlling the sensor platform 136 and/or performing an operation with respect to the object 104 include, but are not necessarily limited to, performing an autonomous rendezvous between the sensor platform 136 or space vehicle and the object 104, performing a proximity maneuver by the sensor platform 136 relative to the object 104, performing a docking maneuver between the sensor platform 136 or space vehicle and the object 104 which is another space vehicle, or generating a 3D image of the object 104 which is a space object, such as an asteroid or other space object.
- FIG. 1B is an illustration of the exemplary system of FIG. 1A showing an example of an image plane and using heterogeneous 2D and 3D sensor fusion in accordance with an embodiment of the present disclosure. In order to synchronize or align the 3D point cloud 120 of the 3D image 118 with the 2D image 108, a calibration procedure between the 3D point cloud 120 and the 2D image 108 is performed by the processor 144 or image processing system 142 to determine the relative poses of the 3D point cloud 120 and the 2D image 108. Synchronizing or aligning the 3D point cloud 120 of the 3D image 118 with the 2D image involves fusing or combining the data from the 2D imaging sensor 106 and the 3D imaging sensor 116 for the same viewpoint 138 or location of the sensor platform 136, which is also referred to as heterogeneous 2D and 3D sensor fusion. In one aspect, the image processing system 142 is configured to determine a feature point 160A of the object 104 within the 3D point cloud 120 as well as a corresponding pixel location 162 in the image plane 110 of the 2D image 108 which corresponds to the feature point 160A, as shown in FIG. 1A. As can be seen, the feature point 160A corresponds to the pixel location 162 on the image plane 110 (e.g., the two-dimensional projection of a 3D scene onto a two-dimensional image captured by the 2D imaging sensor 106). In one aspect, the image processing system 142 or processor 144 is also configured to determine a predetermined number of feature points 160A-C (or common points) in the 3D point cloud 120 and the pixel locations corresponding to each respective feature point 160A-C on the image plane 110 of the 2D image 108 captured by the 2D imaging sensor 106. For the purposes of this application, two or more pairs of feature points 160A-C in the 3D point cloud 120 and corresponding pixel locations on the image plane 110 are determined by the image processing system 142 or processor 144. Since each of the feature points 160A-C in the 3D point cloud 120 (e.g., the 3D depth map 130) provides a 3D location (e.g., provides a depth value of the point), the 3D point cloud 120 provides the image processing system 142 or processor 144 with the depth value or 3D location information for each of the pixels 112 corresponding to the feature points 160A-C on the image plane 110.
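For illustration only, a minimal pinhole-camera sketch of projecting 3D feature points onto the 2D image plane is shown below; the intrinsic matrix K and the relative pose (R, t) are hypothetical stand-ins for the calibration result described above, not values from the disclosure.

```python
import numpy as np

def project_to_image_plane(points_xyz, R, t, K):
    """Project 3D feature points into the 2D image plane with a pinhole model.

    points_xyz : (N, 3) feature points expressed in the 3D sensor frame.
    R, t       : rotation (3x3) and translation (3,) from the 3D sensor frame
                 to the 2D camera frame (output of the calibration procedure).
    K          : 3x3 camera intrinsic matrix of the 2D imaging sensor.
    Returns (N, 2) pixel locations and (N,) depth values along the optical axis.
    """
    cam = points_xyz @ R.T + t            # transform into the camera frame
    uvw = cam @ K.T                       # apply intrinsics
    pixels = uvw[:, :2] / uvw[:, 2:3]     # perspective divide
    return pixels, cam[:, 2]

# Hypothetical intrinsics for a 640x480 image with a 1000-pixel focal length:
# K = np.array([[1000., 0., 320.], [0., 1000., 240.], [0., 0., 1.]])
# pixels, depths = project_to_image_plane(feature_points, R, t, K)
```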
- FIG. 2 is a flow chart of an example of a method 200 for generating a 3D model point cloud of an object in accordance with an embodiment of the present disclosure. In accordance with an embodiment, the method 200 is embodied in and performed by the system 100 in FIGS. 1A and 1B. For example, the set of functions 145 includes the method 200. As described herein, generating the 3D model point cloud includes generating the 3D model point cloud using heterogeneous 2D and 3D sensor fusion in that data from the 2D imaging sensor 106 is combined or fused with data from the 3D imaging sensor 116 for the same viewpoint 138 or location of the sensor platform 136.
- In block 202, a 2D image of the object is captured by a 2D imaging sensor, such as the 2D imaging sensor 106 in FIG. 1A. In accordance with an example, the 2D imaging sensor is an electro-optical camera and the 2D image is a 2D electro-optical (EO) image of the object. The 2D image includes a 2D image plane, for example, the 2D image plane 110 in FIG. 1B. Referring also to FIG. 3A, FIG. 3A is a diagram of an example of a 2D image 108 of the object 104 in accordance with an embodiment of the present disclosure.
- In block 204, a 3D image of the object is captured by a 3D imaging sensor, such as the 3D imaging sensor 116 in FIG. 1A. In accordance with an example, the 3D imaging sensor is a Light Detection and Ranging (LiDAR) imaging sensor or similar device for capturing the 3D image of the object. Referring also to FIG. 3B, FIG. 3B is a diagram of an example of a 3D image 118 of the object 104 including a 3D point cloud 120 in accordance with an embodiment of the present disclosure. The 3D point cloud 120 includes a multiplicity of points 122 (FIG. 1A) and includes a plurality of missing points 124 (FIG. 1A) or holes in the 3D point cloud that render the point cloud unusable for performing an operation with respect to the object. As previously described, each of the points 122 in the 3D point cloud 120 includes 3D location information or XYZ coordinate information for a corresponding location 126 or point on the surface 128 of the object 104. As such, each point 122 in the 3D point cloud 120 includes depth information 132 associated with the corresponding location 126 or point on the object 104, and the 3D point cloud 120 includes or defines a 3D depth map 130, wherein each of the points 122 of the 3D point cloud 120 includes depth information 132 of a corresponding location 126 on the object 104.
- In block 206, the 2D image and the 3D image are aligned. In accordance with an example, the 2D image and the 3D image are aligned using pre-acquired calibration information. The pre-acquired calibration information includes parameters for scale difference, translation offset, and rotation offset. An example of a procedure for aligning the 2D image and the 3D image is described in more detail in U.S. Pat. No. 9,972,067.
- In block 208, a depth value is assigned from selected points 122 of the 3D depth map 130 of the 3D point cloud 120 to a matching or corresponding pixel 112 on the 2D image plane 110. In another embodiment, a depth value is assigned from each of a predetermined number of points 122 in the 3D point cloud 120 of the 3D image 118 to respective matching or corresponding pixels 112 on the 2D image plane 110.
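A minimal sketch of this assignment step is shown below, assuming the 3D points have already been projected to (u, v) pixel locations as in the previous sketch; the function and array names are hypothetical.

```python
import numpy as np

def build_sparse_depth_image(pixels, depths, height, width):
    """Assign projected LiDAR depths to their nearest pixels on the 2D image plane.

    pixels : (N, 2) projected (u, v) pixel locations of 3D points.
    depths : (N,) depth values of those points.
    Returns a (height, width) array with a depth where a point landed and NaN elsewhere.
    """
    depth_image = np.full((height, width), np.nan)
    u = np.round(pixels[:, 0]).astype(int)
    v = np.round(pixels[:, 1]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    # When several points map to the same pixel, keep the closest depth
    # (iterate from farthest to nearest so the last write wins).
    for ui, vi, d in sorted(zip(u[inside], v[inside], depths[inside]),
                            key=lambda s: -s[2]):
        depth_image[vi, ui] = d
    return depth_image
```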
- In block 210, depth values for other pixels 113 (FIG. 4C) with no depth value in the 2D image 108 or on the 2D image plane 110 are interpolated from the 3D point cloud 120 of the 3D image 118. Referring also to FIGS. 4A-4C, FIG. 4A is a diagram illustrating an example of the resolution of the 2D image 108 in accordance with an embodiment. FIG. 4B is a diagram illustrating an example of the resolution of the 3D point cloud 120 of the 3D image 118 compared to the resolution of the 2D image 108 in FIG. 4A in accordance with an embodiment of the present disclosure. The 2D image 108 or electro-optic image has a much higher resolution than the 3D image 118 or LiDAR image. FIG. 4C is a diagram illustrating an example of generating an upsampled 3D point cloud 148 (FIG. 1A) in accordance with an embodiment of the present disclosure. Interpolating depth values for other pixels 113 of the 2D image 108 or the 2D image plane 110 with no depth values includes using a predefined upsampling window 402 around a currently processed pixel 113, enclosed in circle 404, and performing upsampling using the entropy data of the pixels 112 of the 2D image 108 within the upsampling window 402 that have assigned depth values from the 3D depth map 130 of the 3D image 118. The entropy data of the pixels 112 within the upsampling window 402 defines the local entropies of the 2D image 108. In one aspect, the entropy data of the 2D image 108 is a measure of the variance in pixel levels of an electro-optical pixel within the 2D image 108 relative to its neighboring pixels. For example, the entropy of an image can be represented as the degree of change or noise between one pixel and its neighboring pixels. In one aspect, regions with relatively low entropy represent regions of substantially uniform surfaces or smooth features. Regions of an image with high entropy represent regions of substantial variation between neighboring pixels within the image, which indicates high noise and/or high variability in the surface (e.g., resulting in an irregular surface). An entropy image 502 of the 2D image 108 or electro-optic (EO) image (FIG. 3A) is shown in FIG. 5.
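One simple way to compute such a local entropy value for a single pixel is sketched below; the window size, bin count, and 8-bit intensity range are illustrative assumptions rather than values from the disclosure.

```python
import numpy as np

def local_entropy(gray, row, col, window=9, bins=32):
    """Shannon entropy of the pixel levels inside a square window around (row, col).

    gray   : 2D array of image intensities (assumed 0-255).
    window : side length of the upsampling window (hypothetical size).
    """
    half = window // 2
    patch = gray[max(0, row - half):row + half + 1,
                 max(0, col - half):col + half + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Low values indicate a smooth, uniform surface around the pixel; high values
# indicate strong local variation (edges, texture, or noise).
```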
- Interpolating the depth values includes using the assigned depth values of pixels 112 neighboring the currently processed pixel 113 to determine a probable depth value for the currently processed pixel 113. An example of interpolating depth values for pixels 113 without an assigned depth value using neighboring pixels 112 that have an assigned depth value is described in U.S. Pat. No. 9,972,067, which is incorporated herein by reference.
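The exact interpolation procedure is defined in U.S. Pat. No. 9,972,067; the sketch below is only an illustrative entropy-weighted variant of neighbor-based interpolation, with hypothetical names and weighting, and is not the patented procedure.

```python
import numpy as np

def interpolate_depth(depth_win, entropy_win):
    """Estimate a probable depth for the centre pixel of an upsampling window.

    depth_win   : 2D window of assigned depth values (NaN where no depth exists).
    entropy_win : 2D window of local entropies for the same pixels.
    Neighbours with assigned depths are combined, weighting low-entropy
    (more confident) neighbours more heavily.
    """
    has_depth = ~np.isnan(depth_win)
    if not has_depth.any():
        return np.nan
    weights = 1.0 / (1.0 + entropy_win[has_depth])   # hypothetical weighting scheme
    return float(np.sum(weights * depth_win[has_depth]) / np.sum(weights))
```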
- In block 212, an upsampled 3D point cloud 148 is generated by the processor from the 3D image 118 by upsampling the 3D point cloud 120 using local entropy data 149 of pixels 112 within the upsampling window 402. The upsampled 3D point cloud 148 includes filled points 150 for at least selected missing points 124 or holes in the 3D point cloud 120.
- In block 214, multiple upsampled 3D point clouds 148 are generated from different viewpoints 138 or locations of the sensor platform 136 using entropy-based upsampling. In block 216, the multiple upsampled 3D point clouds 148 are matched or registered and merged to generate the 3D model point cloud 102 of the object 104. In accordance with an embodiment, matching and merging the multiple upsampled 3D point clouds 148 includes performing registration and quantization or subsampling of the multiple upsampled 3D point clouds 148 using an iterative closest point process to generate the 3D model point cloud 102 or updated 3D model point cloud of the object 104. An example of an iterative closest point process is described in "Efficient Variants of the ICP Algorithm" by Szymon Rusinkiewicz et al., 3-D Digital Imaging and Modeling, 2001. Similar to that previously described, point quantization or subsampling is performed by selecting every Nth point in the entire set of points 122 and removing the other points 122, which reduces the total number of points 122 in the 3D point cloud 120. Another (more computationally expensive) way of quantization is a grid-based method, where the 3D space is divided into a 3D cubic grid and a centroid point is calculated from all the points 122 within every unit grid cell. These centroid points are the quantized version of the original 3D point cloud 120.
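For reference, a minimal point-to-point ICP sketch (nearest-neighbor correspondences followed by an SVD-based rigid fit) is shown below; it is a simplified stand-in for the variants discussed by Rusinkiewicz et al., not the specific process of the disclosure.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Minimal point-to-point ICP: returns a 4x4 homogeneous transform that
    aligns `source` (N, 3) to `target` (M, 3)."""
    src = source.copy()
    H = np.eye(4)
    tree = cKDTree(target)
    for _ in range(iterations):
        # 1. Closest-point correspondences.
        _, idx = tree.query(src)
        tgt = target[idx]
        # 2. Best rigid transform for these correspondences (Kabsch / SVD).
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (tgt - mu_t))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply the step and accumulate it into the overall transform.
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3] = R
        step[:3, 3] = t
        H = step @ H
    return H
```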
- In block 218, an operation is performed on the object 104 or with respect to the object 104 using the 3D model point cloud 102, or the final updated 3D model point cloud once all 2D images and corresponding 3D images at all viewpoints 138 or locations of the sensor platform 136 have been captured and merged as described herein. Examples of the operation on the object 104 or with respect to the object 104 include, but are not necessarily limited to, performing an autonomous space rendezvous with another object or spacecraft; performing a proximity maneuver with respect to another object or spacecraft; performing a docking maneuver with respect to another object or spacecraft; or generating the 3D model point cloud of the object, wherein the object is a space object, such as an asteroid or other object in space.
- Referring now to FIGS. 6A, 6B, and 7, FIGS. 6A and 6B are a flow chart of an example of a method 600 for generating a 3D model point cloud of an object in accordance with another embodiment of the present disclosure. FIG. 7 is a block schematic diagram illustrating portions of the exemplary method 600 in FIGS. 6A and 6B. In accordance with an embodiment, the method 600 is embodied in and performed by the system 100 in FIGS. 1A and 1B. For example, the set of functions 145 includes the method 600. As described herein, generating the 3D model point cloud includes generating the 3D model point cloud using heterogeneous 2D and 3D sensor fusion in that data from the 2D imaging sensor 106 is combined or fused with data from the 3D imaging sensor 116 for the same viewpoint 138 or location of the sensor platform 136, and this is done for each viewpoint 138 or location of the sensor platform 136.
- In block 602, a 2D image of the object is captured by a 2D imaging sensor and a 3D image of the object is captured by a 3D imaging sensor for a current sensor platform viewpoint or location relative to the object. The 2D image and the 3D image may be stored in a memory device, such as the memory device 140 in FIG. 1A. The 2D image includes a 2D image plane. The 3D image of the object includes a 3D point cloud. The 3D point cloud includes a multiplicity of points and includes a plurality of missing points or holes in the 3D point cloud that render the 3D point cloud unusable for performing an operation with respect to the object.
- In block 604, an upsampled 3D point cloud 702 (FIG. 7) is generated from the 3D image or 3D point cloud 700 using local entropy data of the 2D image, similar to that previously described, for the current viewpoint or location of the sensor platform, to fill in missing points or holes in the 3D point cloud.
- In block 606, a determination is made whether the current viewpoint or location of the sensor platform is a first viewpoint or location. If the determination is made that this is the first viewpoint or location of the sensor platform, the method 600 advances to block 614. In block 614, the current 3D model point cloud 704 (FIG. 7) at the first viewpoint or first iteration is empty. As described in more detail herein, for subsequent viewpoints or iterations, a current 3D model point cloud 704, which is the updated 3D model point cloud 712 from a previous viewpoint of the sensor platform, and the upsampled 3D point cloud 702 are merged 706 (FIG. 7) to create a new 3D model point cloud 708. In block 616, the new 3D model point cloud 708 is quantized 710 or subsampled to generate an updated 3D model point cloud 712 (M_K).
- In block 618, a determination is made whether all viewpoints or locations of the sensor platform have been completed. If not, the method 600 advances to block 620. In block 620, the sensor platform is moved to the next viewpoint or location, and the method 600 returns to block 602 and proceeds similarly to that previously described. Accordingly, the process or method 600 is repeated until an updated 3D model point cloud 712 has been determined for all viewpoints or desired sensor platform locations. In block 602, a subsequent 2D image of the object is captured by the 2D imaging sensor at the current viewpoint or location of the sensor platform and a subsequent 3D image of the object is captured by the 3D imaging sensor at the current viewpoint or location. The subsequent 3D image of the object includes a subsequent 3D point cloud including missing points or holes.
- In block 604, a current upsampled 3D point cloud 702 (FIG. 7) for the current viewpoint or location is generated from the subsequent 3D image, using local entropy data from the subsequent 2D image, to fill at least some of the plurality of missing points or holes in the subsequent 3D point cloud.
- In block 606, if the determination is made that this is not the first viewpoint or location of the sensor platform, the method 600 advances to block 608. In block 608, the current 3D model point cloud 704 (FIG. 7), which is the updated 3D model point cloud 712 from a previous viewpoint or location (Frame K−1) of the sensor platform, is registered 714 with the original points of the subsequent 3D point cloud (3D depth map) 716 at the current viewpoint or location (Frame K) of the sensor platform, without entropy-based upsampling.
- In block 610, a homogeneous transform (H_K) 718 (FIG. 7) is determined from the current 3D model point cloud 704, which is the updated 3D model point cloud 712 (M_K−1) from the previous viewpoint or location of the sensor platform, and the original points of a current 3D point cloud 716 at the current viewpoint or location without entropy-based upsampling (PC_ORG^K), using an iterative closest point process according to Equation 1:
- H_K = ICP(PC_ORG^K, M_K−1)     (Equation 1)
- In block 612, the current (Frame K) upsampled 3D point cloud 702 is transformed into a new 3D model point cloud 708 (aligned coordinate frame) using the homogeneous transform according to Equation 2:
- M_NEW^K = H_K · PC_UP^K     (Equation 2), where PC_UP^K denotes the upsampled 3D point cloud 702 at the current viewpoint (Frame K) and M_NEW^K denotes the new 3D model point cloud 708 in the aligned coordinate frame.
- The current upsampled 3D point cloud 702 is adjusted, using the homogeneous transform 718, to align or coordinate with the current 3D model point cloud 704, which is the updated 3D model point cloud 712 from the previous viewpoint or location of the sensor platform, before merging 706.
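A minimal sketch of applying the 4x4 homogeneous transform of Equation 2 to the upsampled cloud is shown below; the function names are hypothetical, and the `icp` helper refers to the earlier illustrative sketch rather than to the process of the disclosure.

```python
import numpy as np

def apply_homogeneous_transform(H, points):
    """Transform an (N, 3) point cloud with a 4x4 homogeneous transform H,
    e.g. aligning the current upsampled cloud with the model from the
    previous viewpoint before merging."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ H.T)[:, :3]

# H_k     = icp(original_points_k, model_k_minus_1)        # Equation 1 (sketch)
# aligned = apply_homogeneous_transform(H_k, upsampled_k)  # Equation 2 (sketch)
```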
- In block 614, the updated 3D model point cloud 712 created at the previous viewpoint or location of the sensor platform, which is the current 3D model point cloud 704, and the current upsampled 3D point cloud 702 are merged 706 to create a current new 3D model point cloud 708.
- In block 616, the current new 3D model point cloud 708 is quantized 710 to generate a current updated 3D model point cloud 712 for the current viewpoint or location of the sensor platform, similar to that previously described.
- As previously described, in block 618, a determination is made whether the 3D modeling process has been performed for all viewpoints or locations of the sensor platform. If so, the method 600 advances to block 622. In block 622, an operation is performed with respect to the object, or an operation is performed on the object, using a final updated 3D model point cloud 712 determined at the last viewpoint or location of the sensor platform. Examples of the operations include, but are not necessarily limited to, performing an autonomous space rendezvous with another object or spacecraft; performing a proximity maneuver relative to another object or spacecraft; performing a docking maneuver with another object or spacecraft; or generating the 3D model of the object. In accordance with an embodiment, the object is a space object, such as an asteroid, spacecraft, or other space object.
- The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
- The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “include,” “includes,” “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
- The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present embodiments has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to embodiments in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of embodiments.
- Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art appreciate that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiments shown and that the embodiments have other applications in other environments. This application is intended to cover any adaptations or variations. The following claims are in no way intended to limit the scope of embodiments of the disclosure to the specific embodiments described herein.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/157,012 US10614579B1 (en) | 2018-10-10 | 2018-10-10 | Three dimensional model generation using heterogeneous 2D and 3D sensor fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/157,012 US10614579B1 (en) | 2018-10-10 | 2018-10-10 | Three dimensional model generation using heterogeneous 2D and 3D sensor fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
US10614579B1 US10614579B1 (en) | 2020-04-07 |
US20200118281A1 true US20200118281A1 (en) | 2020-04-16 |
Family
ID=70056408
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/157,012 Active US10614579B1 (en) | 2018-10-10 | 2018-10-10 | Three dimensional model generation using heterogeneous 2D and 3D sensor fusion |
Country Status (1)
Country | Link |
---|---|
US (1) | US10614579B1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11461963B2 (en) * | 2018-11-16 | 2022-10-04 | Uatc, Llc | Systems and methods for generating synthetic light detection and ranging data via machine learning |
US11544167B2 (en) | 2019-03-23 | 2023-01-03 | Uatc, Llc | Systems and methods for generating synthetic sensor data via machine learning |
US20230041814A1 (en) * | 2021-08-06 | 2023-02-09 | Lenovo (Singapore) Pte. Ltd. | System and method for demonstrating objects at remote locations |
US20230356394A1 (en) * | 2020-07-23 | 2023-11-09 | Chun Man Anthony LIN | Robot Arm Control Method and Skin Surface Treatment Apparatus |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11430148B2 (en) * | 2016-12-28 | 2022-08-30 | Datalogic Ip Tech S.R.L. | Apparatus and method for pallet volume dimensioning through 3D vision capable unmanned aerial vehicles (UAV) |
KR102334641B1 (en) * | 2019-01-30 | 2021-12-03 | 바이두닷컴 타임즈 테크놀로지(베이징) 컴퍼니 리미티드 | Map Partitioning System for Autonomous Vehicles |
US10861175B1 (en) * | 2020-05-29 | 2020-12-08 | Illuscio, Inc. | Systems and methods for automatic detection and quantification of point cloud variance |
US11212503B1 (en) * | 2020-07-14 | 2021-12-28 | Microsoft Technology Licensing, Llc | Dual camera HMD with remote camera alignment |
KR102237451B1 (en) * | 2020-10-05 | 2021-04-06 | 성현석 | Apparatus for evaluating safety of cut-slopes |
CN112950689B (en) * | 2021-02-07 | 2024-04-16 | 南京航空航天大学 | Three-dimensional characterization method based on information entropy |
US11055428B1 (en) | 2021-02-26 | 2021-07-06 | CTRL IQ, Inc. | Systems and methods for encrypted container image management, deployment, and execution |
CN113470002B (en) * | 2021-07-22 | 2023-11-10 | 中国科学院空天信息创新研究院 | Chromatography SAR three-dimensional point cloud reconstruction quality evaluation method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090141966A1 (en) * | 2007-11-30 | 2009-06-04 | Microsoft Corporation | Interactive geo-positioning of imagery |
US20120327187A1 (en) * | 2011-06-22 | 2012-12-27 | The Boeing Company | Advanced remote nondestructive inspection system and process |
US20150287211A1 (en) * | 2014-04-04 | 2015-10-08 | Hrl Laboratories Llc | Method for classification and segmentation and forming 3d models from images |
US9858640B1 (en) * | 2015-07-15 | 2018-01-02 | Hrl Laboratories, Llc | Device and method for merging 3D point clouds from sparsely distributed viewpoints |
US20190004534A1 (en) * | 2017-07-03 | 2019-01-03 | Baidu Usa Llc | High resolution 3d point clouds generation from upsampled low resolution lidar 3d point clouds and camera images |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6862366B2 (en) | 2001-09-13 | 2005-03-01 | Seiko Epson Corporation | Techniques for scratch and date removal from scanned film |
US7271840B2 (en) | 2001-10-31 | 2007-09-18 | Intel Corporation | Method for determining entropy of a pixel of a real time streaming digital video image signal, and applications thereof |
US7187809B2 (en) | 2004-06-10 | 2007-03-06 | Sarnoff Corporation | Method and apparatus for aligning video to three-dimensional point clouds |
KR20130047822A (en) | 2011-11-01 | 2013-05-09 | 삼성전자주식회사 | Image processing apparatus and method |
US20140376821A1 (en) | 2011-11-07 | 2014-12-25 | Dimensional Perception Technologies Ltd. | Method and system for determining position and/or orientation |
US9811880B2 (en) | 2012-11-09 | 2017-11-07 | The Boeing Company | Backfilling points in a point cloud |
US9449227B2 (en) | 2014-01-08 | 2016-09-20 | Here Global B.V. | Systems and methods for creating an aerial image |
US9280825B2 (en) | 2014-03-10 | 2016-03-08 | Sony Corporation | Image processing system with registration mechanism and method of operation thereof |
US9772405B2 (en) | 2014-10-06 | 2017-09-26 | The Boeing Company | Backfilling clouds of 3D coordinates |
US9972067B2 (en) | 2016-10-11 | 2018-05-15 | The Boeing Company | System and method for upsampling of sparse point cloud for 3D registration |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090141966A1 (en) * | 2007-11-30 | 2009-06-04 | Microsoft Corporation | Interactive geo-positioning of imagery |
US20120327187A1 (en) * | 2011-06-22 | 2012-12-27 | The Boeing Company | Advanced remote nondestructive inspection system and process |
US9182487B2 (en) * | 2011-06-22 | 2015-11-10 | The Boeing Company | Advanced remote nondestructive inspection system and process |
US20150287211A1 (en) * | 2014-04-04 | 2015-10-08 | Hrl Laboratories Llc | Method for classification and segmentation and forming 3d models from images |
US9858640B1 (en) * | 2015-07-15 | 2018-01-02 | Hrl Laboratories, Llc | Device and method for merging 3D point clouds from sparsely distributed viewpoints |
US20190004534A1 (en) * | 2017-07-03 | 2019-01-03 | Baidu Usa Llc | High resolution 3d point clouds generation from upsampled low resolution lidar 3d point clouds and camera images |
Non-Patent Citations (1)
Title |
---|
Bradley Skinner, "3D Point Cloud Upsampling for Accurate Reconstruction of Dense 2.5D Thickness Maps" (Year: 2014) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11461963B2 (en) * | 2018-11-16 | 2022-10-04 | Uatc, Llc | Systems and methods for generating synthetic light detection and ranging data via machine learning |
US11734885B2 (en) | 2018-11-16 | 2023-08-22 | Uatc, Llc | Systems and methods for generating synthetic light detection and ranging data via machine learning |
US11544167B2 (en) | 2019-03-23 | 2023-01-03 | Uatc, Llc | Systems and methods for generating synthetic sensor data via machine learning |
US11797407B2 (en) | 2019-03-23 | 2023-10-24 | Uatc, Llc | Systems and methods for generating synthetic sensor data via machine learning |
US20230356394A1 (en) * | 2020-07-23 | 2023-11-09 | Chun Man Anthony LIN | Robot Arm Control Method and Skin Surface Treatment Apparatus |
US20230041814A1 (en) * | 2021-08-06 | 2023-02-09 | Lenovo (Singapore) Pte. Ltd. | System and method for demonstrating objects at remote locations |
Also Published As
Publication number | Publication date |
---|---|
US10614579B1 (en) | 2020-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10614579B1 (en) | Three dimensional model generation using heterogeneous 2D and 3D sensor fusion | |
US10699476B2 (en) | Generating a merged, fused three-dimensional point cloud based on captured images of a scene | |
KR102402494B1 (en) | Motion compensation of geometric information | |
US9972067B2 (en) | System and method for upsampling of sparse point cloud for 3D registration | |
EP2806396B1 (en) | Sparse light field representation | |
US11290704B2 (en) | Three dimensional scanning system and framework | |
US9786062B2 (en) | Scene reconstruction from high spatio-angular resolution light fields | |
US10477178B2 (en) | High-speed and tunable scene reconstruction systems and methods using stereo imagery | |
KR20210119417A (en) | Depth estimation | |
US20180276793A1 (en) | Autonomous performance of an operation on an object using a generated dense 3d model of the object | |
JP2018101408A (en) | System and method for image processing | |
CN110276795B (en) | Light field depth estimation method based on splitting iterative algorithm | |
Alidoost et al. | An image-based technique for 3D building reconstruction using multi-view UAV images | |
KR102416523B1 (en) | A 3D skeleton generation method using calibration based on joints acquired from multi-view camera | |
JP2021520008A (en) | Vehicle inspection system and its method | |
CN110738731A (en) | 3D reconstruction method and system for binocular vision | |
CN112085849A (en) | Real-time iterative three-dimensional modeling method and system based on aerial video stream and readable medium | |
Alsadik et al. | Efficient use of video for 3D modelling of cultural heritage objects | |
CN115035235A (en) | Three-dimensional reconstruction method and device | |
Ghuffar | Satellite stereo based digital surface model generation using semi global matching in object and image space | |
JP2009530701A (en) | Method for determining depth map from image, apparatus for determining depth map | |
JP4102386B2 (en) | 3D information restoration device | |
EP2879090B1 (en) | Aligning ground based images and aerial imagery | |
Xu et al. | Kinect-based easy 3d object reconstruction | |
Verhoeven | Getting computer vision airborne: using structure from motion for accurate orthophoto production |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: THE BOEING COMPANY, ILLINOIS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KWON, HYUKSEONG;KIM, KYUNGNAM;REEL/FRAME:047127/0662 Effective date: 20181009 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |