US11964762B2 - Collaborative 3D mapping and surface registration - Google Patents
- Publication number: US11964762B2 (application US17/171,544)
- Authority
- US
- United States
- Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion)
Classifications
- B64C39/024—Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
- G06T7/32—Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
- G01S17/42—Simultaneous measurement of distance and other co-ordinates
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/89—Lidar systems specially adapted for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
- G01S17/931—Lidar systems specially adapted for anti-collision purposes of land vehicles
- G01S7/4808—Evaluating distance, position or velocity data
- G06T7/593—Depth or shape recovery from multiple images from stereo images
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06V20/13—Satellite images
- B64U10/13—Flying platforms
- B64U2101/30—UAVs specially adapted for imaging, photography or videography
- B64U2101/32—UAVs specially adapted for imaging, photography or videography for cartography or topography
- G06T2207/10016—Video; Image sequence
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/30181—Earth observation
Definitions
- Some embodiments described herein generally relate to generating a three-dimensional (3D) mapping and registering the generated 3D mapping to a surface.
- Generating a 3D point cloud can be resource intensive.
- the generation of the point cloud can include gathering two-dimensional (2D) images (e.g., satellite imagery, ground imagery (images taken from a camera on the ground), or imagery captured at an elevation therebetween) and performing photogrammetry, performing a light detection and ranging (LIDAR) scan, a human generating a computer-aided design (CAD) drawing, or the like.
- FIG. 1 illustrates, by way of example, a conceptual block diagram of an embodiment of a technique for 3D point cloud registration, such as with error propagation.
- FIG. 2 illustrates, by way of example, a diagram of an embodiment of a technique of performing the operation of the processing circuitry.
- FIG. 3 illustrates, by way of example, a conceptual block diagram of an embodiment of a collaborative system for 3D point cloud generation.
- FIG. 4 illustrates, by way of example, a diagram of an embodiment of a system for generating the registered 3D point cloud.
- FIG. 5 illustrates, by way of example, a diagram of an embodiment of a volume that is being mapped by UAVs.
- FIG. 6 illustrates, by way of example, a diagram of an embodiment of a system for 3D point cloud generation and geo-registration.
- FIG. 7 illustrates, by way of example, a diagram of embodiments of operations that can aid in geo-locating a 3D point cloud.
- FIG. 8 illustrates, by way of an example, an embodiment of a system 800 for 3D point set registration and merging.
- FIG. 9 illustrates an example diagram of an embodiment of the relationship between ground point coordinate estimates V̂_j and corresponding 3D data set observations Ṽ_ij.
- FIG. 10 illustrates an example of an embodiment of a bundle adjustment operation.
- FIG. 11 illustrates, by way of example, a diagram of an embodiment of a method for 3D point set generation and registration.
- FIG. 12 illustrates, by way of example, a block diagram of an embodiment of a machine in the example form of a computer system within which instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
- aspects of embodiments regard improving point cloud generation or registration of the point cloud to a geo-location of the Earth or other surface.
- FIG. 1 illustrates, by way of example, a conceptual block diagram of an embodiment of a technique for 3D point cloud registration, such as with error propagation.
- a system of FIG. 1 includes input 102 provided to processing circuitry 114 .
- the processing circuitry 114 generates output in the form of a 3D point cloud 128 .
- the input 102 can include one or more of an image or video 104 , a light detection and ranging (LIDAR) scan 106 , a three-dimensional (3D) computer-aided drafting (CAD) model 108 , satellite imagery 110 , or other data 112 .
- the input 102 can be processed into two or more 3D point clouds before being provided to the processing circuitry 114 .
- the image, video 104 can include a red, green, blue (RGB), infrared (IR), black and white, grayscale, or other intensity image.
- the image, video 104 can include a video that comprises frames. Photogrammetry can be performed on the data of image, video 104 , such as to generate one of the 3D point clouds.
- Photogrammetry can include performing a geometric bundle adjustment on the two-dimensional (2D) images, to register the geometry of the 2D images to each other.
- the bundle adjustment can adjust geometry of an image of the images to be consistent with geometry of other images of the images.
- the geometry can be defined in metadata, such as by using rational polynomial coefficients (RPC).
- Other image registration techniques are possible.
- 2D images not previously associated with the 3D point cloud can be registered to the 3D point cloud.
- Tie points can be identified between each 2D image and the 3D point cloud.
- the geometry of each 2D image can be adjusted to match the 3D point cloud by using an affine transformation.
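As an illustrative sketch of the adjustment described above, a 2D affine transformation can be fit to tie-point correspondences by least squares (the function and the point values below are assumptions for illustration, not taken from the patent):

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine (A, t) such that dst ~= src @ A.T + t.
    src, dst: (N, 2) arrays of corresponding tie points, N >= 3."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])  # (N, 3) design matrix
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params[:2].T, params[2]  # A is 2x2, t is length-2

# Tie points related by a pure translation of (2, 3):
A, t = fit_affine_2d([[0, 0], [1, 0], [0, 1]], [[2, 3], [3, 3], [2, 4]])
```

With the fitted (A, t), each image coordinate can be mapped into the registered frame; where an affine is too restrictive, a fuller sensor model (e.g., RPC-based) could be substituted.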
- the LIDAR scan 106 can be generated by illuminating a target with a laser light and measuring the reflected light with a sensor. Differences in laser return times and wavelengths can then be used to make one or more of the 3D point clouds, because these differences can be used to determine the distance to the object off of which the light was reflected back to the sensor. LIDAR 3D point clouds often have no intensity information. For the LIDAR case, and others, it can be useful to attribute the 3D point cloud with intensity or color data from an image that covers the same area of interest. Further discussion on this point is provided elsewhere herein.
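The distance determination mentioned above reduces to a time-of-flight relation: the pulse travels to the surface and back, so the range is half the round-trip time multiplied by the speed of light. A minimal sketch (the names are illustrative):

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_time_s: float) -> float:
    """Range to the reflecting surface from a LIDAR return time.
    The pulse travels out and back, hence the division by two."""
    return C * round_trip_time_s / 2.0

r = lidar_range(1e-6)  # a 1-microsecond round trip is roughly 150 m
```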
- the CAD model 108 is a human-designed (e.g., with or without computer aid) 2D or 3D model, such as the blueprint of a building.
- the CAD model 108 is defined by geometrical parameters and readily adjustable by a human using a computer.
- the satellite imagery 110 includes images generated at high altitudes from one or more cameras of a satellite or satellites.
- the satellite image provides a nadir, or near-nadir view of a geographical location.
- the nadir view is of a point on a celestial sphere directly below an observer (e.g., the point on the sphere closest to the observer).
- Other sources 112 can include other man-made measurements (e.g., thermal imaging), computer aided measurements, or other data that can be used to generate a 3D point cloud of the geographical region.
- the processing circuitry 114 can include hardware, software, firmware, or a combination thereof, configured to implement the operations of registering 3D point clouds to each other.
- Hardware can include one or more electric or electronic components configured to provide electrical signals that indicate results of the operation.
- Electric or electronic components can include one or more transistors, resistors, capacitors, diodes, inductors, switches, logic gates (e.g., AND, OR, XOR, negate, buffer, or the like), multiplexers, power supplies, regulators, analog to digital or digital to analog converters, amplifiers, processors (e.g., application specific integrated circuits (ASIC), field programmable gate array (FPGA), graphics processing units (GPUs), central processing units (CPUs), or the like), or the like electrically coupled to perform the operations.
- the operations for 3D point cloud registration can include 2D/3D point cloud conversion and normalization at operation 116 , 3D point cloud geo-registration at operation 118 , and 3D point cloud fusion and adaptive filtering at operation 120 . These operations are discussed in more detail below.
- the operation 118 , in general, can include determining a scale factor 122 adjustment, a rotation 124 adjustment, and a translation 126 adjustment between the 3D point clouds to be registered.
- the operation 118 can include using an iterative, normalized cross-covariance technique that minimizes a least squares difference between tie-points, ground control points (GCPs), or the like. This is discussed in more detail below.
- the result of the registration is a registered 3D point cloud 128 that inherits the best errors (smallest errors) in the 3D point cloud inputs. Again, more detail is provided below.
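The patent describes the solve as an iterative, normalized cross-covariance technique; as an illustrative stand-in, the same scale/rotation/translation can be recovered in closed form from tie points with an SVD of the cross-covariance matrix (a Umeyama-style sketch, not the patent's own solver):

```python
import numpy as np

def similarity_transform(P, Q):
    """Least-squares scale s, rotation R, translation t so that
    s * R @ P[i] + t ~= Q[i], for (N, 3) tie-point arrays P and Q."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mp, Q - mq
    H = Pc.T @ Qc / len(P)                  # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / Pc.var(axis=0).sum()
    t = mq - s * R @ mp
    return s, R, t
```

In noise-free cases this recovers the transform exactly; with noisy tie points it minimizes the least-squares residual, which is the same objective the iterative technique targets.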
- FIG. 2 illustrates, by way of example, a diagram of an embodiment of a technique of performing the operation 118 of the processing circuitry 114 .
- tie points 212 , 214 , 216 , 218 between the various 3D point cloud inputs can be identified.
- the 3D point clouds include the LIDAR scan 106 , the CAD model 108 , the image, video 104 , and the satellite imagery 110 . More or fewer 3D point clouds can be used.
- the tie points 212 , 214 , 216 , 218 are data points that correspond to a same geographic location.
- a corner of a structure, a high elevation point, or the like can make for a good tie point.
- the tie points 212 , 214 , 216 , 218 can be used to determine how to adjust the corresponding 3D point clouds to be registered to generate the 3D point cloud 128 .
- the tie point 220 corresponds to the registered location of the tie points 212 , 214 , 216 , 218 .
- the registered points 222 can include an x value 224 , a y value 226 , a z value 228 , metadata 230 , and source pointers 232 .
- the metadata 230 can include source reference vectors 234 indicating data source (e.g., the image, video 104 , LIDAR scan 106 , CAD model 108 , or satellite imagery 110 , etc.) from which the registered points 222 were determined and the source data 236 of those sources.
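A minimal record layout for such a registered point might look as follows (the field and source names are illustrative assumptions mirroring FIG. 2, not definitions from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class RegisteredPoint:
    x: float
    y: float
    z: float
    metadata: dict = field(default_factory=dict)  # e.g. source reference vectors, color
    sources: list = field(default_factory=list)   # pointers back into originating clouds

p = RegisteredPoint(
    1.0, 2.0, 3.0,
    metadata={"source_refs": ["lidar_106", "satellite_110"]},
    sources=[("lidar_106", 42), ("satellite_110", 7)],  # (cloud id, point index)
)
```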
- a system, device, or method to implement the techniques of FIG. 1 or 2 can provide a hybrid, multi-source, multi-resolution 3D point cloud creation and enrichment through multi-source, multi-modal 3D point cloud ingest and fusion.
- the techniques of FIG. 1 or 2 can provide an ability to create a more comprehensive (as compared to prior techniques), multi-resolution, 3D hybrid point cloud by combining a sparse 3D point cloud with a dense 3D point cloud, a low-resolution 3D point cloud with a high-resolution 3D point cloud or the like.
- the techniques of FIG. 1 or 2 can provide an ability to fill in a missing section of a 3D point cloud, such as by using another 3D point cloud.
- the techniques of FIGS. 1 and 2 can provide an ability to replace or augment noisy/low quality 3D point cloud sections with high fidelity data from other location-relevant 2D and 3D data sources.
- the techniques of FIGS. 1 and 2 can provide an ability to detect errors in and correct a 3D point cloud through multi-3D point cloud cross-correlation and validation.
- a resulting hybrid 3D point cloud can preserve, through metadata, data source lineage that allows users to leverage metadata (e.g., pixel color and intensity, object classifications and dimensions) from fused sources.
- the techniques of FIGS. 1 and 2 provide a hybrid 3D point cloud-derived location intelligence (e.g., detection, localization, and classification of permanent or stationary objects in a scene or environmental conditions (e.g., power or phone lines, radiation areas that may not be detectable by the onboard sensors of an unmanned aerial vehicle (UAV) or an unmanned ground vehicle (UGV))) that can aid in obstacle avoidance and planning of complex mapping operations.
- the techniques of FIGS. 1 and 2 can provide an ability to incorporate fusion criteria, such as trustworthiness of the data source or classification levels.
- the techniques of FIGS. 1 and 2 provide a user-controllable filtering and criteria, such as classification level of a data source or trustworthiness of a data source, quality, or age.
- the techniques of FIGS. 1 and 2 provide an ability to control which areas (e.g., rooms within a building) and/or objects get included or excluded in a resulting, hybrid 3D point cloud.
- the techniques of the FIGS. provide an ability to link other measurements (e.g., temperature, radiation, noise level, humidity) to each data point in the resulting hybrid 3D point cloud (e.g., via a graph, multi-dimensional array, hash table, dictionary, or linked list representation).
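As a sketch of the hash-table option, auxiliary measurements can be keyed on quantized point coordinates so that nearby floating-point values hash identically (the quantization step and measurement names are illustrative):

```python
def point_key(x, y, z, step=0.01):
    """Quantize coordinates to a grid so near-equal floats share a key."""
    return (round(x / step), round(y / step), round(z / step))

measurements = {}  # hash table: quantized point -> linked measurements
measurements[point_key(1.0, 2.0, 3.0)] = {
    "temperature_c": 21.5,
    "radiation_usv_h": 0.12,
}

def lookup(x, y, z):
    return measurements.get(point_key(x, y, z), {})
```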
- the extensible, hybrid cloud source vector 222 allows the resulting hybrid 3D point cloud to also store non-sensitive (e.g., unclassified) and sensitive (e.g., classified) point cloud data in the same cloud.
- Hybrid point cloud access controls, for example, can be enforced by controlling the visibility of the source data vectors 222 , or by encrypting the sensitive source data elements 236 of the hybrid point cloud.
- FIG. 3 illustrates, by way of example, a conceptual block diagram of an embodiment of a system 300 for 3D point cloud generation.
- the 3D point cloud generated using the system 300 can be used as an input 102 to the processing circuitry 114 , for example.
- the system 300 as illustrated includes unmanned aerial vehicles (UAVs) 330 , 332 , 334 , 336 , with imaging devices (indicated by diverging lines 338 ).
- the imaging devices can include intensity or non-intensity imaging devices. Examples of intensity imaging devices include an RGB, grayscale, black and white, infrared, or another camera. Examples of non-intensity imaging devices include LIDAR, or the like.
- the UAVs 330 , 332 , 334 , 336 can be programmed to capture image data of an object 340 or a geographical region of interest.
- the UAVs 330 , 332 , 334 , 336 can cooperatively capture sufficient image data of the object 340 to generate a 3D point cloud of the object 340 .
- Embodiments are not limited to drone-/UAV-collected data. The concepts also apply to UGVs, unmanned vessels, and manual/human-driven data collection methods.
- FIG. 4 illustrates, by way of example, a diagram of an embodiment of a system 400 for generating the registered 3D point cloud 128 .
- the system 400 as illustrated includes 3D mapping scheduler 440 , a collaborative 3D object mapping operation 442 , 3D point clouds 444 from the UAVs 330 , 332 , 334 , 336 , a 3D point cloud database (DB) 446 , the processing circuitry 114 , and the registered 3D point cloud 128 (both from FIG. 1 ).
- the 3D mapping scheduler 440 can command the UAVs 330 , 332 , 334 , 336 what tasks to perform and when.
- the 3D mapping scheduler 440 can change the task or time to perform the task after a mission has begun.
- the UAVs 330 , 332 , 334 , 336 can communicate and alter the task or timing on their own.
- the UAVs 330 , 332 , 334 , 336 can be autonomous or semi-autonomous.
- the 3D mapping scheduler 440 can provide the UAVs 330 , 332 , 334 , 336 with a task that includes a geographic region to be modelled, a resolution of the model to be generated, a technique to be used in generating the model (e.g., color image, satellite image, radiation or temperature scan, LIDAR, etc.), or one or more time constraints in performing the tasks.
- the UAVs 330 , 332 , 334 , 336 can operate to satisfy the schedule and constraints provided by the scheduler 440 . While illustrated as a centralized unit, the scheduler 440 does not need to be a centralized unit.
- the scheduler can be distributed across multiple resources (e.g., UAVs) and run locally/on-board (e.g., as an agent) to provide a distributed dynamic scheduler.
- Such an implementation can include the UAVs 330 , 332 , 334 , 336 communicating and planning tasks among themselves.
- the operations of the UAVs 330 , 332 , 334 , 336 are part of collaborative 3D object mapping operation 442 .
- the result of the 3D mapping operation 442 can be a 3D point cloud 444 from the UAVs.
- the 3D point cloud 444 can be stored in a point cloud database 446 .
- the point cloud database 446 can include a memory device for storing one or more point clouds, such as the 3D point clouds 444 or the registered point cloud 128 .
- the 3D point clouds 444 from the UAVs 330 , 332 , 334 , 336 can be provided to the processing circuitry 114 .
- the processing circuitry 114 can register the point clouds 444 to generate the registered point cloud 128 .
- FIG. 5 illustrates, by way of example, a diagram of an embodiment of a volume that is being mapped by UAVs, such as three of the UAVs 330 , 332 , 334 , 336 .
- part of the volume, as indicated at 502 , is partially mapped.
- the different patterns on voxels or subsections of the volume indicate which UAV has mapped the subsection (if a UAV has mapped the subsection).
- the scheduler 440 or the UAVs 330 , 332 , 334 , 336 can determine which UAV will map the currently unimaged subsections based on a variety of criteria (e.g., object priorities, speed, cost, distances, available sensors, required scan resolution, remaining flight times/battery life, UAV health status, etc.).
- An example assignment of the mapping is provided at 504 .
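A toy greedy assignment over such criteria might weigh distance against remaining battery (the criteria, weights, and data layout are illustrative assumptions, not the patent's scheduler):

```python
def assign_subsections(uavs, subsections):
    """uavs: {name: {"pos": (x, y), "battery": 0.0..1.0}}.
    subsections: list of (x, y) subsection centers.
    Returns {subsection: uav_name}, greedily picking the best-scoring UAV."""
    assignment = {}
    for sec in subsections:
        def score(item):
            _, u = item
            dist = ((u["pos"][0] - sec[0]) ** 2 + (u["pos"][1] - sec[1]) ** 2) ** 0.5
            return dist - 10.0 * u["battery"]  # prefer close, well-charged UAVs
        best_name, _ = min(uavs.items(), key=score)
        assignment[sec] = best_name
    return assignment

plan = assign_subsections(
    {"uav_330": {"pos": (0, 0), "battery": 1.0},
     "uav_332": {"pos": (100, 100), "battery": 1.0}},
    [(1, 1), (99, 99)],
)
```

A production scheduler would also account for the remaining criteria listed above (sensors, required resolution, health status) and re-plan as conditions change.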
- An advantage of one or more embodiments of FIGS. 1 - 5 can include one or more of: enabling collaborative, concurrent, inside-out and outside-in 3D mapping of objects of interest (e.g., buildings, tunnels, or other objects) using a swarm of two or more autonomous, semi-autonomous, or man-controlled drones; providing an ability to leverage pre-existing 3D point clouds and 3D point clouds generated through other means (e.g., CAD models, photogrammetry) to speed up the mapping process, prevent replication of effort, increase fault-tolerance, and/or reduce cost; generating a fused 3D point cloud that can be composed of multi-resolution 3D point clouds; providing a collaborative mapping and hybrid 3D point cloud creation process that supports dynamic filtering based on user-definable criteria (e.g., exclude basement from mapping, specified resolution, specified time frame, specified technique, or the like); efficiently delegating mapping objectives/areas of interest using a pre-assigned mission planner or assigning at run time by a dynamic scheduler; providing
- FIG. 6 illustrates, by way of example, a diagram of an embodiment of a system 600 for 3D point cloud generation and geo-registration.
- the system 600 as illustrated includes an operator 602 , the UAV 330 , and a geographical region 604 to be mapped.
- the operator 602 can operate the UAV 330 , or the UAV 330 can operate autonomously or semi-autonomously to generate image data of the geographical region 604 .
- the UAV 330 can have a global positioning system (GPS) or the like that informs the UAV 330 of its location relative to the surface of the Earth. In these instances, the GPS coordinates can be used to register the 3D data to the surface of the Earth. In other instances, however, the UAV 330 does not have such a system.
- GPS data for the entire flight can be used to register data from the entire flight.
- some other techniques can be used to register the 3D point cloud generated using the image data from the UAV 330 to the surface of the Earth. Operations of the other techniques are provided regarding a method 700 provided in FIG. 7 . Note that not all operations of FIG. 7 are required or even useful in all situations.
- FIG. 7 illustrates, by way of example, a diagram of an embodiment of a technique 700 for registration of a 3D point cloud.
- the technique 700 for registration of the 3D point cloud generated by the UAV 330 can include the operator 602 providing a starting location and a heading of the UAV 330 , at operation 702 .
- the operation 702 is helpful, such as when no overhead imagery of the area or no 3D point cloud of the area is available.
- the heading and starting location can be determined using a compass, computing device, or the like.
- the heading and starting location can include an associated, estimated error. This data can be used to register the image data to the surface of the Earth. Using this technique, the initial heading and starting location can be used to associate the remaining 3D points to points on the surface of the Earth.
- the technique 700 for registration of the 3D point cloud generated by the UAV 330 can include flying to a specified height and taking a nadir, or near nadir image of the starting location, at operation 704 .
- the operation 704 is helpful, such as when overhead imagery or the 3D point cloud of the area are available.
- Overhead imagery often includes metadata, sometimes called rational polynomial coefficients (RPC), that detail locations of the pixels of the overhead imagery on the Earth.
- the image captured by the UAV 330 can then be registered (using normalized cross correlation of image chips, for example) to the available overhead imagery.
- the UAV 330 can perform a LIDAR scan and take an image at the elevation. This data can be correlated with the overhead imagery to determine the starting location, at operation 706 .
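The normalized cross correlation mentioned above can be sketched as follows. This is an illustrative brute-force version in Python (not from the patent; `ncc_register` and its arguments are hypothetical); production systems typically use FFT-based correlation and subpixel refinement.

```python
import numpy as np

def ncc_register(chip, overhead):
    """Locate `chip` inside `overhead` by normalized cross correlation.

    Returns the (row, col) of the best-matching top-left corner and the
    correlation score in [-1, 1].
    """
    ch, cw = chip.shape
    oh, ow = overhead.shape
    c = chip - chip.mean()
    c_norm = np.sqrt((c * c).sum())
    best_score, best_rc = -2.0, (0, 0)
    for r in range(oh - ch + 1):
        for col in range(ow - cw + 1):
            w = overhead[r:r + ch, col:col + cw]
            wc = w - w.mean()
            denom = c_norm * np.sqrt((wc * wc).sum())
            if denom == 0:
                continue  # flat window, correlation undefined
            score = (c * wc).sum() / denom
            if score > best_score:
                best_score, best_rc = score, (r, col)
    return best_rc, best_score
```

An exact copy of a chip scores 1.0 at its true offset; registering a UAV image chip against geo-tagged overhead imagery then transfers the overhead imagery's ground coordinates to the UAV image.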
- the geo-location registration can be performed with error (e.g., linear error or circular error, or a combination thereof) propagation.
- the linear and circular errors are smallest when correlating to a 3D point cloud, larger when correlating to overhead imagery, and larger still when only a starting heading and starting location are available. GPS data is about as accurate as an overhead imagery correlation.
- the operation 704 can be helpful because the overhead imagery and the 3D point clouds are generally in nadir or near-nadir views of the tops of objects, while LIDAR from the UAV 330 sees the sides of objects in the imagery.
- a nadir or near-nadir image can be generated of the starting location.
- FIGS. 8 - 10 regard methods, systems, and devices for registering a first 3D point cloud (or a portion thereof) to a second 3D point cloud (or a portion thereof) to generate a merged 3D point cloud.
- One or more of the first and second 3D point clouds can include an associated error.
- the associated error can be propagated to the merged 3D point cloud.
- the error of the 3D point cloud can be used in a downstream application.
- Example applications include targeting and mensuration.
- a targeteer (one who performs targeting) can benefit from the error.
- a mensuration of an object can benefit from the error as well.
- the merged 3D point clouds can include error that is better than any of the 3D point clouds individually. For example, if the first 3D point cloud includes a lower error (relative to the second 3D point cloud) in the x and y directions and the second 3D point cloud includes a lower error (relative to the first 3D point cloud) in the z direction, the merged 3D point cloud can include error bounded by the first 3D point cloud in the x and y directions and by the second 3D point cloud in the z direction. The merged point cloud can thus inherit the better of the errors between the first and second point clouds for a specified parameter.
- FIG. 8 illustrates, by way of an example, an embodiment of a system 800 for 3D point set registration and merging.
- the system 800 can include the processing circuitry 114 that receives tie points 808 , tie point error 810 , a first 3D point set 102 A, a second 3D point set 102 B, and first or second point set error 812 .
- the first or second point set error 812 includes error for at least one of the first 3D point set 102 A and the second 3D point set 102 B.
- the first or second point set error 812 can thus include error for the first 3D point set 102 A, the second 3D point set 102 B, or the first 3D point set 102 A and the second 3D point set 102 B.
- the first 3D point set 102 A or the second 3D point set 102 B can include a point cloud, a 3D surface, or the like.
- the first 3D point set 102 A and the second 3D point set 102 B can include (x, y, z) data for respective geographic regions. The geographic regions of the first 3D point set 102 A and the second 3D point set 102 B at least partially overlap.
- One or more of the first point set 102 A and the second point set 102 B can include intensity data.
- Intensity data can include one or more intensity values, such as red, green, blue, yellow, black, white, gray, infrared, thermal, or the like.
- One or more of the first point set 102 A and the second point set 102 B can include error data.
- the error data is illustrated as being a separate input in FIG. 8 , namely the first or second point set error 812 .
- the error data can indicate an accuracy of the corresponding point of the point set.
- the tie points 808 can associate respective points between the first 3D point set 102 A and the second 3D point set 102 B.
- the tie points 808 can indicate a first point (x 1 , y 1 , z 1 ) in the first 3D point set 102 A, a second point (x 2 , y 2 , z 2 ) in the second 3D point set 102 B or an error associated with the tie point 808 (shown as separate input tie point error 810 ).
- the tie point error 810 can indicate how confident one is that the first and second points correspond to the same geographic location.
- the tie point error 810 can include nine entries indicating a covariance (variance or cross-covariance) between three variables. The three variables can be error in the respective directions (x, y, z).
- a matrix representation of the tie point error 810 is provided as $\Sigma = \begin{bmatrix} \sigma_x^2 & \sigma_{xy} & \sigma_{xz} \\ \sigma_{xy} & \sigma_y^2 & \sigma_{yz} \\ \sigma_{xz} & \sigma_{yz} & \sigma_z^2 \end{bmatrix}$
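As a concrete illustration of the nine-entry covariance described above, the following sketch builds a 3×3 tie point error matrix from hypothetical 1-sigma errors (all numeric values are assumptions, not from the patent):

```python
import numpy as np

# Hypothetical 1-sigma errors (meters) in x, y, z for one tie point,
# plus an assumed x-y cross-correlation coefficient.
sx, sy, sz, rho_xy = 0.5, 0.5, 1.2, 0.3

# Nine entries: variances on the diagonal, cross-covariances off it.
cov = np.array([
    [sx * sx,          rho_xy * sx * sy, 0.0],
    [rho_xy * sx * sy, sy * sy,          0.0],
    [0.0,              0.0,              sz * sz],
])

assert np.allclose(cov, cov.T)               # covariance is symmetric
assert np.all(np.linalg.eigvalsh(cov) > 0)   # and positive definite
```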
- the first or second point set error 812 can sometimes be improved, such as to be more rigorous. Sometimes, the first or second point set error 812 can be in a form that is not digestible by the bundle adjustment operation 818 .
- the point set error 812 can be conditioned by a condition point set error operation 814 to generate an error matrix 816 .
- the condition point set error operation 814 can include generating a covariance matrix 816 of error parameters of the first 3D point set 102 A or the second 3D point set 102 B.
- the error parameters can include seven parameters. Three of the parameters can include translation in x, y, and z, respectively. Three of the parameters can be for rotation in x, y, and z (roll, pitch, and yaw), respectively. One of the parameters can be for a scale factor between the first 3D point set 102 A and the second 3D point set 102 B.
- An example of the matrix 816 produced by the condition point set error operation 814 is provided as
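The seven-parameter error model (three translations, three rotations, one scale) can be illustrated with the observation model of Equation 1. This is a sketch: the rotation composition order below is an assumption, since the patent only names the angles (roll, pitch, yaw).

```python
import numpy as np

def rotation_matrix(theta, phi, psi):
    """Rotation about x (roll), y (pitch), z (yaw), composed as Rz @ Ry @ Rx.

    The composition order is an assumption for illustration.
    """
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    cs, ss = np.cos(psi), np.sin(psi)
    rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cs, -ss, 0], [ss, cs, 0], [0, 0, 1]])
    return rz @ ry @ rx

def observe(v_hat, v_bar, angles, s):
    """Equation 1 style model: map a reference-space point to image space."""
    t = rotation_matrix(*angles)
    return (1.0 + s) * t @ (v_hat - v_bar)
```

With zero angles and zero scale correction, `observe` reduces to a pure offset, which is the nominal (unadjusted) state of the model.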
- the bundle adjustment operation 818 can receive the tie points 808 , tie point error 810 , first 3D point set 102 A, second 3D point set 102 B, and the error matrix 816 as input.
- the bundle adjustment operation 818 can produce a merged 3D point set 128 and a merged 3D point set error 822 as output.
- the bundle adjustment operation 818 can use a least squares estimator (LSE) for registration of the first 3D point set 102 A and the second 3D point set 102 B.
- the operation 818 is easily extendable to merging more than two 3D data sets even though the description regards only two 3D data sets at times.
- the bundle adjustment operation 818 can use one or more photogrammetric techniques.
- the bundle adjustment operation 818 can include outlier rejection.
- the bundle adjustment operation 818 can determine error model parameters for the 3D data sets. Application of the error model parameters to the first 3D point set 102 A and the second 3D point set 102 B, results in the relative alignment (registration) of the first 3D point set 102 A and the second 3D point set 102 B.
- a reference number with a letter suffix is a specific instance of the general item without the letter suffix.
- the 3D point set 102 A is a specific instance of the general 3D point set 102 .
- FIG. 9 illustrates an example diagram of an embodiment of the relationship between ground point coordinate estimates $\hat V_j$ and the corresponding 3D data set observations $\tilde V_{ij}$.
- three misregistered 3D data sets 902 , 904 , and 906 and a reference frame 924 are illustrated.
- First image observations 908 , 910 , 912 and a first associated ground point 914 and second image observations 916 , 918 , 920 , and a second associated ground point 922 are illustrated.
- the ground point 914 can be determined using a least squares estimator.
- the least squares estimator can reduce (e.g., minimize) the discrepancy across all observations and ground points in all images.
- the least squares estimator can project an error in one or more of the 3D data sets to an error in a registered 3D data set.
- the bundle adjustment operation 818 can include identifying a ground point that reduces a discrepancy between the ground point and corresponding points in respective images, and then adjusting points in the 3D data sets in a manner that reduces the discrepancy.
- the term “3D data set” is sometimes referred to as an “image”. For convenience, example sizes of vectors and matrices are indicated below the symbol in red. Thus, the symbol $A_{N\times M}$ denotes a matrix A with N rows and M columns.
- Column vectors from $\mathbb{R}^3$ thus have the annotation $3\times1$.
- Components of a vector V are written as
- Equation modeling of the relationship between points in one 3D space to corresponding points in another 3D space is now described.
- a common reference space is established across all of the images.
- the reference space can be constructed to accommodate a simultaneous adjustment of more than two images.
- Correspondences can be formed between points in the reference space and the measured conjugate point locations in each image.
- the observation equation can be represented as Equation 1:
- $\tilde V_{3\times1} = (1+s)\, T_{3\times3}\, (\hat V_{3\times1} - \bar V_{3\times1})$ (Equation 1)
- $\hat V$ is a reference-space 3D coordinate
- $\tilde V$ is the observation of $\hat V$ in an image; the orientation and offset relationship between reference space and image space is taken from the orientation matrix $T$ (Equation 2) and the offset vector $\bar V$
- $\theta_{3\times1} \equiv [\theta\ \phi\ \psi]^T$ refers to the rotation angles (roll, pitch, and yaw) about an image's x, y, and z axes, respectively.
- the scalar s represents an isometric scale correction factor (nominally zero). The above form is conducive to modeling a simultaneous least squares adjustment of all images' offsets and orientations, provided that estimates of reference space coordinates for all conjugate image observations vectors are available.
- This form is more suitable and flexible than explicitly holding a single image as a reference for at least one of several reasons: (1) there are reference space ground coordinates that permit the potential use of ground control points, whose a priori covariances are relatively small (e.g., they carry high weighting in the solution); (2) the above formulation is suitable for a simultaneous adjustment for data that includes small or minimal overlap (mosaics), as well as, many images collected over the same area (stares) or any combination in between; and (3) a single image can effectively (e.g., implicitly) be held as a reference by appropriate a priori weighting of its error model parameters.
- the symbol $\hat V$ will be referred to as a ground point (akin to tie point ground locations and ground control point locations in a classical photogrammetric image adjustment).
- the symbol $\tilde V$ will be referred to as a ground point observation (akin to image tie point observation locations in a classical photogrammetric image adjustment).
- $\hat V$ and $\tilde V$ are both “on the ground” in the sense that they both represent ground coordinates in 3D (in the classical imagery case, the observations are in image space and are thus 2D coordinates). Further, the point may very well not be “on the ground” but could be on a building rooftop, treetop canopy, etc. However, the terminology “ground point” and “ground point observation” will be used.
- $\tilde V_{ij}$ is the coordinate of the i-th image's observation of ground point j.
- a single tie point is often referred to as a collection of image observations (with coordinates) of the same point on the ground along with the corresponding ground point (with coordinates).
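A tie point as described above (a collection of image observations plus an associated ground point) might be represented as follows; the field and method names are hypothetical, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class TiePoint:
    """One tie point: observations of the same ground feature across images."""
    ground_point: tuple                               # estimated (x, y, z) in reference space
    observations: dict = field(default_factory=dict)  # image index -> observed (x, y, z)

    def images(self):
        """Indices of the images on which this ground point is observed."""
        return sorted(self.observations)
```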
- ground points themselves are treated as derived (but unconstrained) observations and allowed to adjust in performance of the operation 818 .
- the bundle adjustment operation 818 can operate on two or more images taken over a same area (with observations for tie points, sometimes called a stare scenario); two or more images taken in strips (forming a mosaic of data, with 2-way, 3-way, or m-way observations in strip overlap regions); tie points in which the corresponding ground points may appear in two or more images; incorporation of GCPs for features in imagery, providing an absolute registration; and accommodation of a full covariance for tie point observations. This is conducive to tie point correlation techniques that are highly asymmetrical (e.g., as long as the asymmetry can be characterized as a measurement covariance).
- ground point coordinate estimates $\hat V_j$ and the corresponding image observations $\tilde V_{ij}$ can be understood as a stare scenario between three misregistered images.
- $V_R^W$ (3×1): Location of the origin of the reference frame with respect to the world frame (i.e., the reference frame coordinatized in the world frame).
- $\bar V_i$ (3×1): Translation of the i-th image with respect to the reference frame origin.
- Each element is taken to be zero.
- $M_b^G$: Mapping of observation index to ground point index.
- $M_b^G$ gives the ground point index (in $\{1, 2, \dots, n\}$) for a specified observation index $b \in \{1, 2, \dots, q\}$.
- $|S|$: Cardinality of set S (e.g., the number of index elements in set S).
- Ground point observations can be indexed by ground point j and image i (as in ⁇ tilde over (V) ⁇ ij ) or by linear indexing, b (as in ⁇ tilde over (V) ⁇ b ). Use of the subscripting depends upon the context. In the former, it is of interest to characterize the fact that a particular ground point j appears on a particular image i. In the latter, it is of interest to enumerate all observations independent of to which image or to which ground point they refer.
- Since ground point observation locations are specified in world coordinates, it is of interest to transform the ground point observation locations to be “image” relative. Further, it can be of interest to express the ground locations and image offsets relative to a “local” reference coordinate frame.
- a motivation for a local reference coordinate frame can be to remove large values from the coordinates.
- UTM coordinates can typically be in the hundreds of thousands of meters. This makes interpretation of the coordinates more difficult, for example, when examining a report of updated coordinate locations.
- a motivation for an image-relative coordinate frame can be so that the interpretation of the orientation angles comprising the T i matrices can be relative to the center of the data set. This is contrasted with the origin of rotation being far removed from the data set (e.g., coincident with the local reference frame origin in the mosaic scenario).
- let $V_A^B$ represent “the location of the origin of frame A coordinatized in frame B”.
- $V_R^W$ can thus represent the location of the reference frame in the world coordinate system (e.g., UTM coordinates of the origin of the reference frame).
- the reference frame can be established as an average of all of the world-space coordinates of tie points. This offset (denoted $V_R^W$) can be determined using Equation
- the reference frame origin referred to by the world frame
- the reference frame origin can be computed by a process external to the bundle adjustment operation 818 (e.g., by the process that assembles the tie points 808 for use in the bundle adjustment operation 818 ).
- the image frame (e.g., a frame defined on a per-image basis) can be the world coordinates of the center of an image. Under the assumption that there are bounding coordinates in the image data (specifying the min and max extents of the data in world-frame X, Y, and Z), the center of the data can thus be taken to be the respective averages of the min and max extents. Since this image frame refers to world space, the computed offset is denoted $V_I^W$. If bounding coordinates are not available, the value for $V_I^W$ can be taken as the average of the tie point locations over the specific image i, as described in Equation 8
- $V_I^W = \frac{1}{|G_i|} \sum_{j \in G_i} \tilde V_{ij}$ (Equation 8)
- the image frame offset in reference space coordinates is taken to be the initial value for $\bar V_i^{(0)}$ on a per-image basis.
- tie point observation values can be input in world coordinates and since the observation equation domain assumes reference frame coordinates, some preprocessing of the input data can help make it consistent with that assumed by the observation equation (Equations 1 or 3).
- Since the ground point coordinates used in Equation 3 are unknown, they can be estimated.
- the ground point coordinates can be assumed to be coordinatized in the reference frame.
- the initial estimated values for the ground coordinates of each tie point can be computed as an average of the ground point observations over all images in which it appears, as described by Equation 11
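The initial ground point estimate of Equation 11 (the average of a tie point's observations over all images in which it appears) can be sketched as follows; the dictionary structure is illustrative, since the patent only specifies the averaging:

```python
import numpy as np

def initial_ground_points(observations):
    """Average each ground point's observations across the images seeing it.

    `observations` maps ground-point index j -> {image index i: 3-vector}.
    """
    return {
        j: np.mean(list(per_image.values()), axis=0)
        for j, per_image in observations.items()
    }
```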
- tie point observation coordinates for use in the observation equation can be converted to image-relative coordinates using Equation 13.
- $\tilde V_{ij} \leftarrow \tilde V_{ij}^R - \bar V_i^{(0)}$ (Equation 13)
- Equation 1 is non-linear in the orientation angles that form T i
- Equation 3 can be linearized. Solving the linearized equation can be a multidimensional root finding problem (in which the root is the vector of solution parameters).
- Equation 14 For simplification in notation of the linearization, consider a fixed image and a fixed ground point. Let the unknown error model parameters (offset, orientation, and scale correction) be represented by Equation 14:
- T is the true image orientation matrix
- V is the true image translation vector
- ⁇ circumflex over (V) ⁇ is the true ground point coordinate
- ⁇ tilde over (V) ⁇ is the corresponding ground point observation coordinate
- Equation 19 The function F can be approximated using a first-order Taylor series expansion of F about initial estimates $X^{(0)}$ and $\hat V^{(0)}$ as in Equation 19
- $\partial F^{(0)}/\partial X$ and $\partial F^{(0)}/\partial \hat V$ are the partial derivatives of F evaluated at $X^{(0)}$ and $\hat V^{(0)}$, respectively
- $\dot\Delta \equiv [\Delta x\ \Delta y\ \Delta z\ \Delta\theta\ \Delta\phi\ \Delta\psi\ \Delta s]^T$ is a vector of corrections to X
- $\hat\Delta \equiv [\Delta\hat x\ \Delta\hat y\ \Delta\hat z]^T$ is a vector of corrections to $\hat V$.
- the values for $X^{(0)}$ and $\hat V^{(0)}$ are discussed in Table 4 and in sections 3.3.2 and 3.3.3.
- dot symbols are merely notations, following the classical photogrammetric equivalent, and do not intrinsically indicate “rates,” as is sometimes denoted in other classical physics contexts.
- Equation 19 Equation 19 can be written as
- Equation 24 The linearized form of Equation 22 at iteration (p) can be represented as in Equation 24.
- $\partial F^{(p)}/\partial \hat V$ is the Jacobian of F with respect to $\hat V$ evaluated at $\hat V^{(p)}$
- $\dot\Delta$ is a vector of corrections to X for the p-th iteration
- $\ddot\Delta$ is a vector of corrections to $\hat V$ for the p-th iteration.
- $\hat V^{(p)} = \hat V^{(p-1)} + \ddot\Delta$ (Equation 26)
- Equation 24 For the initial iteration, initial values for X (0) and ⁇ circumflex over (V) ⁇ (0) can be estimated as discussed previously.
- the system represented by Equation 24 is now linear in $\dot\Delta$ and $\ddot\Delta$.
- a linear solver can be used to solve for the parameters.
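One linear step of this kind solves the normal equations built from the Jacobian B, a weight matrix W, and the discrepancy vector ε. A minimal dense sketch (generic names; the patent's implementation exploits the sparsity discussed below):

```python
import numpy as np

def normal_equation_step(b, w, eps):
    """Solve (B^T W B) delta = B^T W eps for the correction vector delta."""
    n = b.T @ w @ b       # normal matrix
    rhs = b.T @ w @ eps   # right-hand side
    return np.linalg.solve(n, rhs)
```

For a linear system with exact observations, a single step recovers the true parameters; in the nonlinear case this step is repeated with relinearization.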
- Equation 23 For a particular image i and a particular ground point j, Equation 23 can be written as Equation 27
- Equation 27 can be extended as
- (1) B is very sparse; (2) the quantities $\dot B_{ij}$ and $\ddot B_{ij}$ are nonzero if and only if ground point j is observed on image i. For this reason, the classical development of the normal matrix $B^T B$ and right-hand side vector $B^T E$ uses summations over the appropriate indexing. These summations are provided in the normal matrix partitioning below.
- the matrices can be partitioned as in Equations 35-37
- Equation 39 The matrix Z can thus be represented as Equation 39
- Equation 40 The matrix ⁇ dot over (N) ⁇ can be written as Equation 40
- $\dot N_{7m\times7m} = \operatorname{diag}(\dot N_1, \dot N_2, \dots, \dot N_m)$, a block-diagonal matrix of $7\times7$ blocks (Equation 40)
- $\dot N_i = \sum_{j \in G_i} \dot B_{ij}^T \tilde W_{ij} \dot B_{ij}$, a $7\times7$ matrix (Equation 42)
- the subscripts ij on the ⁇ dot over (B) ⁇ ij matrices indicate that they are a function of image i and ground point j.
- Equation 43 The matrix N can be expanded as in Equation 43
- $\ddot N_{3n\times3n} = \operatorname{diag}(\ddot N_1, \dots, \ddot N_n)$, a block-diagonal matrix of $3\times3$ blocks (Equation 43)
- Equation 44 ⁇ umlaut over (W) ⁇
- Equation 43 The block entries of Equation 43 can be defined as in Equation 45
- $\ddot N_j = \sum_{i \in O_j} \ddot B_{ij}^T \tilde W_{ij} \ddot B_{ij}$, a $3\times3$ matrix (Equation 45)
- Equation 46 The matrix $\bar N$ from Equation 39 can be expanded as in Equation 46
- Equation 47 The block entries of $\bar N$ from Equation 46 can be defined as in Equation 47
- $\bar N_{ij} = \dot B_{ij}^T \tilde W_{ij} \ddot B_{ij}$, a $7\times3$ matrix, for $i \in \{1, \dots, m\}$, $j \in \{1, \dots, n\}$ (Equation 47)
- the subblocks of H can be defined as in Equations 49 and 50
- $\dot K_i = \sum_{j \in G_i} \dot B_{ij}^T \tilde W_{ij} \varepsilon_{ij}$, a $7\times1$ vector (Equation 49)
- $\ddot K_j = \sum_{i \in O_j} \ddot B_{ij}^T \tilde W_{ij} \varepsilon_{ij}$, a $3\times1$ vector (Equation 50)
- the values for $\dot C^{(0)}$ and $\ddot C^{(0)}$ are the initial parameter values.
- the initial values for the translation parameters portion of $\dot C^{(0)}$ can be taken to be the $\bar V_i^{(0)R}$ as computed in Equation 9.
- the initial values for the rotation parameters portion of $\dot C^{(0)}$ can be taken to be zero.
- the initial values of $\ddot C^{(0)}$ can be taken to be the values of the ground point coordinates $\hat V_j^{(0)}$ as computed in accord with Equation 11.
- the parameters can be updated via Equations 51 and Equation 52 and the normal matrix can be formed and solved again. The process can continue until the solution converges. Examples of the convergence criterion can be discussed in the following section.
- Equation 54 An example of a convergence criterion is to compute the root-mean-square (RMS) of the residuals as in Equation 54
- the denominator of Equation 54 represents the number of degrees of freedom (e.g., the number of observation equations minus the number of estimated parameters).
- Since typically q » 7m, Equation 54 can be estimated as in Equation 55
- the condition q » 7m can be guaranteed with sufficient redundancy of ground point observations as compared with the number of images (e.g., enough tie points are measured between the images so that the aforementioned condition is satisfied).
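The RMS convergence criterion can be sketched as follows, with `dof` standing for the degrees of freedom described above (observation equations minus estimated parameters); the function name is hypothetical:

```python
import numpy as np

def rms_residual(discrepancies, weights, dof):
    """Root-mean-square of weighted residuals over `dof` degrees of freedom.

    `discrepancies` is a list of 3-vectors; `weights` is the matching list
    of 3x3 weight matrices.
    """
    total = sum(e @ w @ e for e, w in zip(discrepancies, weights))
    return np.sqrt(total / dof)
```

Convergence is then typically declared when the change in this RMS value between successive iterations falls below a tolerance.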
- Equation 57 A rigorous formulation for the standard error of unit weight (to be used in error propagation discussed elsewhere) is provided in Equation 57
- the value for q in Equation 58 can be the number of non-blundered observations.
- the full form of the matrix Equation 34 can be reduced under the assumption that the errors in the ground point locations are uncorrelated. Under this assumption, the error covariance matrix of the ground point locations ⁇ umlaut over ( ⁇ ) ⁇ becomes a block-diagonal matrix of 3 ⁇ 3 matrix blocks. Since it is a sparse matrix, its inverse is easily computed by inverting the 3 ⁇ 3 diagonal blocks.
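The block-diagonal inverse mentioned above never requires forming the full matrix; stacking the 3×3 blocks lets them all be inverted together (a sketch using NumPy's stacked-matrix behavior):

```python
import numpy as np

def invert_block_diagonal(blocks):
    """Invert a block-diagonal matrix by inverting each 3x3 block.

    `blocks` is an (n, 3, 3) array of the diagonal blocks; the full
    (3n, 3n) matrix is never formed.
    """
    return np.linalg.inv(blocks)  # numpy inverts each 3x3 block in the stack
```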
- the development in this section reformulates the normal equations taking advantage of this. The result is a reduced normal equation matrix in which the size of the normal matrix is 7m×7m instead of (7m+3n)×(7m+3n). This gives the obvious advantage that the size of the normal matrix is much smaller and remains invariant with the number of ground points.
- the reduced system formation is sometimes referred to as a “ground point folding,” since the ground point portion of the reduced normal matrix is incorporated into the image portion.
- Equation 59 can be re-written as Equation 60
- matrix D is non-singular and can be represented as a sparse block diagonal matrix.
- Equation 71 The blocks of $\hat Z$ in Equation 71 can be the equivalent $\bar N_{ij}$ as defined in Equation 47.
- $M_{7m\times7m} = \begin{bmatrix} \dot Z_1 & 0 & \cdots & 0 \\ 0 & \dot Z_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \dot Z_m \end{bmatrix}$, where each $\dot Z_i$ is $7\times7$ (the minuend of Equation 75)
- the reduced matrix M can be formed by first storing the diagonal (minuend) entries and then subtracting the summed entries of the subtrahend in Equation 75 (namely the $\hat Z_{r,c}$ defined in Equation 74).
- the matrix, M can be built by iterating over the ground points (assuming the minuend of Equation 75 on-diagonals were formed in advance) and subtracting out the contributions for a particular ground point in the appropriate place within M.
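The folding described in this section is a Schur complement reduction: eliminating the ground-point block of the full normal system leaves a reduced system in the image parameters only. A small dense sketch with generic variable names (the patent's version exploits the block-diagonal structure and per-ground-point iteration):

```python
import numpy as np

def fold_ground_points(z_dot, z_bar, z_ddot, k_dot, k_ddot):
    """Schur-complement reduction ("ground point folding").

    Eliminates the ground-point unknowns from the full normal system
        [Z_dot    Z_bar ] [image params ]   [K_dot ]
        [Z_bar^T  Z_ddot] [ground points] = [K_ddot]
    leaving a reduced system M d = C in the image parameters only.
    """
    z_ddot_inv = np.linalg.inv(z_ddot)  # block-diagonal in practice
    m = z_dot - z_bar @ z_ddot_inv @ z_bar.T
    c = k_dot - z_bar @ z_ddot_inv @ k_ddot
    return m, c
```

Solving the reduced system gives the same image-parameter solution as solving the full system, which is the property the folding relies on.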
- the constant column vector C can be formed similarly with some of the same matrices:
- the solution vector can be decomposed into per-image adjustable vectors $d_i$ for each image i as in Equation 78:
- Equation 67 can be used to obtain Equation 79
- $\ddot\Delta_{3n\times1} = [\ddot\Delta_1^T\ \ddot\Delta_2^T\ \cdots\ \ddot\Delta_n^T]^T$, where each $\ddot\Delta_j$ is $3\times1$ (Equation 80)
- I j is as defined in Table 5 (the index set of images upon which ground point j is an observation).
- This section provides formulations for extraction of a posteriori error covariances for ground points. If a priori sensor model error estimates are available (and reliable), the errors may be propagated to the space of the registration error models. In this case, the error propagation is a rigorous predicted error for the accuracy of the a posteriori ground point locations.
- the a posteriori error covariances of the image parameters are the appropriate subblocks of the inverse of the reduced normal matrix M ⁇ 1 from Equation 69 (after application of the variance of unit weight, as described at the end of this section).
- the a posteriori error covariance can be the inverse of the normal matrix, Z ⁇ 1 , times the variance of unit weight.
- the a posteriori error covariances of the ground points can be extracted from M ⁇ 1 by unfolding. To facilitate this, the full normal matrix can be written as
- ground points r and c can be represented as block element
- the r-th row of $\ddot\Sigma$ involves only $\ddot Z_r^{-1}$ of the first $\ddot Z^{-1}$ matrix in term two of Equation 86.
- the c-th column of $\ddot\Sigma$ involves only $\ddot Z_c^{-1}$ of the second $\ddot Z^{-1}$ matrix in term two.
- the r-th row of G involves only the r-th row of $\bar Z^T$, and the c-th column of G involves only the c-th column of $\bar Z$.
- the a posteriori covariance is usually defined by scaling the inverse of the normal matrix by an estimate of the variance of unit weight.
- An estimate of the variance of unit weight is denoted as [ ⁇ (p) ] 2 and is provided in Equation 57.
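The scaling described above can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def a_posteriori_covariance(normal_matrix, sigma0_sq):
    """Scale the inverse of the (reduced) normal matrix by the estimated
    variance of unit weight to obtain a posteriori error covariances."""
    return sigma0_sq * np.linalg.inv(normal_matrix)
```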
- FIG. 10 illustrates an example of an embodiment of the operation 818 .
- the operation 818 can include 3D data set registration with error propagation.
- the operation 818 includes initializing solution and corrections, at operation 1002 ; determining discrepancies, at operation 1004 ; determining the normal equation, at operation 1006 ; updating parameters based on the determined normal equation, at operation 1008 ; determining discrepancies, at operation 1010 ; determining error, at operation 1012 ; and compensating misregistration of the first 3D point set 102 A and the second 3D point set 102 B, at operation 1014 .
- the operation 1002 can include setting the solution vector X and the correction vector ΔX to the zero vector.
- the solution vector X can be set to a fixed-point location for the linearization. If an a priori estimate is available, it can be used here in place of the zero vector.
- the operation 1004 can include computing the discrepancy vector for each observation as provided in Equation 29.
- the operation 1006 can include building the normal equations matrices and solving for the correction vector as provided in Equation 53.
- the operation 1008 can include updating the parameter vector for the current iteration as provided in Equations 51 and 52. Details of the operation 1008 for unfolding of the ground points for the folded normal equation solution is provided via pseudocode below.
- the operation 1010 can include computing the discrepancies as provided in Equation 29.
- the convergence criterion check can be augmented with a check to see if the blunder weights should be used in continuation of the solution (“useBW”, indicating to use “blunder-checking weighting”). If convergence occurs and useBW is true, this is an indicator to perform blunder checking, and this time using a normalized residual computation in order to check for blunders on the next iteration.
- blunders can be computed. If there are blunders remaining, the blunder “cycle” number is incremented and the process is repeated with the correction vector reset to a priori values (e.g., go to operation 1002 ). If there are no blunders remaining, a check can be performed to see if the number of post convergence blunder cycles can be set to zero. This check can be performed to effectively force one more solution after all blunders have been eliminated.
- useBW can be set to true. This has the effect of forcing the normalized residual blunder iteration for determining the blunders on subsequent iterations. In this case, a solution has converged but normalized blunder residuals have not been computed. Setting useBW to true forces this to happen on the next solution iteration. The solution can be iterated by going to the operation 1006 . If there are no more blunders and the number of blunders is not zero, this indicates the “non-blunder iteration” solution has converged.
- the operation 818 can include providing a report that includes an iteration number, current correction vector ⁇ X, current iteration estimates of parameters and ground points (e.g., as computed in equations 51 and 52), standard error of unit weight (e.g., as provided in Equation 55).
- the operation 818 can include a check for non-convergence by examining the current iteration number with a maximum number of iterations, M. If the number of iterations exceeds the maximum, stop the iteration process. The solution did not converge. An exception can be raised and the operation 818 can be complete.
- the pseudocode begins by setting the non-linear least squares iteration index (p) to zero.
- the discrepancy εb≡εij can be retrieved as computed in Equation 100, and {dot over (B)}ij and {umlaut over (B)}ij can be computed as in Equations 101 and 102.
- the same per-image elements are cached in a workspace object and updated with each iteration.
- the algorithm for the reduced solution can be broken into two major portions: priming and folding. Priming involves storing the weights and the contributions along the diagonal of the full normal equation matrix (and the corresponding data for the right-hand column vector H). This corresponds to the Ż portion of Z. Thus, priming involves formation of the minuends of Equation 75 and Equation 76. Folding can include incorporation of the subtrahends of the aforementioned equations.
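The priming/folding split can be illustrated with a toy system in which scalars stand in for the 6×6 image blocks and 3×3 ground-point blocks. This is a sketch under that simplification, not the patent's implementation; folding forms the Schur complement (the subtrahends), and unfolding back-substitutes for the eliminated correction.

```python
# Toy illustration of fold/unfold: the reduced normal matrix is the Schur
# complement M = Zdot - Zbar * Zddot^-1 * Zbar^T and the reduced right-hand
# side is C = Hdot - Zbar * Zddot^-1 * Hddot. Scalars stand in for the
# matrix blocks so the algebra stays readable.

def fold(z_dot, z_bar, z_ddot, h_dot, h_ddot):
    """Eliminate the ground-point block from the 2x2 block normal system."""
    z_ddot_inv = 1.0 / z_ddot
    m = z_dot - z_bar * z_ddot_inv * z_bar   # subtrahend folded into minuend
    c = h_dot - z_bar * z_ddot_inv * h_ddot
    return m, c

def unfold(z_bar, z_ddot, h_ddot, delta_dot):
    """Back-substitute to recover the eliminated ground-point correction."""
    return (h_ddot - z_bar * delta_dot) / z_ddot

# Solve [[4, 2], [2, 3]] [x, y]^T = [10, 8]^T by folding out y, then unfolding.
m, c = fold(4.0, 2.0, 3.0, 10.0, 8.0)
x = c / m
y = unfold(2.0, 3.0, 8.0, x)
```

The same answer results as solving the full 2×2 system directly, which is the point of the reduction: only the image-sized system is ever factored.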
- a ground point workspace can be created.
- the workspace can include the following elements: {umlaut over (Z)}j, {umlaut over (H)}j, {umlaut over (Z)}j −1. These elements are indexed by ground point in the ground point workspace.
- the technique can begin by setting the non-linear least squares iteration index (p) to zero.
- the general cross error covariance between ground point indexes r and c can be obtained by evaluation of Equation 94.
- the full ground covariance matrix {umlaut over (Σ)}∈R3n×3n may be obtained by invoking the method for r∈{1, 2, . . . , n} and for c∈{r, r+1, . . . , n}. Note that the indexing for c starts with r since the full ground covariance matrix is symmetric (i.e., build the upper triangle of {umlaut over (Σ)} and "reflect about the diagonal" to obtain the lower symmetric portion).
- the operation 1014 proceeds given the outputs of the MLSE techniques discussed.
- the motivation for providing the inputs and outputs in world space coordinates can be that world space is the native space of the inputs and the desired space of the outputs for each element of each image's point cloud.
- the compensation formula can be performed as in Equation 107:
- V reg W = 1/(1 + s i (p)) T i T (V misreg W − V̄ i (0)R − V R W) + V̄ i (p) + V R W  Equation 107
- Equation 107 is constructed from the solution vector for image i at iteration (p), and the other symbols in Equation 107 are defined elsewhere. Note that the values −V̄ i (0)R − V R W, V̄ i (p) + V R W, and T i T can be precomputed on a per-image basis when applying Equation 107 for a time-efficient implementation.
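The per-image precomputation suggested for Equation 107 can be sketched as follows. This is a hypothetical realization (make_compensator and the helper names are not from the patent); the per-image constants are folded into two offset vectors and a cached transpose so the per-point work is one rotation, one scale, and two adds.

```python
# Sketch (assumed helper names) of applying the Equation 107 compensation to
# every point of an image's cloud, with the per-image terms precomputed once.

def transpose3(t):
    return [[t[c][r] for c in range(3)] for r in range(3)]

def mat_vec3(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def make_compensator(t_i, s_i, v_bar_0r, v_bar_p, v_rw):
    t_t = transpose3(t_i)                              # T_i^T, cached per image
    offset_in = [-(a + b) for a, b in zip(v_bar_0r, v_rw)]   # -Vbar_i^(0)R - V_R^W
    offset_out = [a + b for a, b in zip(v_bar_p, v_rw)]      # Vbar_i^(p) + V_R^W
    inv_scale = 1.0 / (1.0 + s_i)
    def compensate(v_misreg):
        v = [a + b for a, b in zip(v_misreg, offset_in)]
        v = mat_vec3(t_t, v)
        return [inv_scale * a + b for a, b in zip(v, offset_out)]
    return compensate
```

Each image's compensator is then applied to all of that image's points, matching the time-efficient implementation noted above.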
- FIG. 11 illustrates, by way of example, a diagram of an embodiment of a method 1100 for 3D point set generation and registration.
- the method 1100 as illustrated includes capturing, by unmanned vehicles (UVs), image data representative of respective overlapping subsections of the object, at operation 1102 ; registering the overlapping subsections to each other, at operation 1104 ; and geo-locating the registered overlapping subsections, at operation 1106 .
- the method 1100 can include capturing, by a UV of the UVs, a first overhead image of a starting geo-location at which the image data is captured and wherein geo-locating the overlapping subsections includes correlating the first overhead image with a second overhead image for which geo-location is known.
- the method 1100 can include, wherein the second overhead image is a satellite image.
- the method 1100 can include, wherein geo-locating the registered overlapping subsection includes determining a normalized cross correlation of image chips of the first overhead image and the second overhead image.
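A minimal normalized cross-correlation between two equally sized image chips, as one plausible way to compute the correlation score described above, can be sketched as follows (pure-Python lists stand in for the actual chip rasters; this is an illustrative assumption, not the patent's implementation).

```python
# Minimal normalized cross-correlation (NCC) of two same-sized image chips.
# Returns a score in [-1, 1]; 1.0 means a perfect linear match.

from math import sqrt

def ncc(chip_a, chip_b):
    a = [p for row in chip_a for p in row]
    b = [p for row in chip_b for p in row]
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    da = [p - mean_a for p in a]
    db = [p - mean_b for p in b]
    denom = sqrt(sum(p * p for p in da)) * sqrt(sum(p * p for p in db))
    if denom == 0.0:
        return 0.0   # flat chip: correlation undefined; treat as no match
    return sum(x * y for x, y in zip(da, db)) / denom
```

Sliding one chip over the other and taking the peak score yields the best geo-location offset.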
- the method 1100 can include receiving, from an operator of a UV of the UVs, a starting geo-location, and a heading of the UV.
- the method 1100 can include wherein geo-locating the registered overlapping subsections is performed based on the starting geo-location and the heading.
- the method 1100 can include performing, by a UV of the UVs, a light detection and ranging (LIDAR) scan to generate LIDAR scan data.
- the method 1100 can include wherein geo-locating the registered overlapping subsections includes correlating the first overhead image with the LIDAR scan data.
- the method 1100 can include associating, by the UV, geo-location data of the UV with image data generated by the UV.
- the method 1100 can include, wherein geo-locating the registered overlapping subsections occurs based on the geo-location data.
- the method 1100 can include generating a first three-dimensional (3D) point set based on the geo-located registered overlapping subsections.
- the method 1100 can include registering the first 3D point set to a second 3D point set to generate a merged 3D point set.
- the method 1100 can include, wherein registering the first 3D point set to the second 3D point set includes scaling, rotating, and translating one or more of the first and second 3D point sets using a least squares estimate bundle adjustment based on tie points between the first and second 3D point sets.
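A toy 2D analogue of registering one point set to another with a least-squares scale, rotation, and translation can be sketched as follows. This is an illustrative assumption, not the patent's full 3D seven-parameter bundle adjustment: representing 2D tie points as complex numbers makes the similarity transform q ≈ a·p + b linear, so the estimate is ordinary least squares.

```python
# Least-squares 2D similarity registration using complex arithmetic.
# a encodes scale (|a|) and rotation (arg a); b is the translation.

def fit_similarity(src, dst):
    """src, dst: lists of complex tie points. Returns (a, b) with q ~ a*p + b."""
    n = len(src)
    mean_s = sum(src) / n
    mean_d = sum(dst) / n
    num = sum((d - mean_d) * (s - mean_s).conjugate() for s, d in zip(src, dst))
    den = sum(abs(s - mean_s) ** 2 for s in src)
    a = num / den
    b = mean_d - a * mean_s
    return a, b
```

With exact tie points the transform is recovered exactly; with noisy tie points the estimate minimizes the squared misfit, analogous to the bundle adjustment described above.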
- FIG. 12 illustrates, by way of example, a block diagram of an embodiment of a machine in the example form of a computer system 1200 within which instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
- the machine operates as a standalone device or may be connected (e.g., networked) to other machines.
- the machine may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
- the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
- the example computer system 1200 includes a processor 1202 (e.g., processing circuitry 118 , such as can include a central processing unit (CPU), a graphics processing unit (GPU), field programmable gate array (FPGA), other circuitry, such as one or more transistors, resistors, capacitors, inductors, diodes, regulators, switches, multiplexers, power devices, logic gates (e.g., AND, OR, XOR, negate, etc.), buffers, memory devices, or the like, or a combination thereof), a main memory 1204 and a static memory 1206 , which communicate with each other via a bus 1208 .
- the computer system 1200 may further include a display device 1210 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)).
- the computer system 1200 also includes an alphanumeric input device 1212 (e.g., a keyboard), a user interface (UI) navigation device 1214 (e.g., a mouse), a disk drive unit 1216 , a signal generation device 1218 (e.g., a speaker), a network interface device 1220 , and radios 1230 such as Bluetooth, WWAN, WLAN, and NFC, permitting the application of security controls on such protocols.
- the disk drive unit 1216 includes a machine-readable medium 1222 on which is stored one or more sets of instructions and data structures (e.g., software) 1224 embodying or utilized by any one or more of the methodologies or functions described herein.
- the instructions 1224 may also reside, completely or at least partially, within the main memory 1204 and/or within the processor 1202 during execution thereof by the computer system 1200 , the main memory 1204 and the processor 1202 also constituting machine-readable media.
- while the machine-readable medium 1222 is shown in an example embodiment to be a single medium, the term "machine-readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions or data structures.
- the term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions.
- the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
- machine-readable media include non-volatile memory, including by way of example semiconductor memory devices, e.g., Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
- the instructions 1224 may further be transmitted or received over a communications network 1226 using a transmission medium.
- the instructions 1224 may be transmitted using the network interface device 1220 and any one of a number of well-known transfer protocols (e.g., HTTP).
- Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), the Internet, mobile telephone networks, Plain Old Telephone (POTS) networks, and wireless data networks (e.g., WiFi and WiMax networks).
- the term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
- Example 1 includes a method for generating a three-dimensional (3D) point cloud of an object, the method comprising capturing, by unmanned vehicles (UVs), image data representative of respective overlapping subsections of the object, registering the overlapping subsections to each other, and geo-locating the registered overlapping subsections.
- Example 1 can further include capturing, by a UV of the UVs, a first overhead image of a starting geo-location at which the image data is captured and wherein geo-locating the overlapping subsections includes correlating the first overhead image with a second overhead image for which geo-location is known.
- Example 2 can further include, wherein the second overhead image is a satellite image.
- Example 3 can further include, wherein geo-locating the registered overlapping subsection includes determining a normalized cross correlation of image chips of the first overhead image and the second overhead image.
- Example 5 at least one of Examples 1-4 can further include receiving, from an operator of a UV of the UVs, a starting geo-location, and a heading of the UV, and wherein geo-locating the registered overlapping subsections is performed based on the starting geo-location and the heading.
- Example 6 at least one of Examples 2-5 can further include performing, by a UV of the UVs, a light detection and ranging (LIDAR) scan to generate LIDAR scan data, and wherein geo-locating the registered overlapping subsections includes correlating the first overhead image with the LIDAR scan data.
- Example 7 at least one of Examples 1-6 can further include associating, by the UV, geo-location data of the UV with image data generated by the UV, and wherein geo-locating the registered overlapping subsections occurs based on the geo-location data.
- Example 8 at least one of Examples 1-7 can further include generating a first three-dimensional (3D) point set based on the geo-located registered overlapping subsections and registering the first 3D point set to a second 3D point set to generate a merged 3D point set.
- Example 8 can further include, wherein registering the first 3D point set to the second 3D point set includes scaling, rotating, and translating one or more of the first and second 3D point sets using a least squares estimate bundle adjustment based on tie points between the first and second 3D point sets.
- Example 10 includes a system comprising unmanned vehicles configured to capture image data representative of respective overlapping subsections of an object, and processing circuitry configured to register the overlapping subsections to each other, and geo-locate the registered overlapping subsections.
- Example 10 can further include, wherein a UV of the UVs is further configured to capture a first overhead image of a starting geo-location at which the image data is captured and wherein geo-locating the overlapping subsections includes correlating the first overhead image with a second overhead image for which geo-location is known.
- Example 11 can further include, wherein the second overhead image is a satellite image.
- Example 12 can further include, wherein geo-locating the registered overlapping subsection includes determining a normalized cross correlation of image chips of the first overhead image and the second overhead image.
- Example 14 at least one of Examples 10-13 can further include, wherein the processing circuitry is further configured to receive, from an operator of a UV of the UVs, a starting geo-location and a heading of the UV, and wherein geo-locating the registered overlapping subsections is performed based on the starting geo-location and the heading.
- Example 15 at least one of Examples 11-14 can further include, wherein a UV of the UVs is further configured to perform a light detection and ranging (LIDAR) scan to generate LIDAR scan data; and wherein geo-locating the registered overlapping subsections includes correlating the first overhead image with the LIDAR scan data.
- Example 16 includes a (e.g., non-transitory) machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations comprising receiving, by unmanned vehicles (UVs), image data representative of respective overlapping subsections of an object, registering the overlapping subsections to each other, and geo-locating the registered overlapping subsections.
- Example 16 can further include, wherein the operations further comprise receiving, by the UV, geo-location data of the UV associated with the image data generated by the UV, and wherein geo-locating the registered overlapping subsections occurs based on the geo-location data.
- Example 18 at least one of Examples 16-17 can further include, wherein the operations further comprise generating a first three-dimensional (3D) point set based on the geo-located registered overlapping subsections and registering the first 3D point set to a second 3D point set to generate a merged 3D point set.
- Example 18 can further include, wherein registering the first 3D point set to the second 3D point set includes scaling, rotating, and translating one or more of the first and second 3D point sets using a least squares estimate bundle adjustment based on tie points between the first and second 3D point sets.
- Example 20 at least one of Examples 16-19 can further include, wherein the operations further comprise receiving light detection and ranging (LIDAR) scan data of the object from a UV of the UVs; and wherein geo-locating the registered overlapping subsections includes correlating the first overhead image with the LIDAR scan data.
Description
where the diagonal terms are respective variances in the given directions, and the off-diagonal terms are covariances between the directions.
where AN×M denotes a matrix A with N rows and M columns. Column vectors from R3 thus have the form A3×1.
If the vector includes diacritical marks or distinguishing embellishments, these are transferred to the components, as in
refer to rotation angles (roll, pitch and yaw) about an image's x, y, and z axes respectively. The scalar s represents an isometric scale correction factor (nominally zero). The above form is conducive to modeling a simultaneous least squares adjustment of all images' offsets and orientations, provided that estimates of reference space coordinates for all conjugate image observations vectors are available. This form is more suitable and flexible than explicitly holding a single image as a reference for at least one of several reasons: (1) there are reference space ground coordinates that permit the potential use of ground control points, whose a priori covariances are relatively small (e.g., they carry high weighting in the solution); (2) the above formulation is suitable for a simultaneous adjustment for data that includes small or minimal overlap (mosaics), as well as, many images collected over the same area (stares) or any combination in between; and (3) a single image can effectively (e.g., implicitly) be held as a reference by appropriate a priori weighting of its error model parameters.
- {tilde over (V)} ij=(1+s i)T i({circumflex over (V)} j −V̄ i)
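Evaluating the observation equation above can be sketched as follows. The rotation T_i is built here from one common roll-pitch-yaw composition, T = Rz(κ)·Ry(ϕ)·Rx(ω); the patent's Equation 4 fixes the actual convention, so treat this composition as an assumption.

```python
# Sketch of the observation equation Vtilde_ij = (1 + s_i) * T_i * (Vhat_j - Vbar_i).
# The roll-pitch-yaw composition below is an assumed convention, not Equation 4.

from math import sin, cos

def rotation(omega, phi, kappa):
    sw, cw = sin(omega), cos(omega)
    sp, cp = sin(phi), cos(phi)
    sk, ck = sin(kappa), cos(kappa)
    rx = [[1, 0, 0], [0, cw, -sw], [0, sw, cw]]
    ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    rz = [[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]]
    def mul(a, b):
        return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
                for r in range(3)]
    return mul(rz, mul(ry, rx))

def observe(angles, s_i, v_bar_i, v_hat_j):
    t_i = rotation(*angles)
    d = [a - b for a, b in zip(v_hat_j, v_bar_i)]
    return [(1.0 + s_i) * sum(t_i[r][c] * d[c] for c in range(3)) for r in range(3)]
```

With zero angles and zero scale correction the model reduces to a pure translation, which is a convenient sanity check.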
TABLE 1 |
Definitions of Symbols |
Symbol | Definition |
|
Location of the origin of the reference frame with respect to the world frame. This is thus the location of the reference frame coordinatized in the world-frame. |
|
Translation of ith image with respect to reference frame origin. |
|
Orientation angles of ith image with respect to image frame origin θi ≡ [ωi ϕi κi]T |
|
Isometric scale factor correction for the ith image. |
|
Initial (zeroth-iteration) value for V̄i. |
|
Initial (zeroth-iteration) value for θi. Each element is taken to be zero. |
|
Initial (zeroth iteration) value for si. Nominally si (0) ≡ 0. |
|
Orientation matrix for ith image built from θi |
|
Ground point coordinates for ground point j with respect to the reference frame origin {circumflex over (V)}j ≡ [ {circumflex over (x)}j ŷj {circumflex over (z)}j]T |
|
Initial (zeroth-iteration) estimated value for {circumflex over (V)}j |
|
Ground point observation coordinate of ground point j on image i coordinatized in the world frame (e.g., these are UTM coordinates of the ground point observation location). |
|
Ground point observation coordinate of ground point j on image i. These are implicitly assumed to be coordinatized in the local image frame for image i. {tilde over (V)}ij ≡ [{tilde over (x)}ij {tilde over (y)}ij {tilde over (z)}ij]T |
|
A priori covariance of ith image translation, orientation and scale correction parameter vector |
|
A priori parameter weight matrix for image i.
|
|
A priori covariance of ground point j |
|
A priori weight matrix for ground point j.
|
|
A priori covariance for observation of ground point j upon image i |
|
A priori observation weight matrix for observation of ground point j upon image i. {tilde over (W)}ij = ({tilde over (Σ)}ij)−1 |
General Indexing |
m | Number of images |
i | Image index. i ∈ {1, 2 . . . , m} |
n | Total number of ground points |
j | Ground point index j ∈ {1, 2 . . . , n} |
q | Total number of ground point observations |
b | Ground point observation index b ∈ {1, 2 . . . , q} |
(p) | Non-linear least squares iteration index |
Gi | The index set of all ground points appearing in image i. |
Thus Gi ⊆ {1, 2 . . . , n} | |
Oj | The index set of observations of ground point j over all |
images. Thus Oj ⊆ {1, 2 . . . , q} | |
Ij | The index set of images upon which ground point j is an |
observation. Thus Ij ⊆ {1, 2 . . . , m} | |
Mb G | Mapping of observation index to Ground point index. Mb G |
gives the ground point index (∈ {1, 2, . . . , n}) for a specified | |
observation index b ∈ {1, 2 . . . , q}. | |
|S| | Cardinality of set S (e.g., the number of index elements in |
set S). | |
V W =V R +V R W Equation 6
{tilde over (V)} ij R ={tilde over (V)} ij W −V R W Equation 10
- {umlaut over (Σ)}j≡diag([10^12 10^12 10^12]) Equation 12
{tilde over (V)} ij ={tilde over (V)} ij R −
- {tilde over (V)}=(1+s)T({circumflex over (V)}−V̄) Equation 15
F(X;{circumflex over (V)})=0 Equation 16
where
- F(X;{circumflex over (V)})={tilde over (V)}−(1+s)T({circumflex over (V)}−V̄)
- where {dot over (B)} is the Jacobian of F with respect to X evaluated at X(p), {umlaut over (B)} is the Jacobian of F with respect to {circumflex over (V)} evaluated at {circumflex over (V)}(p), {dot over (Δ)} is a vector of corrections to X for the pth iteration, and {umlaut over (Δ)} is a vector of corrections to {circumflex over (V)} for the pth iteration.
X (p) =X (p-1)+{dot over (Δ)} Equation 25
- {circumflex over (V)} (p)={circumflex over (V)} (p-1)+{umlaut over (Δ)} Equation 26
εij (p) =−F(X (p) ;{circumflex over (V)} (p)) Equation 28
and thus
- εij (p) =−[{tilde over (V)} ij−(1+s i)T i({circumflex over (V)} j −V̄ i)] Equation 29
BΔ=E Equation 31
(B T B)Δ=B T E Equation 32
ZΔ=H Equation 33
ZΔ=H Equation 34
Ċ (p) =Ċ (p-1) −Ċ (0) Equation 51
{umlaut over (C)} (p) ={umlaut over (C)} (p-1) −{umlaut over (C)} (0) Equation 52
Δ(p) =Z −1 H Equation 53
|R (p) −R (p-1)|<δ Equation 56
ndof=q−7m Equation 58
ZΔ=H Equation 59
Ż={dot over (N)}+{dot over (W)} Equation 61
{umlaut over (Z)}={umlaut over (N)}+{umlaut over (W)} Equation 63
{dot over (H)}={dot over (K)}−{dot over (W)}Ċ Equation 64
{umlaut over (H)}={umlaut over (K)}−{umlaut over (W)}{umlaut over (C)} Equation 65
[A−BD −1 C][a]=[c−BD −1 d] Equation 67
- [Ż−Z̄{umlaut over (Z)}−1Z̄T][{dot over (Δ)}]=[Ḣ−Z̄{umlaut over (Z)}−1Ḧ] Equation 68
M{dot over (Δ)}=C Equation 69
{dot over (Δ)}=M −1 C Equation 77
of {umlaut over (Σ)}. With n as the number of ground points and m as the number of images,
{umlaut over (Σ)}r,c=δ(r,c){umlaut over (Z)} c −1 +{umlaut over (Z)} r −1
T r,c=Σt∈I
{umlaut over (Σ)}r,c=δ(r,c){umlaut over (Z)} c −1 +{umlaut over (Z)} r −1[Σs∈I
{umlaut over (Z)} r,c=[σ(p)]2δ(r,c){umlaut over (Z)} c −1 +{umlaut over (Z)} r −1[Σt∈I
-
- 1. Values for trig functions sω, cω, sϕ, cϕ, sκ, cκ as given in Equation 4.
- 2. Three 3×3 partial derivative matrices of T with respect to the three angles as given in the following Equations 95-97:
-
- 3. Rotation matrix Ti as given in Equation 4.
- 4. 3×1 vector
V i
-
- 1. {initializeData} Store nominal initial values for the solved parameters for each image i∈{0, . . . , m−1}. The initial values for
V i (0) and θi (0).
- 2. {initializeData} Compute
- a. Initial ground point coordinates via Equation 11 for each ground point j∈{0, . . . , n−1}. These form {umlaut over (C)}j (0)
- b. initial image cache data as described previously.
- 3. {outputIterationReport} Output initial iteration report (input parameters, initial ground point and ground point observation coordinates.)
- 4. {initializeMatrices} Block partition the {dot over (N)} portion of Z into m subblocks, each of size 6×6.
- 5. {initializeMatrices} Block partition the {umlaut over (N)} portion of Z into n subblocks, each of
size 3×3. - 6. {initializeMatrices} Block partition H similarly
- 7. {computeAndStoreDiscrepancies} For each ground point index j∈{0, . . . , n−1} and each observation of that ground point i∈Oj, compute the discrepancy vector given in Equation 29 as follows.
- for j∈{0, . . . , n−1}
- {
- Fetch the most recent ground point position {circumflex over (V)}j.
- for i∈Ij and observation index b∈Oj
- {
- a) Retrieve image-cached values for Ti and
V i - b) Retrieve ground point observation {tilde over (V)}ij={tilde over (V)}b
- c) Apply the observation equation to obtain the projected value for {circumflex over (V)}j.
V ij=(1+s i)T i({circumflex over (V)} j −V i) Equation 99
- d) Compute and store the discrepancy vector for observation b as in Equation 29
- εb≡εij =−[{tilde over (V)} ij −V ij] Equation 100
- }//end for i
- }//end for j
- 8. Compute the standard error of unit weight as in Equation 57.
- 9. {buildNormalMatrices+initializeWeights} Zero Z and H and initialize Z (and likewise H) with the block weight matrices on the diagonal. This involves setting the blocks of Z to the subblocks of {dot over (W)} and {umlaut over (W)}, and setting the subblocks (subrows) of H to −{dot over (W)}Ċ and −{umlaut over (W)}{umlaut over (C)}.
- 10. {sumInPartials} Loop over ground points and images containing the ground points and sum in the contributions of the {dot over (B)} and {umlaut over (B)} matrices into Z and H.
- for j∈{0, . . . , n−1}
{
for i ∈ Ij
{
Retrieve εb ≡ εij as computed in Equation 100.
Compute {dot over (B)}ij and {umlaut over (B)}ij as in Equations 101 and 102.
Retrieve observation weight matrix {tilde over (W)}ij
{dot over (N)}i : Sum {dot over (B)}ij T{tilde over (W)}ij{dot over (B)}ij into Z.block(i, i)
{umlaut over (N)}j : Sum {umlaut over (B)}ij T{tilde over (W)}ij{umlaut over (B)}ij into Z.block(m + j, m + j)
N̄ij : Sum {dot over (B)}ij T{tilde over (W)}ij{umlaut over (B)}ij into Z.block(i, m + j)
Ċi : Sum {dot over (B)}ij T{tilde over (W)}ijεij into H.block(i, 0)
{umlaut over (C)}j : Sum {umlaut over (B)}ij T{tilde over (W)}ijεij into H.block(m + j, 0)
} //end i
} // end j
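The accumulation pattern of the loop above can be sketched as follows. This is a hypothetical illustration (the helper names and dict-of-blocks layout are assumptions): each observation's weighted partials fold into the image diagonal block, the ground-point diagonal block, the cross block, and the two right-hand-side subrows.

```python
# Sketch of the sumInPartials accumulation into the normal system Z, H.
# Blocks are kept in a dict keyed by (row, col); matrix helpers are minimal.

def mat_t(a):
    return [list(col) for col in zip(*a)]

def mat_mul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def mat_add_into(blocks, key, m):
    if key not in blocks:
        blocks[key] = [[0.0] * len(m[0]) for _ in m]
    for r, row in enumerate(m):
        for c, v in enumerate(row):
            blocks[key][r][c] += v

def sum_in_partials(z, h, i, j, m_images, b_dot, b_ddot, w, eps):
    bt_w = mat_mul(mat_t(b_dot), w)
    bddt_w = mat_mul(mat_t(b_ddot), w)
    mat_add_into(z, (i, i), mat_mul(bt_w, b_dot))                           # Ndot_i
    mat_add_into(z, (m_images + j, m_images + j), mat_mul(bddt_w, b_ddot))  # Nddot_j
    mat_add_into(z, (i, m_images + j), mat_mul(bt_w, b_ddot))               # cross block
    mat_add_into(h, (i, 0), mat_mul(bt_w, eps))                             # Cdot_i
    mat_add_into(h, (m_images + j, 0), mat_mul(bddt_w, eps))                # Cddot_j
```

Because the normal matrix is symmetric, only the upper-triangle blocks need to be accumulated, mirroring step 11's note.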
-
- 11. {solveNormalEquation} Form the lower transpose of the Z matrix and solve the system Δ=Z−1 H. Note that the normal equation system is a symmetric system (e.g., the normal matrix Z is symmetric). Thus, a symmetric system solver can be used. In the case of a symmetric system solver, it may not be necessary to form the lower triangle.
- 12. {updateParameters} Update all the parameters by extracting the corrections from the Δ matrix as in Equations 51 and 52,
- 13. If (p)≠0 compare with the previous root mean square (RMS) of residuals and check for convergence. The convergence condition can be
|R (p) −R (p-1)|<δ Equation 103 - 14. {computePostConvergenceResiduals+checkForBlunders} If convergence has been reached, perform automatic blunder editing. If convergence has not been reached, increment the iteration index
(p)←(p+1)Equation 104- and go to step 7.
- 1. {initializeData} Store nominal initial values for the solved parameters for each image i∈{0, . . . , m−1}. The initial values for
V i (0) and θi (0) can also be set along with Ċi (0) as in Equation 98. - 2. {initializeData} Compute
- a. Initial ground point coordinates via Equation 11 for each ground point j∈{0, . . . , n−1}. These form {umlaut over (C)}j (0).
- b. Initial image cache data as described above.
- 3. {outputIterationReport} Output initial iteration report (input parameters, initial ground point, and ground point observation coordinates).
- 4. {initializeMatrices} Block partition the reduced normal matrix M into m subblocks, each of size 6×6. Block partition the reduced column vector C similarly
- 5. {computeAndStoreDiscrepancies} For each ground point index j∈{0, . . . , n−1} and each observation of that ground point i∈Oj, compute the discrepancy vector εij given in Equation 29 as:
- for j∈{0, . . . , n−1}
- {
- Fetch the most recent ground point position {circumflex over (V)}j.
- for i∈Ij and observation index b∈Oj
- {
- a) Retrieve image-cached values for Ti and
V i- b) Retrieve ground point observation {tilde over (V)}ij={tilde over (V)}b
- c) Apply the observation equation to obtain the projected value for {circumflex over (V)}j from Equation 99.
- d) Compute and store the discrepancy vector for observation b as in
Equation 100- }//end for i
- }//end for j
- 6. Compute the standard error of unit weight as in Equation 57.
- 7. {buildNormalMatrices}
- a. Zero the reduced matrices M and C
- b. {initializeWeights} Initialize M (and likewise C) with the block weight matrices on the diagonal. This involves setting the blocks of M to the subblocks of {dot over (W)} and setting the subblocks (subrows) of C to −{dot over (W)}Ċ.
- c. {sumInPartialsAndFoldGPs} Form the main diagonal and ground point matrices {umlaut over (Z)}j by iterating over ground points. Perform folding for ground point j
- for j∈{0, . . . , n−1}
- {
- PRIMING:
- Store {umlaut over (W)}j into {umlaut over (Z)}j of GPWS
- Store −{umlaut over (W)}j{umlaut over (C)}j into {umlaut over (H)}j of GPWS
- for i∈Ij (where Ij is set of image indexes upon which GP j is an observation)
- {
- Form partial derivatives:
- Build {dot over (B)}ij as in Equation 101
- Build {umlaut over (B)}ij as in Equation 102
- Retrieve discrepancy vector εij as computed in
Equation 100. - Retrieve observation weight matrix {tilde over (W)}ij
- Sum in contribution of GP j's obs in image i within M:
- Sum {dot over (B)}ij T{tilde over (W)}ij{dot over (B)}ij into M·block(i,i)
- Sum {dot over (B)}ij T{tilde over (W)}ijεij into C·block(i,0)
- Sum in i's contribution to {umlaut over (Z)}j and {umlaut over (H)}j.
- Sum {umlaut over (B)}ij T{tilde over (W)}ij{umlaut over (B)}ij into {umlaut over (Z)}j
- Sum {umlaut over (B)}ij T{tilde over (W)}ijεij into {umlaut over (H)}j
- }//end i
- Invert {umlaut over (Z)}j and store into GPWS as {umlaut over (Z)}j −1
- FOLDING INTO M (note: iteration loop over j is still valid)
- for r∈Ij
- {
- Form
Z̄rj={dot over (B)}rj T{tilde over (W)}rj{umlaut over (B)}rj as in Equations 69 and 47
- {
- Form
Z cj T - Sum in −{umlaut over (Z)}rj{umlaut over (Z)}j −1{umlaut over (Z)}cj T into M·block(r, c).
- Form
- }//end c
- Sum in −
Z rj{umlaut over (Z)}j −1{umlaut over (H)}j into C·block(r, 0).
- Form
- }//end r
- }//end j
- 8. Complete the lower diagonal entries of M and solve Δ=M−1C. As in the full normal equation solution, note that M is symmetric and thus a symmetric system solver is in order.
- 9. First use pseudocode provided below to compute corrections to ground points. Then update all the parameters from the Δ vector.
- 10. If p≠0, compare R^(p) with the previous RMS of residuals R^(p−1) and check for convergence. The convergence condition is
|R^(p) − R^(p−1)| < ε  Equation 105
- 11. {computePostConvergenceResiduals+checkForBlunders} If convergence has been reached, perform automatic blunder editing as detailed elsewhere. After there are no more blunders, proceed with error propagation {propagateErrors}. If convergence has not been reached, increment the iteration index as in Equation 104 and go to step 5.
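Steps 8 and 10 amount to a symmetric solve plus an RMS-difference convergence test. A minimal sketch, with an illustrative positive-definite system and tolerance (not the patent's values):

```python
import numpy as np

def solve_symmetric(M, C):
    """Solve M Δ = C exploiting symmetry via a Cholesky factorization M = L Lᵀ."""
    L = np.linalg.cholesky(M)        # requires M symmetric positive definite
    y = np.linalg.solve(L, C)        # forward substitution: L y = C
    return np.linalg.solve(L.T, y)   # back substitution:    Lᵀ Δ = y

def converged(rms_history, eps=1e-8):
    """Equation 105: |R(p) − R(p−1)| < ε; testable only once two iterations exist."""
    return len(rms_history) >= 2 and abs(rms_history[-1] - rms_history[-2]) < eps

# toy symmetric positive-definite system
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
M = A @ A.T + 5 * np.eye(5)
C = rng.standard_normal(5)
delta = solve_symmetric(M, C)
```

A dedicated symmetric solver halves the factorization cost relative to a general LU solve, which is why the text calls for one.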
-
- {unfoldGroundPoints}
- for j∈{0, . . . , n−1}
- {
- Retrieve Z̈_j⁻¹ and Ḧ_j from GPWS
- Store Ḧ_j into a new subtrahend matrix S (i.e., initialize S to Ḧ_j)
- for r∈Ij
- {
- Form Z̄_rj
- Sum −Z̄_rjᵀ Δ̇_r into S
- } //end r
- Compute Δ̈_j = Z̈_j⁻¹ S
- } //end j
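The unfold step back-substitutes the solved image corrections Δ̇ to recover each ground-point correction Δ̈_j = Z̈_j⁻¹(Ḧ_j − Σ_r Z̄_rjᵀ Δ̇_r). A sketch under the same illustrative shapes (P image parameters per image, 3 coordinates per ground point); the function name and argument layout are assumptions:

```python
import numpy as np

def unfold_ground_point(Zdd_inv, Hdd, Zbar, delta_dot, Ij, P=6):
    """Recover the correction Δ̈_j for one ground point by back-substitution.

    Zdd_inv   : (3, 3)  stored Z̈_j⁻¹ from the ground point workspace
    Hdd       : (3,)    stored Ḧ_j
    Zbar      : dict r -> (P, 3) folding matrix Z̄_rj
    delta_dot : (m*P,)  stacked image-parameter corrections Δ̇
    """
    S = Hdd.copy()                                   # initialize subtrahend S to Ḧ_j
    for r in Ij:
        S -= Zbar[r].T @ delta_dot[r*P:(r+1)*P]      # sum −Z̄_rjᵀ Δ̇_r into S
    return Zdd_inv @ S                               # Δ̈_j = Z̈_j⁻¹ S
```

Because Z̈_j⁻¹ and Ḧ_j were cached during folding, each ground point is recovered with a handful of small 3×P products rather than another large solve.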
-
- {relativeGroundPointCov}
- Retrieve Z̈_r⁻¹ and Z̈_c⁻¹ from the ground point workspace
- Obtain indexing sets Ir and Ic (image indexes of ground points r and c)
- Allocate matrix Σ̈_r,c and initialize to zero. This is the output of this function.
- Allocate matrix P and initialize to zero
- for t∈Ic
- {
- Allocate matrix Q and initialize to zero
- for s∈Ir
- {
- Form Z̄_sr (note that for the reduced system, …)
- Extract Σ̇_st = Σ̇.getBlock(s, t), where Σ̇ is as defined in Equation 93
- Set Q ← Q + Z̄_srᵀ Σ̇_st
- } //end s
- Set P ← P + Q Z̄_tc
- } //end t
- Compute Σ̈_r,c = Z̈_r⁻¹ P Z̈_c⁻¹
- if (r = c)
- {
- Set Σ̈_r,c ← [σ(p)]² Z̈_c⁻¹ + Σ̈_r,c
- }
- return Σ̈_r,c
- //end {relativeGroundPointCov}
The full ground point covariance matrix Σ̈ may be obtained by invoking the method for r∈{1, 2, . . . , n} and for c∈{r, r+1, . . . , n}. Note that the indexing for c starts with r since the full ground covariance matrix is symmetric (i.e., build the upper triangle of Σ̈ and "reflect about the diagonal" to obtain the lower symmetric portion).
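The covariance routine above evaluates Σ̈_{r,c} = Z̈_r⁻¹ (Σ_{t∈Ic} Σ_{s∈Ir} Z̄_srᵀ Σ̇_st Z̄_tc) Z̈_c⁻¹, adding [σ(p)]² Z̈_c⁻¹ on the diagonal. A NumPy sketch; the function name, argument layout, and shapes (P image parameters per image) are illustrative assumptions:

```python
import numpy as np

def relative_ground_point_cov(r, c, Zdd_inv, Zbar, Sigma_dot, Ir, Ic, P=6, sigma_p=1.0):
    """Relative 3x3 covariance between ground points r and c.

    Zdd_inv   : dict j -> (3, 3) stored Z̈_j⁻¹ (ground point workspace)
    Zbar      : dict (s, j) -> (P, 3) folding matrix Z̄_sj
    Sigma_dot : (m*P, m*P) image-parameter covariance Σ̇ (Equation 93)
    """
    P_mat = np.zeros((3, 3))                               # accumulator P
    for t in Ic:
        Q = np.zeros((3, P))                               # accumulator Q
        for s in Ir:
            Sdot_st = Sigma_dot[s*P:(s+1)*P, t*P:(t+1)*P]  # Σ̇.getBlock(s, t)
            Q += Zbar[(s, r)].T @ Sdot_st                  # Q ← Q + Z̄_srᵀ Σ̇_st
        P_mat += Q @ Zbar[(t, c)]                          # P ← P + Q Z̄_tc
    cov = Zdd_inv[r] @ P_mat @ Zdd_inv[c]                  # Σ̈_{r,c} = Z̈_r⁻¹ P Z̈_c⁻¹
    if r == c:
        cov = sigma_p**2 * Zdd_inv[c] + cov                # diagonal: add [σ(p)]² Z̈_c⁻¹
    return cov
```

With Σ̇ symmetric, Σ̈_{r,c}ᵀ = Σ̈_{c,r}, which is exactly why only the upper triangle of the full ground covariance needs to be computed.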
V_reg^W = Compensate_i(V_misreg^W)
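The compensation maps misregistered world-frame vertices to registered ones. A minimal sketch assuming the correction is a rigid transform (rotation R plus translation t); the actual Compensate_i model is defined by the patent's earlier equations, so this is illustrative only:

```python
import numpy as np

def compensate(V_misreg, R, t):
    """Apply a rigid correction to misregistered world-frame vertices (N x 3).

    Row-vector convention: v_reg = R v_misreg + t for each vertex row.
    """
    return V_misreg @ R.T + t

# usage: a pure translation shifts every vertex without changing shape
V = np.array([[0., 0., 0.], [1., 0., 0.]])
V_reg = compensate(V, np.eye(3), np.array([0., 0., 2.]))
```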
Claims (18)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/171,544 US11964762B2 (en) | 2020-02-11 | 2021-02-09 | Collaborative 3D mapping and surface registration |
CA3108547A CA3108547A1 (en) | 2020-02-11 | 2021-02-10 | Collaborative 3d mapping and surface registration |
EP21709843.3A EP4104143A1 (en) | 2020-02-11 | 2021-02-10 | Collaborative 3d mapping and surface registration |
AU2021200832A AU2021200832A1 (en) | 2020-02-11 | 2021-02-10 | Collaborative 3d mapping and surface registration |
PCT/US2021/017410 WO2021163157A1 (en) | 2020-02-11 | 2021-02-10 | Collaborative 3d mapping and surface registration |
TW110105355A TWI820395B (en) | 2020-02-11 | 2021-02-17 | Method for generating three-dimensional(3d) point cloud of object, system for 3d point set generation and registration, and related machine-readable medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202062975016P | 2020-02-11 | 2020-02-11 | |
US17/171,544 US11964762B2 (en) | 2020-02-11 | 2021-02-09 | Collaborative 3D mapping and surface registration |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210256722A1 US20210256722A1 (en) | 2021-08-19 |
US11964762B2 true US11964762B2 (en) | 2024-04-23 |
Family
ID=77272114
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/171,544 Active 2042-02-17 US11964762B2 (en) | 2020-02-11 | 2021-02-09 | Collaborative 3D mapping and surface registration |
Country Status (5)
Country | Link |
---|---|
US (1) | US11964762B2 (en) |
EP (1) | EP4104143A1 (en) |
AU (1) | AU2021200832A1 (en) |
TW (1) | TWI820395B (en) |
WO (1) | WO2021163157A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021113147A1 (en) * | 2019-12-04 | 2021-06-10 | Waymo Llc | Efficient algorithm for projecting world points to a rolling shutter image |
US11964762B2 (en) | 2020-02-11 | 2024-04-23 | Raytheon Company | Collaborative 3D mapping and surface registration |
US11288493B2 (en) * | 2020-02-25 | 2022-03-29 | Raytheon Company | Point cloud registration with error propagation |
US20210407302A1 (en) * | 2020-06-30 | 2021-12-30 | Sony Group Corporation | System of multi-drone visual content capturing |
US20220067768A1 (en) * | 2020-08-28 | 2022-03-03 | Telenav, Inc. | Navigation system with high definition mapping mechanism and method of operation thereof |
CN113689471B (en) * | 2021-09-09 | 2023-08-18 | 中国联合网络通信集团有限公司 | Target tracking method, device, computer equipment and storage medium |
US20240078914A1 (en) * | 2022-09-05 | 2024-03-07 | Southwest Research Institute | Navigation System for Unmanned Aircraft in Unknown Environments |
CN116165677B (en) * | 2023-04-24 | 2023-07-21 | 湖北中图勘测规划设计有限公司 | Geological investigation method and device based on laser radar |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050031197A1 (en) * | 2000-10-04 | 2005-02-10 | Knopp David E. | Method and apparatus for producing digital orthophotos using sparse stereo configurations and external models |
US20130135440A1 (en) * | 2011-11-24 | 2013-05-30 | Kabushiki Kaisha Topcon | Aerial Photograph Image Pickup Method And Aerial Photograph Image Pickup Apparatus |
US20150262335A1 (en) * | 2013-03-15 | 2015-09-17 | Digitalglobe, Inc | Automated geospatial image mosaic generation with automatic source selection |
US20170084037A1 (en) | 2015-09-17 | 2017-03-23 | Skycatch, Inc. | Generating georeference information for aerial images |
CN107274380A (en) | 2017-07-07 | 2017-10-20 | 北京大学 | A kind of quick joining method of unmanned plane multispectral image |
CN108876828A (en) | 2018-04-12 | 2018-11-23 | 南安市创培电子科技有限公司 | A kind of unmanned plane image batch processing three-dimensional rebuilding method |
US20190285412A1 (en) * | 2016-11-03 | 2019-09-19 | Datumate Ltd. | System and method for automatically acquiring two-dimensional images and three-dimensional point cloud data of a field to be surveyed |
US20200043195A1 (en) * | 2017-05-16 | 2020-02-06 | Fujifilm Corporation | Image generation apparatus, image generation system, image generation method, and image generation program |
WO2021163157A1 (en) | 2020-02-11 | 2021-08-19 | Raytheon Company | Collaborative 3d mapping and surface registration |
US20210358315A1 (en) * | 2017-01-13 | 2021-11-18 | Skydio, Inc. | Unmanned aerial vehicle visual point cloud navigation |
-
2021
- 2021-02-09 US US17/171,544 patent/US11964762B2/en active Active
- 2021-02-10 WO PCT/US2021/017410 patent/WO2021163157A1/en unknown
- 2021-02-10 AU AU2021200832A patent/AU2021200832A1/en active Pending
- 2021-02-10 EP EP21709843.3A patent/EP4104143A1/en active Pending
- 2021-02-17 TW TW110105355A patent/TWI820395B/en active
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050031197A1 (en) * | 2000-10-04 | 2005-02-10 | Knopp David E. | Method and apparatus for producing digital orthophotos using sparse stereo configurations and external models |
US20130135440A1 (en) * | 2011-11-24 | 2013-05-30 | Kabushiki Kaisha Topcon | Aerial Photograph Image Pickup Method And Aerial Photograph Image Pickup Apparatus |
US20150262335A1 (en) * | 2013-03-15 | 2015-09-17 | Digitalglobe, Inc | Automated geospatial image mosaic generation with automatic source selection |
US20170084037A1 (en) | 2015-09-17 | 2017-03-23 | Skycatch, Inc. | Generating georeference information for aerial images |
US20180040137A1 (en) * | 2015-09-17 | 2018-02-08 | Skycatch, Inc. | Generating georeference information for aerial images |
US20190285412A1 (en) * | 2016-11-03 | 2019-09-19 | Datumate Ltd. | System and method for automatically acquiring two-dimensional images and three-dimensional point cloud data of a field to be surveyed |
US20210358315A1 (en) * | 2017-01-13 | 2021-11-18 | Skydio, Inc. | Unmanned aerial vehicle visual point cloud navigation |
US20200043195A1 (en) * | 2017-05-16 | 2020-02-06 | Fujifilm Corporation | Image generation apparatus, image generation system, image generation method, and image generation program |
CN107274380A (en) | 2017-07-07 | 2017-10-20 | 北京大学 | A kind of quick joining method of unmanned plane multispectral image |
CN108876828A (en) | 2018-04-12 | 2018-11-23 | 南安市创培电子科技有限公司 | A kind of unmanned plane image batch processing three-dimensional rebuilding method |
WO2021163157A1 (en) | 2020-02-11 | 2021-08-19 | Raytheon Company | Collaborative 3d mapping and surface registration |
TW202214487A (en) | 2020-02-11 | 2022-04-16 | 美商雷神公司 | Collaborative 3d mapping and surface registration |
TWI820395B (en) | 2020-02-11 | 2023-11-01 | 美商雷神公司 | Method for generating three-dimensional(3d) point cloud of object, system for 3d point set generation and registration, and related machine-readable medium |
Non-Patent Citations (12)
Also Published As
Publication number | Publication date |
---|---|
TWI820395B (en) | 2023-11-01 |
US20210256722A1 (en) | 2021-08-19 |
AU2021200832A1 (en) | 2021-08-26 |
TW202214487A (en) | 2022-04-16 |
WO2021163157A1 (en) | 2021-08-19 |
EP4104143A1 (en) | 2022-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11964762B2 (en) | Collaborative 3D mapping and surface registration | |
Liu et al. | Balm: Bundle adjustment for lidar mapping | |
Zhang et al. | Laser–visual–inertial odometry and mapping with high robustness and low drift | |
Zhang et al. | Low-drift and real-time lidar odometry and mapping | |
EP2423871B1 (en) | Apparatus and method for generating an overview image of a plurality of images using an accuracy information | |
EP2423873A1 (en) | Apparatus and Method for Generating an Overview Image of a Plurality of Images Using a Reference Plane | |
Eynard et al. | Real time UAV altitude, attitude and motion estimation from hybrid stereovision | |
He et al. | Automated relative orientation of UAV-based imagery in the presence of prior information for the flight trajectory | |
Warren et al. | Long-range stereo visual odometry for extended altitude flight of unmanned aerial vehicles | |
Jin et al. | An Indoor Location‐Based Positioning System Using Stereo Vision with the Drone Camera | |
Liu et al. | A novel adjustment model for mosaicking low-overlap sweeping images | |
Delaune et al. | Visual–inertial navigation for pinpoint planetary landing using scale-based landmark matching | |
CA3108547A1 (en) | Collaborative 3d mapping and surface registration | |
Zhang et al. | Visual–inertial combined odometry system for aerial vehicles | |
Mansur et al. | Real time monocular visual odometry using optical flow: study on navigation of quadrotors UAV | |
Zhao et al. | Digital Elevation Model‐Assisted Aerial Triangulation Method On An Unmanned Aerial Vehicle Sweeping Camera System | |
KR20210009019A (en) | System for determining position and attitude of camera using the inner product of vectors and three-dimensional coordinate transformation | |
Zhang et al. | INS assisted monocular visual odometry for aerial vehicles | |
US11288493B2 (en) | Point cloud registration with error propagation | |
Walvoord et al. | Geoaccurate three-dimensional reconstruction via image-based geometry | |
Liu et al. | Adaptive re-weighted block adjustment for multi-coverage satellite stereo images without ground control points | |
Deng et al. | Measurement model and observability analysis for optical flow-aided inertial navigation | |
He | 3d reconstruction from passive sensors | |
Kim | Absolute Position/Orientation Estimation and Calibration Models Without Initial Information for Smart City Sensors | |
Shao et al. | Stable estimation of horizontal velocity for planetary lander with motion constraints |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
AS | Assignment |
Owner name: RAYTHEON COMPANY, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:STAAB, TORSTEN A.;SEIDA, STEVEN B.;VERRET, JODY D.;AND OTHERS;SIGNING DATES FROM 20210212 TO 20210329;REEL/FRAME:055793/0591 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
ZAAB | Notice of allowance mailed |
Free format text: ORIGINAL CODE: MN/=. |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |