IL303469B1 - A method for revealing the surface of dense forests using remote sensing - Google Patents
A method for revealing the surface of dense forests using remote sensing
- Publication number
- IL303469B1 (application IL303469A)
- Authority
- IL
- Israel
- Prior art keywords
- captured
- images
- canopy
- overlapping images
- parameters
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/188—Vegetation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—Three-dimensional [3D] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three-dimensional [3D] modelling for computer graphics
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Geometry (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Astronomy & Astrophysics (AREA)
- Image Processing (AREA)
Description
A METHOD FOR REVEALING THE SURFACE OF DENSE FORESTS USING REMOTE SENSING

Field of the Invention

The invention is from the field of remote sensing. In particular it relates to remote sensing using aerial platforms to gather images of a geographical area. More particularly it relates to a method of imaging the terrain under densely forested areas.

Background of the Invention

Publications and other reference materials referred to herein are numerically referenced in the following text and respectively grouped in the appended Bibliography which immediately precedes the claims.

Traditional imaging methods often fail to provide accurate information about the terrain underneath the thick canopy of trees in densely forested areas. Accurate information about the surface below the forest canopy can be helpful for a number of applications, including: creating accurate maps of the forest; monitoring changes in the forest over time; supporting conservation efforts by providing detailed information about the distribution and health of the vegetation; aiding search and rescue operations by providing detailed information about the terrain and potential obstacles, allowing rescue teams to navigate through dense forests more efficiently and locate missing people faster; and monitoring wildlife in forests, providing detailed information about their movements and habitat use.

It is therefore a purpose of the present invention to provide a method for revealing the surface of forests. Further purposes and advantages of this invention will appear as the description proceeds.

Summary of the Invention

The invention is a method for constructing under-the-canopy terrain imagery. The method comprises the steps:
a. capturing overlapping images to cover an area-of-interest from an imaging sensor on an aerial platform;
b. performing bundle adjustment utilizing parameters of said imaging sensor at any given point of capturing the images;
c. training a neural network model for implicit volumetric rendering, wherein said model is configured to generate simulated views from any point of said area-of-interest; and
d. generating terrain images by discarding occluding objects above ground level from the scene.

In embodiments of the method the overlapping images are captured using high overlap and density, wherein the density of the images is dependent on the density of the canopy.

In embodiments of the method the overlapping images are captured using a wide angular range of pixels from the nadir and oblique angles.

In embodiments of the method the overlapping images are captured during the twilight hours or under full cloud cover.

In embodiments of the method the overlapping images are captured using a high ground sample distance.

In embodiments of the method the overlapping images are captured under the same lighting conditions.

In embodiments of the method at least some of the overlapping images are captured at different altitudes.

In embodiments of the method at least some of the overlapping images are captured using different cameras.

In embodiments of the method at least some of the overlapping images are captured using cameras operating in different wavelength bands.

In embodiments of the method at least some of the overlapping images are captured using cameras operating in at least one of the following wavelength bands: middle infrared, near infrared, and visible.
In embodiments of the method at least some of the overlapping images are captured by two or more aerial platforms on the same day.

In embodiments of the method the overlapping images are captured by one or more aerial platforms on different days.

In embodiments of the method the aerial platform is a drone.

In embodiments of the method the parameters used in the bundle adjustment step include at least one of the following:
a) location of the aerial platform including: longitude, latitude, and altitude;
b) angles of the camera including: azimuth, pitch, and roll;
c) properties of the camera and lens including: focal length and image center; and
d) scene parameters including the three dimensional coordinates of objects in the scene that are observed in the images.

In embodiments of the method bundle adjustment is an iterative optimization process that involves refining the estimates of camera and scene parameters by minimizing the re-projection error.

In embodiments of the method the iterative optimization process uses one of the Levenberg-Marquardt algorithm or the Gauss-Newton algorithm.

In embodiments of the method training the model is done using implicit volumetric rendering.

In embodiments of the method the trained model is queried for views of the terrain below the canopy by invoking the rendering process only below a certain altitude.

In embodiments of the method the trained model is queried for views of the terrain below the canopy by a two-stage approach of obtaining a mask of the object/s to be removed followed by deletion of the object/s based on the mask. In these embodiments the mask can be obtained by usage of infrared channels in conjunction with the visible spectrum. In these embodiments the model can be trained on multiple color channels originating from different cameras.

In embodiments of the method the canopy is formed by trees in a forest, wood, or orchard.

In embodiments of the method the canopy is formed by agricultural crops.

In embodiments of the method the canopy is formed by a camouflaging or shade cover.

All the above and other characteristics and advantages of the invention will be further understood through the following illustrative and non-limitative description of embodiments thereof, with reference to the appended drawings.

Brief Description of the Drawings

Fig. 1 is an aerial photograph showing a dense oak forest;
Fig. 2 shows a simulated RGB image of the illumination with light conditions simulating a 90 degree sun angle above the horizon;
Fig. 3 shows a simulated RGB image of the illumination with light conditions simulating a 5 degree sun angle above the horizon;
Fig. 4 schematically illustrates bundle adjustment;
Fig. 5 schematically illustrates the technique of implicit volumetric rendering;
Figs. 6A to 6C show an example of using masks to remove the trees from an image during the NeRF training process;
Figs. 7A and 7B show peeling of the trees from a section of forest using the MVS method;
Figs. 8A and 8B show peeling of the trees from the same section of the forest using the presently disclosed method;
Figs. 9A and 9B show peeling of the trees in the MIR band using the SAP method;
Figs. 10A and 10B show peeling of the trees in the MIR band from the same section of the forest using the presently disclosed method;
Fig. 11 is an aerial photograph showing an area of interest;
Fig. 12 shows the locations of the flight lines overlaid on the aerial photograph of the area of interest;
Fig. 13 is a single image photographed with the optimal light conditions;
Fig. 14 shows the corresponding points between neighboring frames;
Fig. 15 shows the Agisoft bundle report;
Fig. 16 shows the entire area of interest after peeling the trees; and
Fig. 17 is a zoom-in to a small area before and after the peeling process.

Detailed Description of Embodiments of the Invention

Disclosed herein is a method for revealing the surface of forests using regular RGB remote sensing. In some examples other spectral bands, like infrared, which can be used for vegetation monitoring, can also be used for revealing the surface of forests. As opposed to synthetic aperture photography (SAP), which is only applied to a relatively small area in which it is suspected that a target is present, the disclosed method involves generating a "ground pixel mosaic" that can cover a wide area, in principle unlimited in size, by using a series of overlapping images captured by a drone.

The method includes four phases:
a. Image capturing: Images are captured using a drone, following guidelines to be described herein below for increasing the chances of penetrating the tree canopy.
b. Bundle adjustment: This phase involves solving for the accurate parameters, e.g. camera positions, focal lengths, and lens distortions, of the images that will be used in the next phase. The aim of this step is to obtain a highly precise presentation of the scene, as will be described in more detail herein below.
c. Implicit volumetric rendering: This phase involves training a model to generate novel views of the scene without the occluding vegetation.
d. Once the model is trained, it can be queried for views revealing a ground pixel mosaic that does not include vegetation. The ground mosaic can be exploited to detect objects and land-type segments.

The method allows for further classification and analysis of the terrain, providing valuable information about the vegetation, topography, and other features of the forest that can be applied to all of the applications listed in the background section of this application. The phases of the method will now be described in more detail.
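Before the detailed description of each phase, the overall flow can be outlined in code. The following Python skeleton is a minimal sketch of the four-phase pipeline; every function name in it is a hypothetical placeholder for illustration, not part of the patented implementation:

```python
# Minimal sketch of the four-phase pipeline. All helper names are
# hypothetical placeholders; each phase is detailed in the text below.

def bundle_adjust(images, initial_params):
    """Phase 2: refine camera poses/intrinsics by minimizing re-projection error."""
    raise NotImplementedError  # sketched concretely in the second-phase section

def train_implicit_volume(images, refined_params):
    """Phase 3: fit a NeRF-style model that re-renders the captured views."""
    raise NotImplementedError

def render_ground_mosaic(model, area_of_interest):
    """Phase 4: query ground-only views, discarding volume above the terrain."""
    raise NotImplementedError

def reveal_forest_surface(images, initial_params, area_of_interest):
    """Captured images in, under-the-canopy ground pixel mosaic out."""
    refined = bundle_adjust(images, initial_params)        # phase 2
    model = train_implicit_volume(images, refined)         # phase 3
    return render_ground_mosaic(model, area_of_interest)   # phase 4
```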
First phase: The first phase of the method involves capturing images using a drone flying repeated parallel passes over the section of forest being investigated. The inventors have identified certain guidelines for image capturing that should be followed to increase the chances of penetrating the tree canopy and seeing the surface underneath. These guidelines include:

High image overlaps and density: For forest surface mapping projects, both the forward and side density of images and their corresponding footprint overlap need to be considered for effective tree penetration. The necessary density of images is dependent on the density of the forest and is expressed in meters. As an example, Fig. 1 is an aerial photograph showing a dense oak forest that the inventors selected for a test of the method. In this example, they found that a forward density of an image every 2 meters with a 95% footprint overlap and a side density of every 25 meters with 90% footprint overlap effectively reconstructed the forest ground surface. The approach of using two parameters to determine the mission plan, i.e. footprint overlap and camera density, avoids the variability introduced by using just footprint overlap, which is usually the common parameter for planning an aerial photography mapping project but can vary between cameras and flight altitudes.

A wide angular range of pixels from the nadir and oblique angles: The nadir angle is the angle directly below the drone, and is the most important for tree penetration. Oblique angles are those that are not directly below the drone. They provide additional information about the terrain by increasing the penetration and accuracy of the model. Capturing the images with high image density and overlap should produce many nadir views.

Capturing images during the twilight hours or under full cloud cover: This guideline is necessary since sharp light changes between shaded areas and open areas exposed to the sun can interrupt the ground pixel restoration process. This is shown in Fig. 2, which shows a simulated RGB image of the illumination with light conditions simulating a 90 degree sun angle above the horizon. To avoid this, the inventors recommend capturing images during the twilight hours or under full cloud cover, which reduces the amount of direct sunlight in the images. The effect of following this recommendation is seen in Fig. 3, which shows a simulated RGB image of the illumination with light conditions simulating a 5 degree sun angle above the horizon.

High GSD resolution: GSD, or ground sample distance, is a measure of the spatial resolution of the images. A finer (numerically smaller) GSD means that each pixel in the image represents a smaller area on the ground, providing more detailed information about the terrain. A fine GSD also provides a higher potential for penetrating the forest canopy, as the pixel size will be smaller than a typical leaf or the gaps between leaves. The inventors conducted a test using simulation and were able to reconstruct the ground surface using images with GSDs ranging from 1 cm to 4 cm.

Additional conditions are:
All images should be taken under the same lighting conditions.
It is not necessary that all images are taken from the same altitude, although this is the more common practice.
It is not necessary that all images are taken using the same camera. Images from several different cameras operating in the same or in different wavelength bands, e.g. MIR, NIR, RGB, can be used in the method.
Images can be recorded by two or more drones on the same day or by one or more drones on different days.

Capturing images following these guidelines increases the chance of penetrating the tree foliage and of being able to see what is underneath it. This is essential for generating a high-quality ground pixel mosaic.
Second phase: The second phase of the method is bundle adjustment. Bundle adjustment, which is schematically illustrated in Fig. 4, is the process of solving for the accurate parameters of the images that were captured in the first phase. These parameters are used as an input to the neural radiance field algorithm to later generate an accurate and detailed scene. The parameters of the images at any given time are defined as the location (i.e., longitude, latitude, altitude), angles (i.e., azimuth, pitch, roll), and optical parameters. These parameters provide information about the position and orientation of the drone at the time the images were captured, as well as the characteristics of the camera and lens.

Solving for the accurate parameters of the images is an important step in generating a high-quality ground pixel mosaic. It ensures that the images are aligned and consistent, which is necessary for generating a coherent and accurate implicit volumetric rendering and depth map. Solving for the parameters in the context of bundle adjustment refers to the process of refining the initial estimates of camera and scene parameters to improve the accuracy of the 3D reconstruction from the images captured in the first phase. The camera parameters depend upon the type of camera and typically include intrinsic parameters such as, for example, the focal length, image center, and radial distortion, as well as extrinsic parameters such as the position and orientation of the camera in space. The scene parameters typically include the 3D coordinates of the tie points in the scene that were observed in the images.

Bundle adjustment is an iterative optimization process that seeks to minimize the re-projection error, which is the difference between the observed image points and the projected 3D points in the images, taking into account the uncertainties in the measurements and the estimated parameters. The optimization can be done using various techniques, such as the Levenberg-Marquardt algorithm or the Gauss-Newton algorithm. In summary, solving for the parameters in bundle adjustment involves refining the estimates of camera and scene parameters by minimizing the re-projection error in an iterative optimization process.
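To make the optimization concrete, the following is a minimal toy sketch of bundle adjustment using SciPy's Levenberg-Marquardt solver. It assumes a simplified nadir-looking pinhole camera with no rotation and a known shared focal length; a real solver would also refine angles and lens distortion. It is an illustrative sketch, not the optimizer actually used by the inventors:

```python
# Toy bundle adjustment: jointly refine camera positions and tie-point
# coordinates by minimizing re-projection error (Levenberg-Marquardt).
# Simplified illustration only: nadir-looking pinhole, rotation omitted.
import numpy as np
from scipy.optimize import least_squares

def project(point, cam_pos, focal):
    """Project a 3D point into a camera at cam_pos looking straight down."""
    height = cam_pos[2] - point[2]            # camera altitude above the point
    return focal * (point[:2] - cam_pos[:2]) / height

def residuals(x, observations, n_cams, n_pts, focal):
    cams = x[:3 * n_cams].reshape(n_cams, 3)  # camera positions
    pts = x[3 * n_cams:].reshape(n_pts, 3)    # tie-point coordinates
    # observations: (camera_index, point_index, observed_xy) triples
    return np.concatenate([
        project(pts[j], cams[i], focal) - uv for i, j, uv in observations
    ])

# x0 stacks the initial camera-position and tie-point estimates:
# result = least_squares(residuals, x0, method="lm",
#                        args=(observations, n_cams, n_pts, focal))
# result.x then holds the refined parameters used in the third phase.
```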
Third phase: In the third phase of the method, a technique called implicit volumetric rendering is used to generate novel views of the scene without the occluding vegetation. This technique involves fitting, or training, a model that can render the given images accurately. The technique is described in an article by Mildenhall, et al. (2020) [1] and illustrated schematically in Fig. 5, which is reproduced from that publication. The training process encourages the model to correctly render the images and penalizes any incorrect rendering results.

Fourth phase: Once the model is trained, it can be queried for views that do not include vegetation, allowing the surface of the forest to be revealed. There are several ways to discard the vegetation in these views. Two examples are:

One can invoke the rendering process only below a certain altitude, which ignores tall elements like treetops. Specifically, if the "z"-axis represents altitude, points whose z-coordinate is above the height of the ground are never sampled.

A two-stage approach of occluder annotation followed by its deletion: First, one obtains the mask of the object to be removed. While this can be done manually, recent methods, e.g. Fan, Z., et al. (2022) [2], show that this can be done automatically. The second part is removing the occluding object or objects based on the mask. Figs. 6A to 6C show an example of using masks to remove the trees from an image, wherein Fig. 6A shows the original image, Fig. 6B shows the mask, and Fig. 6C shows the peeled image. While similar approaches like Liu, H.-K., et al. (2022) [3] and Mirzaei, Ashkan, et al. (2022) [4] show such removal, the inventors emphasize that this needs to be done in a physical manner to prevent hallucination, i.e. predicting the existence of details of the occluded surface for which there is no physical evidence.

A physical method to obtain the mask can be the usage of infrared channels in conjunction with the visible spectrum, as done in the "Normalized difference vegetation index" (NDVI). The model can be trained on multiple color channels originating from different cameras, as was done by Poggi, Matteo, et al. (2022) [5].
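As an illustration of such a physically based mask, the sketch below computes NDVI from co-registered near-infrared and red channels and thresholds it. The 0.4 threshold is an arbitrary illustrative value, not one prescribed by the method:

```python
# Vegetation mask from NDVI = (NIR - red) / (NIR + red), assuming
# co-registered float arrays. The threshold value is illustrative only.
import numpy as np

def vegetation_mask(nir: np.ndarray, red: np.ndarray,
                    threshold: float = 0.4) -> np.ndarray:
    ndvi = (nir - red) / (nir + red + 1e-8)  # epsilon avoids division by zero
    return ndvi > threshold                  # True where vegetation occludes

# mask = vegetation_mask(nir_band, red_band)
# The mask then drives deletion of the occluding object(s), as in Figs. 6A-6C.
```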
This phase of the method is crucial for revealing the surface of forests and providing valuable information about the terrain. It allows us to generate novel views of the scene without the occluding vegetation, providing a more accurate representation of the terrain and supporting the other phases of the method.

Using implicit methods, i.e. rendering the entire scene, is the key to uncovering the ground, especially when compared with existing methods (such as multi-view stereo) that require extra assumptions on the scene, such as local smoothness, which cannot be assumed for vegetation.

In addition to detecting and classifying natural features in the forest, the disclosed method can also be used to detect and classify human-made features and activities. For example, the ground pixel mosaic and depth maps generated by the method can be used to identify and classify humans, man-made objects, trails, and other human-made paths in the forest.

The novelty and inventive step of the disclosed method is the combination of image capture techniques, photography methodologies, and a unique solution to the challenge of revealing the surface of dense forests. The approach described herein is different from other solutions in the field in several ways.

Firstly, the method uses implicit volumetric rendering to accurately render the scene without making any pre-assumptions, unlike multi-view stereo (MVS) algorithms or approaches like that of Schedl, D.K., et al. (2020) [6], which uses synthetic aperture photography (SAP) to find humans in a dense forest. That approach is capable of achieving results only using thermal images, revealing objects (humans) that exhibit a large thermal anomaly relative to their surrounding area, while implicit volumetric rendering allows the reconstruction of complex scenes with greater detail and accuracy.
Figs. 7A and 7B show peeling of the trees from a section of forest using the MVS method. Figs. 8A and 8B show peeling of the trees from the same section of the forest using the presently disclosed method.
Note that with the MVS method, nearly all terrain features of the forest floor are missing, while with the presently disclosed method many features, including trails, are clearly revealed on the forest floor.
Figs. 9A and 9B show peeling of the trees in the MIR band using the SAP method. Figs. 10A and 10B show peeling of the trees in the MIR band from the same section of the forest using the presently disclosed method. Fig. 10A shows rendering of the scene by an orthographic camera without the process of removing the occluding vegetation, while Fig. 10B is the same scene using the same orthographic camera after the process of removing the occluding vegetation.

Note that using the SAP method the humans lying on the forest floor are clearly revealed; however, no other features of the terrain are seen. In contrast, using the presently disclosed method both the humans and terrain features are visible.
Secondly, the present method handles the peeling phase, where images are processed to remove occluding vegetation, differently from other approaches. In other methods, the images are placed on a uniform plane after the parameters are solved, producing a unified image that contains all the pixels. This image highlights objects with high contrast to their surroundings, but the rest of the image may appear out of focus. In contrast, the present method involves deleting the trees and rendering the entire scene captured beneath them. This allows for a clear view of the entire area and the objects on it, without the need for additional post-processing or assumptions about the scene. The only dependencies are on the flight parameters, photography parameters, and the density of the trees.
A detailed description of a tree peeling experiment will now be given.

1. Description of the Experimental Site
The experimental site is predominantly populated by mature pine trees and cypress trees. Beneath these trees, various shrubs and other types of vegetation can be found. The area is frequently traversed by individuals, leaving the majority of the ground exposed, with no clear paths. The site, rectangular in shape, measures approximately 250 meters in length and 50 meters in width. The density of the trees in this area is notably high. Fig. 11 is an aerial photograph showing the area of interest.

2. Image Capturing Process
The image capturing operation was executed on Thursday, May 11th, 2023.
The aerial photography was carried out using a DJI Mavic Mini 2 drone. The operation was strategically conducted immediately following sunset to take advantage of optimal lighting conditions. The drone was flown at an altitude of 90 meters above ground level, corresponding to a Ground Sample Distance (GSD) of 3.2 cm. Fig. 13 is a single image photographed with the optimal light conditions described.
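As a rough consistency check of these numbers (not part of the patent text), the GSD can be estimated from the flight altitude and nominal camera values. The sensor width, image width, and focal length below are approximate public specifications assumed for illustration:

```python
# Rough GSD estimate from altitude and nominal camera parameters.
# Sensor values are assumed approximations, not taken from the patent.
def ground_sample_distance(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return altitude_m * pixel_pitch_mm / focal_mm   # metres per pixel

gsd = ground_sample_distance(altitude_m=90, focal_mm=4.5,
                             sensor_width_mm=6.2, image_width_px=4000)
print(f"GSD ~ {gsd * 100:.1f} cm")  # ~3.1 cm, close to the reported 3.2 cm
```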
During the flight, a total of 155 images were captured. These images were distributed across six distinct flight paths. A significant overlap of 90% was maintained between each path, resulting in an average distance of approximately 13 meters between each path. Within each individual flight line, the overlap between consecutive images was maintained at 95%. This corresponded to an approximate distance of 5 meters from one camera position to the next. Fig. 12 shows the locations of the flight lines overlaid on the aerial photograph of the area of interest. The dashes on the flight lines symbolically show the locations of the camera position for each of the images.
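These spacings follow directly from the overlap percentages: the distance between exposures is the image footprint times (1 - overlap). A short check, assuming a nominal 4000 x 3000 px image at the reported 3.2 cm GSD (the image size is an assumption, not stated in the patent):

```python
# Overlap vs. spacing check: spacing = footprint * (1 - overlap).
# The 4000 x 3000 px image size is a nominal assumption.
gsd_m = 0.032
footprint_across = gsd_m * 4000                  # ~128 m on the ground
footprint_along = gsd_m * 3000                   # ~96 m on the ground

line_spacing = footprint_across * (1 - 0.90)     # ~12.8 m (reported ~13 m)
exposure_spacing = footprint_along * (1 - 0.95)  # ~4.8 m (reported ~5 m)
print(f"{line_spacing:.1f} m between lines, {exposure_spacing:.1f} m between exposures")
```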
3. Processing Step
Camera Alignment: Camera alignment is carried out using Agisoft Metashape software developed by Agisoft LLC. The captured images are fed into this software, which uses a feature matching algorithm to identify the same point across multiple images. From this, it computes the relative positions and orientations (6 DOF - Degrees of Freedom) of the cameras used to capture the images. Fig. 14 shows the corresponding points between neighboring frames.

Parameter Estimation: Once the software has the camera parameters (location and orientation), it refines them using a bundle adjustment algorithm. This optimization process minimizes the reprojection error, which is the difference between the observed and predicted image points. Fig. 15 shows the Agisoft bundle report.
Using Parameters for NeRF: The camera parameters refined by Agisoft are then used as input to the Neural Radiance Fields algorithm (NeRF), which constructs a 3D model of the scene.
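The bridge between the two tools is geometric: each refined camera pose and intrinsic matrix defines, for every pixel, a ray origin and direction that the NeRF algorithm samples. A minimal sketch of that conversion follows; the conventions (3x3 intrinsics K, 4x4 camera-to-world pose, z-forward camera axis) are illustrative assumptions, not the exact interface used here:

```python
# Turn refined camera parameters into per-pixel rays for NeRF-style training.
import numpy as np

def pixel_rays(K, c2w, width, height):
    j, i = np.meshgrid(np.arange(width), np.arange(height))  # pixel grid
    # Per-pixel directions in camera coordinates, then rotated to world.
    dirs_cam = np.stack([(j - K[0, 2]) / K[0, 0],
                         (i - K[1, 2]) / K[1, 1],
                         np.ones_like(j, dtype=float)], axis=-1)
    dirs_world = dirs_cam @ c2w[:3, :3].T
    dirs_world /= np.linalg.norm(dirs_world, axis=-1, keepdims=True)
    origins = np.broadcast_to(c2w[:3, 3], dirs_world.shape)
    return origins, dirs_world   # one ray per pixel
```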
Training with NeRF: After feeding the refined camera parameters into NeRF, the algorithm is trained to understand the 3D structure of the scene.
$$C(\mathbf{r}) = \int_{t_1}^{t_2} T(t)\,\sigma(t)\,\mathbf{c}(t, \mathbf{d})\,dt$$

Equation 1 is the ray-rendering function that guides the NeRF algorithm during training. $t_1$ and $t_2$ are the start and end positions of the ray, respectively. $T$ is the transmission function, which represents occlusions. $\sigma$ and $\mathbf{c}$ are the density and color functions.
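In practice the integral of Equation 1 is evaluated by quadrature over discrete samples along each ray, with the transmittance accumulated from the densities. A minimal numeric sketch, where the densities and colors would come from the trained network:

```python
# Discrete version of Equation 1: composite sampled densities/colors
# along one ray, accumulating transmittance T from the density sigma.
import numpy as np

def render_ray(sigma, color, t):
    """sigma: (N,) densities; color: (N, 3) colors; t: (N,) sample depths."""
    delta = np.diff(t, append=t[-1] + 1e10)        # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)           # per-segment opacity
    # T: probability the ray reaches each sample unoccluded (Eq. 1's T).
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = T * alpha
    return (weights[:, None] * color).sum(axis=0)  # composited RGB
```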
After the NeRF training, an image of the entire scene is synthesized without any cropping. This image gives a complete view of the area of interest and its surroundings.

Now, a new image is synthesized. This time, however, a Digital Terrain Model (DTM) is used to guide the cropping process. This allows elimination of all non-relevant volume from the scene (in this case the trees) and focuses the rendering on a scene that contains only ground pixels.

$$C(\mathbf{r}) = \int_{t_{veg}}^{t_2} T(t)\,\sigma(t)\,\mathbf{c}(t, \mathbf{d})\,dt$$

Equation 2 is a modified version of Eq. 1 used during peeling and ground mosaic generation. Note that here the integration starts at $t_{veg}$ rather than $t_1$, i.e. after the vegetation rather than at the camera position.
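In the same discrete form, Equation 2 amounts to zeroing the density of every sample whose altitude lies above the DTM-derived ground height, which is equivalent to starting the integration at $t_{veg}$, since the transmittance stays at 1 until the first kept sample. A sketch, in which the per-sample altitudes z and the margin value are illustrative assumptions:

```python
# Discrete version of Equation 2: discard samples above the DTM ground
# height so only ground pixels contribute (equivalent to starting at t_veg).
import numpy as np

def render_ground_only(sigma, color, t, z, ground_z, margin=0.5):
    """z: (N,) sample altitudes along the ray; ground_z: DTM height there.
    The 0.5 m margin is an illustrative value."""
    keep = z <= ground_z + margin          # True only at/near the terrain
    sigma = np.where(keep, sigma, 0.0)     # vegetation contributes nothing
    delta = np.diff(t, append=t[-1] + 1e10)
    alpha = 1.0 - np.exp(-sigma * delta)
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return ((T * alpha)[:, None] * color).sum(axis=0)
```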
Fig. 16 shows the entire area of interest after peeling the trees.

4. Results
Fig. 17 is a zoom-in to a small area before (top image) and after (bottom image) the peeling process. The results demonstrate the ability of the method and algorithm to expose ground surfaces in forested areas. As can be seen from the examples, the exposure of the surface enables the identification of trails, rocks, and ground areas that are exposed or covered by bushes, and enables analysis and management of protected areas in an efficient and innovative manner.
Herein, the method has been described as being used to image the terrain under densely forested areas; however, it is noted that the same method can be used in other scenarios. For example, in agriculture to reveal the terrain under crops, e.g. an orchard, vineyard, or field of corn or sunflowers; and in law enforcement to reveal objects, e.g. stolen cars, hidden beneath a camouflaging cover.
Although embodiments of the invention have been described by way of illustration, it will be understood that the invention may be carried out with many variations, modifications, and adaptations, without exceeding the scope of the claims.

Bibliography
[1] Mildenhall, Ben, et al. (2020); NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis; Retrieved from https://arxiv.org/abs/2003.08934.
[2] Fan, Z., et al. (2022); NeRF-SOS: Any-View Self-supervised Object Segmentation on Complex Scene; Retrieved from https://arxiv.org/abs/2209.08776.
[3] Liu, H.-K., et al. (2022); NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors; Retrieved from https://arxiv.org/abs/2206.04901.
[4] Mirzaei, Ashkan, et al. (2022); SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields; Retrieved from https://arxiv.org/abs/2211.12254.
[5] Poggi, Matteo, et al. (2022); Cross-Spectral Neural Radiance Fields; Retrieved from https://arxiv.org/abs/2209.00648.
[6] Schedl, D.K., et al. (2020); Search and rescue with airborne optical sectioning; Nat Mach Intell; Retrieved from https://www.flir.com/discover/cores-components/researchers-develop-search-and-rescue-technology-that-sees-through-forest-with-thermal-imaging.
Claims (22)
1. A method for constructing under-the-canopy terrain imagery, the method comprising the steps:
a. capturing overlapping images to cover an area-of-interest from at least one imaging sensor on at least one aerial platform;
b. performing bundle adjustment utilizing parameters of said imaging sensor to generate refined parameters at any given point of the captured images;
c. training a neural network model for implicit volumetric rendering, wherein said model is trained using the refined parameters from the bundle adjustment and the overlapping images to generate simulated views from any point of said area-of-interest; and
d. querying said neural network model for views of the terrain below the canopy by one of the following methods:
i) invoking the rendering process only below the altitude of the canopy;
ii) a two-stage approach of obtaining a mask of object/s above the canopy followed by deletion of the object/s based on the mask;
thereby revealing an under-the-canopy ground pixel mosaic.

2. The method of claim 1, wherein the overlapping images are captured using high overlap and density, wherein the density of the images is dependent on the density of the canopy.

3. The method of claim 1, wherein the overlapping images are captured using a wide angular range of pixels from the nadir and oblique angles.

4. The method of claim 1, wherein the overlapping images are captured during the twilight hours or under full cloud cover.

5. The method of claim 1, wherein the overlapping images are captured using a high ground sample distance.

6. The method of claim 1, wherein the overlapping images are captured under the same lighting conditions.

7. The method of claim 1, wherein at least some of the overlapping images are captured at different altitudes.

8. The method of claim 1, wherein at least some of the overlapping images are captured using different cameras.

9. The method of claim 1, wherein at least some of the overlapping images are captured using cameras operating in different wavelength bands.

10. The method of claim 1, wherein at least some of the overlapping images are captured using cameras operating in at least one of the following wavelength bands: middle infrared, near infrared, and visible.

11. The method of claim 1, wherein at least some of the overlapping images are captured by two or more aerial platforms on the same day.

12. The method of claim 1, wherein the overlapping images are captured by one or more aerial platforms on different days.

13. The method of claim 1, wherein the aerial platform is a drone.

14. The method of claim 1, wherein the parameters used in the bundle adjustment step include at least one of the following:
a) location of the aerial platform including: longitude, latitude, and altitude;
b) angles of the camera including: azimuth, pitch, and roll;
c) properties of the camera and lens including: focal length and image center; and
d) scene parameters including the three dimensional coordinates of objects in the scene that are observed in the images.

15. The method of claim 1, wherein bundle adjustment is an iterative optimization process that involves refining the estimates of camera and scene parameters by minimizing the re-projection error.

16. The method of claim 15, wherein the iterative optimization process uses one of the Levenberg-Marquardt algorithm or the Gauss-Newton algorithm.

17. The method of claim 1, wherein training the neural network model is done to perform implicit volumetric rendering.

18. The method of claim 1, wherein the mask is obtained by usage of infrared channels in conjunction with the visible spectrum.

19. The method of claim 18, wherein the model is trained on multiple color channels originating from different cameras.

20. The method of claim 1, wherein the canopy is formed by trees in a forest, wood, or orchard.

21. The method of claim 1, wherein the canopy is formed by agricultural crops.

22. The method of claim 1, wherein the canopy is formed by a camouflaging or shade cover.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IL303469A IL303469B1 (en) | 2023-06-05 | 2023-06-05 | A method for revealing the surface of dense forests using remote sensing |
| PCT/IL2024/050281 WO2024252384A1 (en) | 2023-06-05 | 2024-03-18 | A method for revealing the surface of dense forests using remote sensing |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IL303469A IL303469B1 (en) | 2023-06-05 | 2023-06-05 | A method for revealing the surface of dense forests using remote sensing |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| IL303469A IL303469A (en) | 2025-01-01 |
| IL303469B1 true IL303469B1 (en) | 2025-12-01 |
Family
ID=93795161
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| IL303469A IL303469B1 (en) | 2023-06-05 | 2023-06-05 | A method for revealing the surface of dense forests using remote sensing |
Country Status (2)
| Country | Link |
|---|---|
| IL (1) | IL303469B1 (en) |
| WO (1) | WO2024252384A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130009950A1 (en) * | 2009-12-01 | 2013-01-10 | Rafael Advanced Defense Systems Ltd. | Method and system of generating a three-dimensional view of a real scene for military planning and operations |
| US20180275658A1 (en) * | 2017-03-23 | 2018-09-27 | DeepScale, Inc. | Data synthesis for autonomous control systems |
| US20210133936A1 (en) * | 2019-11-01 | 2021-05-06 | Microsoft Technology Licensing, Llc | Recovering occluded image data using machine learning |
| US20210142559A1 (en) * | 2019-11-08 | 2021-05-13 | General Electric Company | System and method for vegetation modeling using satellite imagery and/or aerial imagery |
| US20220392156A1 (en) * | 2021-06-08 | 2022-12-08 | Vricon Systems Aktiebolag | Method for 3d reconstruction from satellite imagery |
Also Published As
| Publication number | Publication date |
|---|---|
| IL303469A (en) | 2025-01-01 |
| WO2024252384A1 (en) | 2024-12-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Iglhaut et al. | Structure from motion photogrammetry in forestry: A review | |
| Matese et al. | Assessment of a canopy height model (CHM) in a vineyard using UAV-based multispectral imaging | |
| Fritz et al. | UAV-based photogrammetric point clouds–tree stem mapping in open stands in comparison to terrestrial laser scanner point clouds | |
| Chu et al. | Cotton growth modeling and assessment using unmanned aircraft system visual-band imagery | |
| Yang | A high-resolution airborne four-camera imaging system for agricultural remote sensing | |
| NO337638B1 (en) | Method for determining file attributes and computer program for executing the method | |
| Mayr et al. | Disturbance feedbacks on the height of woody vegetation in a savannah: a multi-plot assessment using an unmanned aerial vehicle (UAV) | |
| US20230247313A1 (en) | Systems and Methods For Multispectral Landscape Mapping | |
| Schedl et al. | Airborne optical sectioning for nesting observation | |
| US20250274643A1 (en) | Real-time multi-spectral system and method | |
| CN110476412B (en) | Information processing apparatus, information processing method, and storage medium | |
| Wallace et al. | Using orthoimages generated from oblique terrestrial photography to estimate and monitor vegetation cover | |
| Wierzbicki et al. | Method of radiometric quality assessment of NIR images acquired with a custom sensor mounted on an unmanned aerial vehicle | |
| Jenerowicz et al. | The fusion of satellite and UAV data: simulation of high spatial resolution band | |
| Demir | Using UAVs for detection of trees from digital surface models | |
| Flynn et al. | UAV‐derived greenness and within‐crown spatial patterning can detect ash dieback in individual trees | |
| Wulder et al. | Digital high spatial resolution aerial imagery to support forest health monitoring: the mountain pine beetle context | |
| Gruen et al. | DSM/DTM-related investigations of the Moorea Avatar project | |
| Martínez-Sánchez et al. | UAV and satellite imagery applied to alien species mapping in NW Spain | |
| WO2024252384A1 (en) | A method for revealing the surface of dense forests using remote sensing | |
| Chiappini et al. | Comparing the accuracy of 3D urban olive tree models detected by smartphone using LiDAR sensor, photogrammetry and NeRF: a case study of’Ascolana Tenera’in Italy | |
| Li et al. | Algorithm for automatic image dodging of unmanned aerial vehicle images using two-dimensional radiometric spatial attributes | |
| Bellia et al. | A preliminary assessment of the efficiency of using drones in land cover mapping | |
| Youssef et al. | DeepForest: Sensing into Self-occluding Volumes of Vegetation with Aerial Imaging | |
| Ferrell | Applications of Close-Range Photogrammetry for Documenting Human Skeletal Remains in Obstructed Wooded Environments |