GB2561329A - Method and system for creating images - Google Patents

Method and system for creating images

Info

Publication number
GB2561329A
GB2561329A
Authority
GB
United Kingdom
Prior art keywords
image
images
tile
camera
homography
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1620652.6A
Other versions
GB201620652D0 (en)
Inventor
Stephen Mark Remde
Supannee Tanathong
William Alfred Peter Smith
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gaist Solutions Ltd
Original Assignee
Gaist Solutions Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gaist Solutions Ltd filed Critical Gaist Solutions Ltd
Priority to GB1620652.6A priority Critical patent/GB2561329A/en
Publication of GB201620652D0 publication Critical patent/GB201620652D0/en
Priority to GB1715491.5A priority patent/GB2557398B/en
Priority to GB2011002.9A priority patent/GB2584027A/en
Priority to PCT/GB2017/053410 priority patent/WO2018104700A1/en
Priority to EP17808973.6A priority patent/EP3549094A1/en
Publication of GB2561329A publication Critical patent/GB2561329A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61 Noise processing, e.g. detecting, correcting, reducing or removing noise, the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Abstract

A method of generating an orthomosaic image of a geographical survey area is disclosed. The method comprises the steps of: (a) recording a set of still images of the survey area using a land-borne camera, in which the camera has an optical axis that is angled downwardly with respect to a surface of the survey area, and recording a geographical position corresponding to each image; (b) comparing pairs of partially overlapping images to identify common features in the images to derive a model of the motion between the images; (c) calculating a pose in respect of each image describing where the camera was in world coordinates and where it was looking when the image was captured; (d) projecting each image onto a ground plane to create a set of image tiles; and (e) stitching together a plurality of image tiles to create an orthomosaic image corresponding to an area that is present in a plurality of the images recorded in step (a). Procedures for detecting overlaps in images and identifying common features are also disclosed.

Description

(71) Applicant(s): Gaist Solutions Limited, InfoLab21, Lancaster University, Bailrigg, Lancaster, LA1 4WA, United Kingdom
(56) Documents Cited: CN 105592294 A; US 2016/0150142 A1; US 2016/0229555 A1; US 2012/0300019 A1
(58) Field of Search: INT CL G03B, H04N; Other: EPODOC, WPI
(72) Inventor(s): Stephen Mark Remde; Supannee Tanathong; William Alfred Peter Smith
(74) Agent and/or Address for Service: Alistair Hamilton, Ty Eurgain, Cefn Eurgain Lane, Rhosesmor, MOLD, Flintshire, CH7 6PG, United Kingdom
(54) Title of the Invention: Method and system for creating images
Abstract Title: Orthomosaic imaging
(57) Abstract: as reproduced above.
[Drawings, pages 1/7 to 7/7: Figures 1 to 12. Only fragments of the flow-diagram labels survive the extraction, for example "Images / GPS data / Camera Calibration Parameters", "Image to Ground Plane Remapping", "Seamless Orthomosaic Tile Stitching" and "Map Tile Index Repository" in Figure 3, and "image-to-ground projection" onto an equal-sized tile grid in the 2D Cartesian ground/reference coordinate system in Figure 4. The figures are described in full in the brief description of the drawings below.]
Method and System for Creating Images
This invention relates to a method and system for creating images. In particular, it relates to a system capable of building virtual top-down view images of very large areas, for example, of road networks simply by driving a vehicle around the road network and collecting images with a camera oriented towards the road surface.
The term "mosaic" means an image that is constructed from a number of overlapping images.
The type of images created by a system embodying the invention are referred to as "orthomosaics", reflecting the fact that they are a mosaic of many images (perhaps millions) and that they approximate an orthographic view of an area being surveyed from above, similar to an image that might be captured by a satellite.
There is a demand for high-resolution orthographic imagery of large areas of terrain, typically road surfaces, to enable surveying and assessment, for example so that maintenance can be scheduled. Such imagery can be obtained from satellites, but this is a very costly operation, and requires favourable atmospheric conditions if sufficient detail is to be resolved.
An aim of this invention is to produce orthomosaic images using a ground-based system at significantly lower cost and in a more versatile manner than satellite imaging.
To this end, this invention provides a method of generating an orthomosaic image of a geographical survey area comprising:
a. recording a set of still images of the survey area using a land-borne camera, in which the camera has an optical axis that is angled downwardly with respect to a surface of the survey area, and recording a geographical position corresponding to each image;

b. comparing pairs of partially overlapping images to identify common features in the images to derive a model of the motion between the images;

c. calculating a pose in respect of each image describing where the camera was in world coordinates and where it was looking when the image was captured;

d. projecting each image onto a ground plane to create a set of image tiles; and

e. stitching together a plurality of image tiles to create an orthomosaic image corresponding to an area that is present in a plurality of the images recorded in step a.
This method computes camera poses without ever estimating 3D positions of feature points. This makes the dimensionality of the problem much lower than conventional structure-from-motion analysis, meaning it can be solved more quickly, allowing it to scale to much larger sets of images. This is made possible by assuming that the surface being imaged is approximately planar.
In step b., motion between pairs of images may be modelled using a homography, a transformation that is valid when two images view the same planar scene from different positions.
Step b. may include simultaneously finding feature matches between pairs of images that are consistent with a homography and computing the homography that is consistent with these matches.
Estimated homographies are, most advantageously, filtered to discard those likely to be a poor model of the transformation between two views. This allows unreliable transitions to be removed from the process of optimisation, so avoiding corruption of the motion estimation in step c. A homography can be deemed to be "poor" using several alternative heuristic tests. For example, a homography may be deemed to be poor if the number of common features identified in step b. is below a threshold. A homography may be deemed to be poor if, when applied to a first of an image pair, the number of common features identified in step b. that it maps erroneously onto a second of an image pair is above a threshold.
In step b., the method may include a step of optimising the estimated pose of the camera, by identifying poses that would give homographies between views that are consistent with those estimated from the image pairs in step b.
An objective function may advantageously be optimised in step c. to ensure that the estimated poses give homographies that are consistent with those estimated between overlapping images. To create the objective function, for each of a pair of images, a homography is computed from the image to the ground plane, and this homography is applied to the 2D positions of matched image features to yield matched features on the ground plane; the distances between pairs of matched features are residuals that are added to the objective function such that the sum of the squares of these residuals is minimised. These steps directly optimise how well the images will align when they are projected to the ground plane. This is entirely appropriate since production of a seamless mosaic is an aim of the invention.
Other residual errors in the camera pose are included as appropriate. For example, a pose may be deemed to be poor if it deviates from an initial estimate derived from a recorded geographical position by more than a threshold, or if it implies non-smooth motion. Other tests may be used, and tests may be applied individually or in combination.
In a preferred embodiment, after step d., with each orthomosaic tile to be stitched there is associated the set of images whose projections overlap that tile. This can be achieved by computing the projection of the corners of each image and then performing an inside-polygon test for each tile. For each pixel in a tile, a perspective projection operation into each image may be applied, giving a non-integer 2D image coordinate. The nonlinear distortion induced by the camera lens may then be applied to the 2D position to give the final image coordinate. Hence, for each image there is produced a set of 2D coordinates corresponding to each pixel in the ground plane tile. Linear interpolation may then be used to compute the colour at each point in an image to be stitched, giving a reprojection of the image to the ground plane.
The final step in the pipeline is to combine the information from the ground plane projections of the images into a single, seamless orthomosaic. If the method were to compute a single mosaiced image covering the whole surveyed area, the problem would be intractable.
Moreover, to enable viewing of the images over a slow network, the mosaic must be broken up into tiles covering different areas at different zoom levels. For this reason, stitching is most advantageously performed on each tile independently.
In step e., a weighting function that chooses the "best" gradient for each tile pixel from each image is typically applied to create the stitched image. The weighting scheme means that the gradient is taken from the image that observed that point at the highest resolution. This encourages the inclusion of maximum detail in the orthomosaic. Advantageously, the stitched image may be computed by solving a sparse system of linear equations. Such systems can be solved efficiently, meaning the tile stitching process could potentially be done in real time, on demand as requests for tiles are made from the viewing interface. The selected gradients typically provide targets that the stitched image must try to match, together with guide intensities that the stitched image must also try to match during the stitching process. The guide may be the average of the projected images.
To avoid seams at the boundaries of tiles, in step e., a check may be made as to whether any adjacent tiles have already been stitched, and if they have, a small overlap into the existing tiles is added to the new tile. In the overlap region, there may be only the guide intensity constraint, which is weighted very highly in the objective function, with a smooth transition to a lower weighted guide constraint for the interior of the tile. The result is that intensities at tile boundaries match exactly and any large intensity changes that would have occurred are made less visible as low frequency transitions. This is a beneficial step that allows tiles to be stitched one at a time while still producing a seamless mosaic when adjacent tiles are viewed next to each other.
In order to optimise and accelerate the estimation of poses, in step a. an initial estimate of a pose may be derived from the recorded GPS position. The initial estimate typically takes the direction of the optical axis of the camera (yaw) to be the recorded GPS direction of motion, the viewing angle of the imaged surface (pitch) to be as determined during an initial calibration of the system, and the transverse rotation (roll) to be zero.
Step a. is typically carried out on a land vehicle, such as a motor road vehicle (a van, or similar) or a human-powered vehicle such as a bicycle. Steps b. to e. are typically carried out on a programmed computer.
From a second aspect, this invention provides a method of image processing comprising performing steps b. to e. according to any preceding claim.
From a third aspect, this invention provides computing apparatus programmed to perform steps b. to e. according to any preceding claim.
An embodiment of the invention will now be described in detail, by way of example, with reference to the accompanying drawings, in which:
Figure 1 shows, diagrammatically, a vehicle equipped with a camera that is suitable for use with a system embodying the invention;
Figure 2 is a schematic illustration of a system to generate an orthomosaic map in an embodiment of the present invention;
Figure 3 is a high-level process flow diagram showing steps used to produce an orthomosaic for a tile;
Figure 4 illustrates projecting an image captured from a capturing device onto the ground plane, presented as a tile grid with respect to the 2D Cartesian ground/reference coordinate system;
Figure 5 illustrates multiple projected images overlapping on the same tile;
Figure 6 illustrates a Map Tile Index data file;
Figure 7 is an illustration of the creation of a ground-plane image tile from a source image in reverse order via a reverse remapping function;
Figure 8 is a process flow diagram showing steps to associate 2D ground coordinates in a tile with pixel coordinates of the source image in order to create a ground-plane image in reverse order;
Figure 9 is an illustration of assigning weight coefficients to pixels in the captured image, in which the weight assigned is inversely proportional to the distance of the pixel from the camera;
Figure 10 is a process flow diagram showing steps to construct the inputs from the captured images in a tile for the linear equation system used to generate an orthomosaic tile;
Figure 11 is an illustration of a guide intensity image used in generating an orthomosaic tile, in which the size of the guide image is enlarged so that it includes the intensity of the neighbouring orthomosaic tiles at the overlap regions, to avoid seams at the boundaries of the tile; and
Figure 12 is an illustration of determining weight values when stitched tiles exist at the boundaries of the current tile, in which the tile size is extended from $n \times n$ to $m \times m$ so that it includes the overlap regions with its neighbouring tiles.
Introduction
A vehicle 10 for use with the present invention includes a camera 12 that is mounted on a high part of the vehicle, pointing generally forward and angled downward. The vehicle 10 is also equipped with global navigation satellite system apparatus 14, typically using the Global Positioning System (GPS). As the vehicle is driven forward, the camera captures a sequence of images of the terrain immediately in front of the vehicle. These images are stored in a data capture and storage system 16 and the geographical location of each image, as determined by the global navigation satellite system, is recorded.
An initial step is to perform a camera calibration in order to compute an intrinsic calibration matrix K:

$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \qquad (1)$$

where $f_x$ and $f_y$ are the focal lengths in the $x$ and $y$ directions and $c_x$ and $c_y$ define the centre of projection. This calibration matrix describes the specific properties of how the camera projects the 3D world into a 2D image. In addition, a calibration is made for distortion parameters that model how the camera optics deviate from a simple pinhole camera model. These distortion parameters can be used for a nonlinear adjustment to images or image locations to remove the effect of these distortions.
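For illustration, a minimal sketch of how such a calibration might be represented and applied, using OpenCV's point-undistortion routine; the numeric values and variable names are placeholders, not values from the embodiment:

```python
import numpy as np
import cv2

# Illustrative intrinsics (fx, fy, cx, cy) and radial/tangential distortion
# coefficients; real values would come from the calibration procedure.
fx, fy, cx, cy = 1400.0, 1400.0, 960.0, 540.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

# Undistort 2D feature locations so that they obey the pinhole model of K;
# passing P=K returns the corrected points in pixel units.
pts = np.array([[[100.0, 200.0]], [[640.0, 360.0]]])  # shape (N, 1, 2)
undistorted = cv2.undistortPoints(pts, K, dist, P=K)
```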
Image Processing
Data from the data storage device are transferred to a computer for processing in order to generate an orthomosaic image. The computer is programmed to process the data in a manner that will now be described.
Motion-from-homographies
The first stage of the processing applied to the image sequence is to compute an accurate pose (orientation and position) for the camera in every captured image. The pose needs to be sufficiently accurate that, when images are later projected to the ground plane, overlapping images have at least pixel-accurate alignment. Otherwise, there will be misalignment artefacts in the generated orthomosaic images.
This is, in essence, a structure-from-motion (SFM) problem. However, previous approaches to SFM are not applicable in this setting for two reasons:
• Images in which the scene is primarily the road surface are largely planar. This is a degenerate case for estimating a fundamental matrix and subsequently reconstructing 3D scene points.
• Typically, the number of 3D scene points matched between images and reconstructed by the SFM process is much larger than the number of pose parameters to be estimated. This means that SFM does not scale well to very large problems, e.g. those containing millions of images.
Similarly, methods based on Simultaneous Localisation and Mapping (SLAM) are not applicable. They require high frame rates in order to robustly track feature points over time. Sampling images at this frequency is simply not feasible when we wish to build orthomosaics of thousands of kilometres of road.
Therefore, this embodiment provides an alternative that addresses the drawbacks of using SFM or SLAM for this problem. This approach will be called "motion-from-homographies".
Motion-from-homographies assumes that the scene being viewed is locally planar. This allows it to relate images that are close together in the sequence by a planar homography. The approach exploits the temporal constraints that arise from knowing that the images in a trace come from an ordered motion sequence, by only finding feature matches between pairs of images that are close together in the sequence. Therefore, it is only necessary to compute pairwise matches between image features, not to reconstruct the 3D world position of image features. This vastly reduces the complexity of the optimisation problem that must be solved compared with known approaches. The number of unknowns is simply 6N for an image sequence of N images.
Motion Model
The pose of the camera is represented by a rotation matrix and a translation vector. The pose associated with the $i$th captured image in a sequence is $R_i$ and $t_i$. The position in world coordinates of a camera can be computed from its pose as $c_i = -R_i^T t_i$.
A world point is represented by coordinates $\mathbf{w} = [u\ v\ w]^T$, where $(u, v)$ is a 2D UTM coordinate representing position on the ground plane and $w$ is altitude above sea level. Each camera has a standard right-handed coordinate system with the optical axis aligned with the $w$ axis. A world point in the coordinate system of the $i$th camera is given by:

$$\mathbf{w}_i = R_i \mathbf{w} + t_i. \qquad (2)$$
The rotation is a composition of a number of rotation matrices. The first, $R_{\mathrm{w2c}}$ (equation 3), aligns the world $w$ axis with the optical axis of the camera.
The system models vehicle orientation by three angles. This representation is chosen because the vehicle motion model leads to constraints that can be expressed very simply in terms of these angles. Hence, three rotation matrices are defined in terms of these angles:

$$R_{\mathrm{yaw}}(\alpha) = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (4)$$

$$R_{\mathrm{pitch}}(\beta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\beta & -\sin\beta \\ 0 & \sin\beta & \cos\beta \end{bmatrix} \qquad (5)$$

$$R_{\mathrm{roll}}(\gamma) = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (6)$$
The overall rotation as a function of these three angles is given by:

$$R(\alpha, \beta, \gamma) = R_{\mathrm{roll}}(\gamma)\, R_{\mathrm{pitch}}(\beta)\, R_{\mathrm{yaw}}(\alpha)\, R_{\mathrm{w2c}}. \qquad (7)$$

Hence, the rotation of the $i$th camera depends upon the estimate of the three angles for that camera:

$$R_i = R(\alpha_i, \beta_i, \gamma_i). \qquad (8)$$
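A minimal sketch of this rotation composition, assuming the axis conventions implied by the angle names; the entries of $R_{\mathrm{w2c}}$ are not reproduced legibly in the source, so the choice below is a placeholder:

```python
import numpy as np

def rot_yaw(a):
    # Heading rotation about the axis aligned with the world w axis
    # after the world-to-camera alignment, equation (4).
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def rot_pitch(b):
    # Tilt about the camera x axis, equation (5).
    return np.array([[1.0, 0.0,        0.0],
                     [0.0, np.cos(b), -np.sin(b)],
                     [0.0, np.sin(b),  np.cos(b)]])

def rot_roll(g):
    # Rotation about the optical axis, equation (6).
    return np.array([[np.cos(g), -np.sin(g), 0.0],
                     [np.sin(g),  np.cos(g), 0.0],
                     [0.0,        0.0,       1.0]])

# Placeholder for R_w2c (equation 3): a 180-degree rotation about x, so the
# optical axis points down towards the road; the true matrix is not given.
R_w2c = np.diag([1.0, -1.0, -1.0])

def rotation(alpha, beta, gamma):
    # Equation (7): R = R_roll(g) R_pitch(b) R_yaw(a) R_w2c.
    return rot_roll(gamma) @ rot_pitch(beta) @ rot_yaw(alpha) @ R_w2c
```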
Initialisation
The system relies on GPS and an initial estimate of the camera height above the road surface to initialise the location of each camera:

$$c_i^{\mathrm{init}} = \begin{bmatrix} u_i^{\mathrm{GPS}} \\ v_i^{\mathrm{GPS}} \\ h^{\mathrm{measured}} \end{bmatrix} \qquad (9)$$

where $(u_i^{\mathrm{GPS}}, v_i^{\mathrm{GPS}})$ is the GPS estimate of the ground plane position of the $i$th camera and $h^{\mathrm{measured}}$ is the measured height of the camera above the road surface in metres. This need only be a rough estimate as the value is subsequently refined.
To initialise rotation, the yaw angle from the GPS bearing is computed first.
A first step in doing this is to compute a bearing vector using a central difference approximation:

$$b_i = 0.5 \begin{bmatrix} u_{i+1}^{\mathrm{GPS}} - u_{i-1}^{\mathrm{GPS}} \\ v_{i+1}^{\mathrm{GPS}} - v_{i-1}^{\mathrm{GPS}} \end{bmatrix} \qquad (10)$$
Second, this is converted into a yaw angle estimate:

$$\alpha_i^{\mathrm{init}} = \operatorname{atan2}(-b_{i,1},\, b_{i,2}). \qquad (11)$$
The pitch is initialised to a measured value for the angle between the camera optical axis and the road surface, and the roll to zero:

$$\beta_i^{\mathrm{init}} = \beta^{\mathrm{measured}} \qquad (12)$$

$$\gamma_i^{\mathrm{init}} = 0. \qquad (13)$$

Again, $\beta^{\mathrm{measured}}$ need only be roughly estimated since it is later refined.
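A short sketch of this initialisation, assuming the notation above; the handling of the first and last images, where the central difference is undefined, is an assumption:

```python
import numpy as np

def initial_pose_estimates(gps_uv, h_measured, beta_measured):
    """Initial per-image pose angles and positions from GPS (equations 9-13).

    gps_uv: (N, 2) array of ground-plane UTM positions, one per image.
    h_measured / beta_measured: rough camera height and pitch; both are
    refined later, so approximate values suffice.
    """
    n = len(gps_uv)
    positions = np.column_stack([gps_uv, np.full(n, h_measured)])  # eq. (9)

    # Bearing by central differences (equation 10), then yaw (equation 11).
    yaw = np.zeros(n)
    for i in range(1, n - 1):
        b = 0.5 * (gps_uv[i + 1] - gps_uv[i - 1])
        yaw[i] = np.arctan2(-b[0], b[1])
    yaw[0], yaw[-1] = yaw[1], yaw[-2]  # endpoints: copy nearest estimate

    pitch = np.full(n, beta_measured)  # equation (12)
    roll = np.zeros(n)                 # equation (13)
    return positions, yaw, pitch, roll
```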
Feature Matching and Filtering
Images processed in this embodiment come from a sequence. Moreover, by using GPS it is possible to ensure that images are taken at an approximately fixed distance between consecutive images. This means that it is reasonable to choose a constant offset $o$ within which images can be expected to overlap. That is, image $i$ can be expected to contain feature matches with images in the range $i-o$ to $i+o$. The number of overlapping pairs is therefore $No - o(o+1)/2$.
Local features from all images in a sequence are extracted. These could be any feature that is repeatably detectable and that has a distinctive descriptor. For example, the "scale-invariant feature transform" algorithm as described in US-A-6 711 293 may be used. The 2D location of each feature is undistorted using the distortion parameters obtained during calibration. The system then computes greedy matches between features in pairs of images that are within the overlap threshold. These matches are filtered for distinctiveness using Lowe's ratio test with a threshold of 0.6. This means that the distance in feature space to the first closest match must be no more than 0.6 of the distance to the second closest match. Even with this filter applied, the matches will still contain noise that will disrupt the alignment process. Specifically, they may include matches between features that do not lie on the road plane (such as buildings or signage) or between dynamically moving objects (such as other vehicles). If such matches were retained, they would introduce significant noise into the pose refinement process. Therefore, these matches are removed by enforcing a constraint that is consistent with a planar scene (the homography model) and further restricting motion to a model with only three degrees of freedom.
Since it is assumed that the scene is locally planar, feature matches can be described by a homography. In other words, if a feature with 2D image position $x \in \mathbb{R}^2$ in image $i$ is matched to a feature with image position $x' \in \mathbb{R}^2$ in image $j$, then there is expected to exist a $3 \times 3$ matrix $H$ that satisfies:

$$s\begin{bmatrix} x' \\ 1 \end{bmatrix} = H \begin{bmatrix} x \\ 1 \end{bmatrix} \qquad (14)$$

where $s$ is an arbitrary scale. This homography constraint is used to filter the feature matches. However, for the purposes of filtering, a stricter motion model is assumed than elsewhere in the process. Specifically, the assumption is made that the vehicle has two degrees of freedom to move in the ground plane and that its yaw angle may change arbitrarily. However, pitch and the height of the camera above the ground plane are assumed to be fixed to their measured values, and roll is assumed to be zero. This allows a homography to be parameterised by only three parameters and enables matches to be discarded that would otherwise be consistent with a homography but which would lead to incorrect motion estimates. For example, if a planar surface (such as the side of a lorry) is visible in a pair of images, then feature matches between the two planes would be consistent with a homography model. However, they would not be consistent with the stricter motion model and will therefore be removed.
Under this motion model, a homography can be constructed between a pair of images based on the 2D displacement in the ground plane $(u, v)$ and the change in the yaw angle $\alpha$. This homography is constructed as follows. First, place the first image in a canonical pose:

$$R_1 = R(0, \beta^{\mathrm{measured}}, 0) \qquad (15)$$

$$t_1 = -R_1 \begin{bmatrix} 0 \\ 0 \\ h^{\mathrm{measured}} \end{bmatrix} \qquad (16)$$

The homography from the ground plane to this first image is given by:

$$H_1 = K \begin{bmatrix} (R_1)_{\cdot,1:2} & t_1 \end{bmatrix} \qquad (17)$$

where $(R)_{\cdot,1:2}$ denotes the first two columns of $R$. The pose of the second image relative to the first image is defined as:

$$R_2(\alpha) = R(\alpha, \beta^{\mathrm{measured}}, 0) \qquad (18)$$

$$t_2(u, v) = -R_2(\alpha) \begin{bmatrix} u \\ v \\ h^{\mathrm{measured}} \end{bmatrix} \qquad (19)$$

Therefore, the homography from the ground plane to the second image is parameterised by the ground plane displacement and change in yaw angle:

$$H_2(u, v, \alpha) = K \begin{bmatrix} (R_2(\alpha))_{\cdot,1:2} & t_2(u, v) \end{bmatrix} \qquad (20)$$

Finally, the homography from the first image to the second image can be defined as:

$$H_{1\to2}(u, v, \alpha) = H_2(u, v, \alpha)\, H_1^{-1}. \qquad (21)$$
Given a set of tentative matches between a pair of images, the random sample consensus (RANSAC) algorithm is now used to simultaneously fit the constrained homography model to the matches and remove matches that are outliers under the fitted model. Since the constrained homography depends on only three parameters, two matched points are sufficient to fit a homography. The fit is obtained by solving a nonlinear least squares optimisation problem. The RANSAC algorithm proceeds by randomly selecting a pair of matches, fitting the homography to the matches and then testing the number of inliers under the fitted homography. An "inlier" is defined as a point whose symmetrised distance under the estimated homography is less than a threshold (for example, 20 pixels in this embodiment). This process is repeated, keeping track of the model estimate that maximised the number of inliers. Once RANSAC has completed, there exists a set of filtered matches between a pair of images that are consistent with the constrained motion model. Although the model is overly strict, the use of a relaxed threshold means that matches survive even when there is motion due to roll, changes in pitch or changes in the height of the camera.
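A simplified sketch of this filtering loop; the minimal-set fit of $H(u, v, \alpha)$ (`fit_h`) is left abstract because the text specifies only that it is a nonlinear least squares fit over the three parameters, and the symmetrised distance below is one plausible reading of that term:

```python
import numpy as np

def ransac_constrained_homography(pts_i, pts_j, fit_h,
                                  n_iters=500, thresh=20.0):
    """RANSAC over the 3-parameter (u, v, alpha) homography model.

    pts_i, pts_j: (N, 2) matched, undistorted feature locations.
    fit_h: callback fitting H(u, v, alpha) to a minimal set of 2 matches
           and returning a 3x3 matrix.
    """
    def sym_dist(H, a, b):
        # Symmetrised transfer distance of match (a, b) under H.
        ah, bh = np.append(a, 1.0), np.append(b, 1.0)
        fwd = H @ ah
        bwd = np.linalg.solve(H, bh)
        return (np.linalg.norm(fwd[:2] / fwd[2] - b) +
                np.linalg.norm(bwd[:2] / bwd[2] - a))

    best_inliers = np.zeros(len(pts_i), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        sample = rng.choice(len(pts_i), size=2, replace=False)
        H = fit_h(pts_i[sample], pts_j[sample])
        d = np.array([sym_dist(H, a, b) for a, b in zip(pts_i, pts_j)])
        inliers = d < thresh
        if inliers.sum() > best_inliers.sum():  # keep the best model so far
            best_inliers = inliers
    return best_inliers
```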
Pose optimisation
The system now has initial estimates for the pose of every camera and also pairs of matched features between images that are close together in the sequence. A large-scale nonlinear refinement of the estimated pose of every camera can now be performed. Key to this is the definition of an objective function comprising a number of terms. The first term, $E_{\mathrm{data}}$, measures how well matched features align when projected to the ground plane. This is referred to as the data term:
$$E_{\mathrm{data}}(\{R_i, t_i\}) = \sum_{i=1}^{N} \sum_{j=1}^{o} \sum_{k=1}^{M_{ij}} \left\| h\!\left(H_i^{-1}\begin{bmatrix} x_{ijk} \\ 1 \end{bmatrix}\right) - h\!\left(H_{i+j}^{-1}\begin{bmatrix} y_{ijk} \\ 1 \end{bmatrix}\right) \right\|^2 \qquad (22)$$

where $M_{ij}$ is the number of matched features between image $i$ and $i+j$, and $x_{ijk} \in \mathbb{R}^2$ is the 2D position of the $k$th feature in image $i$ that has a match in image $i+j$. The 2D position of the corresponding feature in image $i+j$ is given by $y_{ijk} \in \mathbb{R}^2$. $H_i$ is the homography from the ground plane to the $i$th image and is given by:

$$H_i = K \begin{bmatrix} (R_i)_{\cdot,1:2} & t_i \end{bmatrix} \qquad (23)$$

The function $h$ homogenises a point:

$$h([u, v, w]^T) = [u/w,\ v/w]^T \qquad (24)$$
Equation 22 shares some similarities with a process known as "bundle adjustment" in the general structure-from-motion problem. However, there are some important differences. First, rather than measuring "reprojection error" in the image plane, we measure error when image features are projected to the ground plane. Second, the objective depends only on the camera poses: it does not need to estimate any 3D world point positions. The first difference is important because it encourages exactly what we ultimately want: namely, that corresponding image positions should align in the final orthomosaic. The second difference is important because it vastly reduces the complexity of the problem and makes it viable to process very large sets of images.
To solve Equation 22, the system initialises using the process described above and then optimises using nonlinear least squares. Specifically, the Levenberg-Marquardt algorithm is used with an implementation that exploits the sparsity of the Jacobian matrix to improve efficiency. Moreover, some additional terms (described in the next section) are included to softly enforce additional prior constraints on the problem.
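A sketch of how such a solve might be set up with SciPy; note that SciPy's sparsity-aware least-squares solver is the trust-region method 'trf' rather than Levenberg-Marquardt, so this is an approximation of the approach described, not a reproduction of it:

```python
from scipy.optimize import least_squares

def refine_poses(x0, residual_fn, jac_sparsity):
    """Large-scale refinement of the 6N pose parameters.

    x0:           initial parameter vector (6 values per image) from the
                  GPS-based initialisation.
    residual_fn:  stacks the ground-plane alignment residuals of equation
                  (22) and the prior residuals of the next section.
    jac_sparsity: 0/1 matrix marking which residuals depend on which
                  parameters; each match touches only two camera poses,
                  so the Jacobian is extremely sparse.
    """
    return least_squares(residual_fn, x0, jac_sparsity=jac_sparsity,
                         method="trf", verbose=2)
```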
Priors
Since it is to be expected that the orientation of the vehicle with respect to the road surface will remain approximately constant, it is possible to impose priors on two of the angles. First, side-to-side "roll" is expected to be small, only being non-zero when the vehicle is cornering. Hence, the first prior simply penalises the variance of the roll angle estimates from zero:

$$E_{\mathrm{roll}} = \sum_{i=1}^{N} \gamma_i^2 \qquad (25)$$
The second angular prior penalises variance in the angle between the camera optical axis and the road plane, i.e. the pitch angle:

$$E_{\mathrm{pitch}} = \sum_{i=1}^{N} \left(\beta_i - \bar{\beta}\right)^2 \qquad (26)$$

where $\bar{\beta}$ denotes the mean of the estimated pitch angles.
Next, variance in the estimated height of the camera above the road surface is penalised, because this is expected to remain relatively constant:

$$E_{\mathrm{height}} = \sum_{i=1}^{N} \left(h_i - \bar{h}\right)^2 \qquad (27)$$

where $h_i$ is the estimated height of the $i$th camera and $\bar{h}$ the mean of the estimated heights.
Finally, the estimated position of the camera in each image is encouraged to remain close to that provided by the GPS estimate:

$$E_{\mathrm{GPS}} = \sum_{i=1}^{N} \left\| \begin{bmatrix} u_i \\ v_i \end{bmatrix} - \begin{bmatrix} u_i^{\mathrm{GPS}} \\ v_i^{\mathrm{GPS}} \end{bmatrix} \right\|^2 \qquad (28)$$

where $(u_i, v_i)$ is the estimated ground-plane position of the $i$th camera.
The hybrid objective that we ultimately optimise is a weighted sum of the data term and all priors.
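The relative weights are not specified in the source, so, with assumed per-term weights $\lambda$, the optimised objective takes the form:

$$E = E_{\mathrm{data}} + \lambda_{\mathrm{roll}} E_{\mathrm{roll}} + \lambda_{\mathrm{pitch}} E_{\mathrm{pitch}} + \lambda_{\mathrm{height}} E_{\mathrm{height}} + \lambda_{\mathrm{GPS}} E_{\mathrm{GPS}}.$$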
Map Tile Index Generation
The image of the entire ground plane is presented as a raster map in which any pixel of the image is associated with the coordinates in the ground/reference coordinate system.
Depending on the level of detail of the ground plane (which defines the ground resolution, indicating how much distance on the ground is represented by a single pixel), the dimensions of the map can be extremely large. To enable creation and viewing of the image of the entire ground plane on the end product platform (one example implementation of this invention being the web), the image is cut into a non-overlapping grid of equal-sized tiles.
Each tile is identified by a coordinate describing its position on the ground plane.
Given the estimated pose of the camera (as discussed above) for every captured image together with the intrinsic camera parameters obtained during calibration, the captured images are then projected onto the ground plane as illustrated in Figure 4. As the position determined as part of the camera pose is defined with respect to the ground/reference frame, the projected ground-plane image is thus georeferenced. Depending on the camera trajectory, velocity of the camera motion, frame rate, field of view of the camera and other relevant factors, multiple projected images can overlap on the ground plane of the same tile.
Since the orthomosaic for each tile is created by combining into a single image the information from all the captured images that are projected into that tile, and because each tile typically contains many overlapping images as presented in Figure 5, the Map Tile Index database needs to be generated and updated whenever new images are acquired. The Map Tile Index database stores, for each tile, a data file containing the indices of the images that are included in that tile.
The procedure to determine in which tile index (or indices) a captured image is visible is as follows. Referring to Figure 2, after the camera poses are estimated, the image pixel coordinates $x = [x, y]^T$ of the four corners of the captured images (for images of the same size, these coordinates are equal) are projected onto the ground plane using the equation below, which returns the 2D ground coordinates $u$. Optionally, given the camera intrinsic parameters and distortion coefficients, the pixel coordinates of the four image corners may be undistorted to obtain distortion-free coordinates prior to projection.
$$u = h\!\left(H^{-1}\begin{bmatrix} x \\ 1 \end{bmatrix}\right) \qquad (29)$$

where $H$ is the homography defined in Equation (23), and $h$ is the function defined in Equation (24).
The projected ground coordinates $u$ are then intersected with the tile boundaries to determine in which tile index (or tile indices) the image is to be included. A simple method to perform the boundary intersection is to compute the bounding box coordinates from the four projected corners. The index of the current captured image is then added to the Map Tile Index data file of each intersected tile.
A Map Tile Index data file for a tile may contain its level of detail, its coordinates with respect to the 2D Cartesian ground coordinate system and all the image indices that are visible in the tile coverage. Since the images that are visible in the tile area may come from different sources, for example multiple image sequences, the data file may include the indices of the image sequences together with the indices of the images in the sequences. An example structure of a Map Tile Index data file is illustrated in Figure 6. In the figure, Z denotes the level of detail of the ground plane (also known as the zoom factor), and X and Y indicate the coordinates of the tile on the reference ground plane. $S_i$ indicates the index of an image sequence and $I_i$ represents the index of an image in the sequence $S_i$.
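A sketch of the bounding-box intersection described above; the tiling scheme (axis-aligned squares of a fixed ground size per zoom level) is an assumption, as the source does not fix one:

```python
import numpy as np

def tiles_covered_by_image(corners_uv, tile_size, zoom):
    """Tile indices intersected by a projected image.

    corners_uv: (4, 2) ground coordinates of the image corners, obtained
                from equation (29).
    tile_size:  side length of a tile in ground units at this zoom level.
    """
    # Bounding box of the projected corners, then every tile it touches.
    lo = np.floor(corners_uv.min(axis=0) / tile_size).astype(int)
    hi = np.floor(corners_uv.max(axis=0) / tile_size).astype(int)
    return [(zoom, x, y)
            for x in range(lo[0], hi[0] + 1)
            for y in range(lo[1], hi[1] + 1)]
```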
Orthomosaic Tile Creation
Since each tile typically contains many overlapping images as illustrated in Figure 5, in order to produce a single image for that tile the information from all these images must be combined in some way. The naive approach of simply averaging the images leads to very bad results (seams at projected image boundaries, loss of detail through blurring caused by imprecise alignment). To overcome these problems, the present invention combines the overlapping images by blending in the gradient domain. This is a standard technique in image stitching and seeks to preserve high frequency detail while hiding gradual changes in low frequency variations. However, the present approach contains a number of significant novelties. First, the image-to-ground-plane projection procedure is encapsulated into a single function operating in the reverse order, which offers two advantages: speed, and avoiding linear interpolation of a nonlinear mapping. Second, the weighting function used in this invention ensures that, among the overlapping images, the best gradient is selected for each pixel in the tile image, which encourages the inclusion of maximum detail. Third, the intensity values of all the pixels in the final tile images are computed by solving a sparse system of linear equations, which can be solved almost, if not actually, in real time. Fourth, the present invention ensures that there are no seams at the boundaries of tiles even though each tile is created independently. The detailed implementation procedure, as shown in Figure 3, is explained as follows.
Image to Ground Plane Remapping
The process of creating an orthomosaic for a tile starts by projecting all the images that belong to the tile, as stored in the Map Tile Index data file, onto the area on the ground plane that is covered by the tile. Projecting an image to the ground plane theoretically means projecting every pixel coordinate of the source image onto the ground plane defined with respect to the 2D Cartesian ground coordinate system. Instead of deriving the 2D ground coordinates from the pixel coordinates, the system of the present embodiment encapsulates the image-to-ground-plane projection procedure into a single remapping function operating in the reverse direction. The process of creating a ground-plane image from the captured image in this reverse order is illustrated in Figure 7, and the process flow diagram of the remapping function is shown in Figure 8. The single remapping function described in the present invention offers two advantages: speed, and avoiding linear interpolation of a nonlinear mapping.
The process of projecting a captured image to a tile on the ground plane is as follows. Since a tile is a fixed-size plane on the ground and has fixed coordinates in the georeferenced ground coordinate system, the coordinates of each pixel in the tile image (also referred to as the ground-plane image) can be determined. The ground-plane image is generated from the captured image in reverse order, in that the colours of pixels in the ground-plane image are obtained from their corresponding pixels in the captured image. The relationship between pixels in the ground-plane image and the $i$th camera's captured image is defined by the perspective projection matrix $P_i$:
$$P_i = K \begin{bmatrix} R_i & t_i \end{bmatrix} \qquad (30)$$
The distortion-free pixel coordinates $x' = [x', y']^T$ of a ground point $(u, v)$ are determined by forward perspective projection using the following equation:

$$x' = h\!\left(P_i \begin{bmatrix} u \\ v \\ 0 \\ 1 \end{bmatrix}\right) \qquad (31)$$

Since the captured image is distorted, the distortion factors must then be applied to the obtained coordinates so that they become coordinates $(x, y)$ that are distorted in the same way as the captured image. Knowing which location in the source image the pixel $(u, v)$ of the ground-plane image is associated with, it is possible to assign the image intensity $I(x, y)$ to the pixel $(u, v)$. Finally, a bilinear interpolation is performed to obtain the projected ground-plane image.
Seamless Orthomosaic Tile Stitching
Each tile will typically contain many overlapping ground-plane images which must be combined to generate a single tile image from all the projected ground-plane images. The naive approach of simply averaging the images leads to very low quality results (seams at projected image boundaries and loss of detail through blurring caused by imprecise alignment). To overcome this, we perform blending in the gradient domain. This is a standard technique in image stitching and seeks to preserve high frequency detail while hiding gradual changes in low frequency variations. However, we incorporate some significant modifications to gradient domain stitching which make it possible to stitch tiles independently whilst still ensuring that they are seamless when placed side by side.
This is done by constructing a sparse linear equation system in which the unknowns are the image intensities of every pixel in the output orthomosaic tile. The constant terms of the linear system, which are computed as illustrated in Figure 10, are composed of gradients in the horizontal and vertical directions, denoted $G_x$ and $G_y$, and the guide intensity image, denoted $I_g$. The procedure for computing $G_x$, $G_y$ and $I_g$ is described as follows.
For each ground-plane image, a weight coefficient in the range $[0, 1]$ is determined for every pixel as an element of the weight image $W$ of size $n \times n$, the same as the tile size. Pixels with high weight are those observed at high resolution, while pixels with low weight are those observed at low resolution. For example, high weight coefficients are assigned to pixels a short distance from the camera, which in this system are the pixels at the bottom of the captured image, while pixels a large distance from the camera are assigned low weights. Defining the captured image to have $r$ rows and $c$ columns, with the origin $(0, 0)$ at the top-left corner, the weight assigned to the pixels in the $y$th row is $y/r$. This scheme is illustrated in Figure 9.
The procedure for computing $W$ for a ground-plane image is performed in the same way as the image-to-ground-plane projection procedure illustrated in Figure 7. However, a slight change is made to Steps 3 and 4 of the process in Figure 8. In Step 3, the weight coefficient at the pixel $(x, y)$ of the captured image is obtained instead, and in Step 4 it is assigned to $W$ at the pixel $(u, v)$. The final weight image $W$ for the ground-plane image is then generated by performing a bilinear interpolation.
We denote the intensity at pixel $(x, y)$ in the desired ground-plane image as $I(x, y)$. The gradient in the $x$ direction can be approximated using forward differences as:

$$\partial_x I(x, y) \approx I(x, y) - I(x+1, y) \qquad (32)$$

Similarly for the $y$ direction:

$$\partial_y I(x, y) \approx I(x, y) - I(x, y+1) \qquad (33)$$
Given all the gradient images for each ground-plane image, the target gradient images for the horizontal and vertical directions, $G_x$ and $G_y$, are created by choosing the "best" gradient among all the gradient images. This is done by, for each pixel $(x, y)$ of $G_x$, finding which ground-plane image has the highest value of $W(x, y)$ and then assigning the gradient $\partial_x I(x, y)$ of that ground-plane image to the best gradient image $G_x$. The same procedure is applied to $G_y$. This scheme encourages the inclusion of maximum detail.
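This per-pixel selection is straightforward to express with NumPy; a sketch, assuming the k overlapping ground-plane images and their weight images have been stacked into arrays:

```python
import numpy as np

def best_gradients(grads_x, grads_y, weights):
    """Select, per pixel, the gradient from the image observed at the
    highest resolution (largest weight).

    grads_x, grads_y, weights: arrays of shape (k, n, n), one slice per
    overlapping ground-plane image on this tile.
    """
    best = np.argmax(weights, axis=0)      # (n, n) index of winning image
    rows, cols = np.indices(best.shape)
    gx = grads_x[best, rows, cols]         # target gradient image G_x
    gy = grads_y[best, rows, cols]         # target gradient image G_y
    return gx, gy
```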
The selected gradients provide targets for the final orthomosaic that the stitched image must try to match. However, providing gradient constraints alone would leave an unknown constant offset. To remove this indeterminacy and to encourage the overall brightness and colour to match the original captured images, we include guide intensities which the stitched image must also try to match. The guide intensity image, $I_{\mathrm{avg}}$, is simply the average of all the ground-plane images, as illustrated in Step 4 of Figure 10. In other words, we recover detail from the gradients of the best image but overall brightness from the average of all images. The overall problem can be written as a sparse linear system of equations. Such systems can be solved efficiently, meaning the tile stitching process could potentially be done in real time, on demand as requests for tiles are made from the viewing interface.
However, as the orthomosaic is computed independently for each tile, this causes seams at the boundaries of tiles when they are presented alongside other tiles. To avoid seams at the boundaries of tiles, overlaps are included with previously stitched neighbouring orthomosaic tiles, as presented in Figure 11. This is done by extending the size of the average image by $e$ pixels equally on each side to create a small overlap with the neighbouring tiles. We refer to this enlarged image of size $m \times m$ as the guide intensity image, $I_g$, presented as Step 5 in Figure 10. To create the guide intensity image at the boundary region, the first step is to check whether any adjacent tiles have already been stitched. If they have, a small overlap ($e$ pixels) of the neighbouring tile is added to the enlarged tile, as presented again in Figure 11. If not, the overlap region for that neighbouring tile is set to an invalid number (NaN) so that it is not included in the linear equation system.
In the overlap region, there is only the "guide intensity" constraint, which is weighted very highly in the objective function. In the implementation, the weights for the pixels in that region are set to 1. This forces the intensities of the pixels near the boundaries to blend smoothly with the adjacent tiles. In order to retain the detail of the current tile while still transitioning smoothly from its boundaries, the weights of the interior pixels increase steadily from 0 at the outer pixels of the original tile size over a distance of $e$ pixels until they reach $\lambda$, meaning the weight increases by $\lambda / e$ each step towards the centre of the tile. The weights then remain at this value for every pixel in the interior region. The result is that intensities at tile boundaries match exactly with the adjacent stitched tiles and any large intensity changes that would have occurred are made less visible as low frequency transitions. This is a key step that allows tiles to be stitched one at a time while still producing a seamless mosaic when adjacent tiles are viewed next to each other.
At this point, the constant terms of the linear equation system, $G_x$, $G_y$ and $I_g$, are all defined. The sparse system of linear equations can then be constructed as follows.
If all image intensities of the target orthomosaic are stacked into a long vector $I \in \mathbb{R}^{m^2}$, gradients in the horizontal and vertical directions can be computed by multiplication with gradient coefficient matrices $J_{G_x} \in \mathbb{R}^{m^2 \times m^2}$ and $J_{G_y} \in \mathbb{R}^{m^2 \times m^2}$ respectively: $J_{G_x} I$ gives a vector containing the horizontal gradient at each pixel, and $J_{G_y} I$ a vector containing the vertical gradients. Both $J_{G_x}$ and $J_{G_y}$ are sparse, since only two elements in each row are non-zero. Hence, the structure of $J_{G_x}$ looks like:

$$J_{G_x} = \begin{bmatrix} \ddots & & & \\ \cdots & 1 & -1 & \cdots \\ & & & \ddots \end{bmatrix} \qquad (34)$$
For pixels on the bottom or right boundary, it is possible to switch to backward finite differences to compute the gradient.
The sparse system of linear equations for the seamless image stitching is constructed as:

$$\begin{bmatrix} J_{G_x} \\ J_{G_y} \\ \operatorname{diag}(w_1, \ldots, w_{m^2}) \end{bmatrix} I = \begin{bmatrix} g_x \\ g_y \\ \operatorname{diag}(w_1, \ldots, w_{m^2})\, I_g \end{bmatrix} \qquad (35)$$

where $g_x$ is a vector containing the horizontal gradients for all pixels for which the gradient is defined. Similarly, $g_y$ contains the vertical gradients. $w_i$ is the weight associated with the $i$th pixel, computed according to the scheme shown in Figure 12. Although the system is sparse and thus quick to solve, the speed can be improved further by reducing the number of equations for guide intensity constraints. Instead of including all the pixels in $I_g$, only a small number of guide intensity pixels are included, namely those whose weights are equal to $\lambda$. This is possible because the system is over-constrained: the number of equations is sufficient for it to be solved using only the equations with gradient terms and the guide intensities at the boundaries. Not only does this improve speed, the quality of the stitched image is also improved, because it avoids the solution being pulled towards the overly smooth average image.
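A sketch of the whole solve with SciPy's sparse machinery, to be run once per colour channel; unlike the optimisation just described, this simplified version keeps every guide-intensity equation rather than only those with weight $\lambda$:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def grad_ops(m):
    # Sparse forward-difference operators J_Gx and J_Gy (equation 34);
    # each row has exactly two non-zero entries, +1 and -1, giving
    # I(p) - I(p + 1) in the appropriate direction.
    npx = m * m
    idx = np.arange(npx).reshape(m, m)
    r = np.arange(m * (m - 1))
    data = np.concatenate([np.ones(r.size), -np.ones(r.size)])
    jx = sp.csr_matrix((data, (np.concatenate([r, r]),
                               np.concatenate([idx[:, :-1].ravel(),
                                               idx[:, 1:].ravel()]))),
                       shape=(r.size, npx))
    jy = sp.csr_matrix((data, (np.concatenate([r, r]),
                               np.concatenate([idx[:-1, :].ravel(),
                                               idx[1:, :].ravel()]))),
                       shape=(r.size, npx))
    return jx, jy

def stitch_tile(gx, gy, guide, w):
    # Stack gradient constraints and weighted guide-intensity constraints
    # (equation 35) and solve in the least-squares sense.
    m = guide.shape[0]
    jx, jy = grad_ops(m)
    W = sp.diags(w.ravel())
    A = sp.vstack([jx, jy, W]).tocsr()
    b = np.concatenate([gx[:, :-1].ravel(), gy[:-1, :].ravel(),
                        (w * guide).ravel()])
    return lsqr(A, b)[0].reshape(m, m)
```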
Orthomosaic Map Generation
Since each orthomosaic tile is generated in such a way that it produces no seams at the boundaries between tiles, an orthomosaic for any region can be constructed by simply placing an appropriate subset of the tiles side by side according to their tile positions. In practice, a user interacts with the map by changing the region of interest (e.g. by dragging the map) or the zoom level, which generates requests for a set of tiles. These are either returned directly (if they were precomputed in advance) or stitched on the fly in an on-demand implementation. In principle, the entire orthomosaic for the whole captured region could be created by concatenating all stitched tiles together into one large map.

Claims (27)

1. A method of generating an orthomosaic image of a geographical survey area comprising:
a. recording a set of still images of the survey area using a land-borne camera, in which the camera has an optical axis that is angled downwardly with respect to a surface of the survey area and recording a geographical position corresponding to each image;
    b. comparing pairs of partially overlapping images to identify common features in the image to derive a model of the motion between the images;
    c. calculating a pose in respect of each image describing where the camera was in world coordinates and where it was looking when the image was captured;
    d. projecting each image onto a ground plane to create a set of image tiles; and
e. stitching together a plurality of image tiles to create an orthomosaic image corresponding to an area that is present in a plurality of the images recorded in step a.
2. A method according to claim 1 in which, in step b., motion between pairs of images is modelled using a homography that is a transformation valid when two images view the same planar scene from different positions.
3. A method according to claim 1 or claim 2 in which step b. includes simultaneously finding feature matches between pairs of images that are consistent with a homography and computing the homography that is consistent with these matches.
4. A method according to claim 2 or claim 3 including, in step b., a step of optimising the estimates of camera poses created in step b., by identifying poses that would give homographies between views that are consistent with those estimated from the image pairs in step b.
5. A method according to any one of claims 2 to 4 in which estimated homographies are filtered to discard those likely to be a poor model of the transformation between two views.
6. A method according to claim 5 in which a homography is deemed to be poor if the number of common features identified in step b. is below a threshold.
7. A method according to claim 5 or claim 6 in which a homography is deemed to be poor if, when applied to a first of an image pair, the number of common features identified in step b. that it maps erroneously onto a second of an image pair is above a threshold.
8. A method according to any one of claims 5 to 7 in which a pose is deemed to be poor if it deviates from an initial estimate derived from a recorded geographical position by more than a threshold.
9. A method according to any one of claims 5 to 7 in which a homography is deemed to be poor if it implies non-smooth motion.
10. A method according to any one of claims 2 to 9 in which an objective function is applied in step c., ensuring that the estimated poses give homographies that are consistent with those estimated between overlapping images.
11. A method according to claim 10 in which, for each of a pair of images, a homography is computed from the image to the ground plane, and this homography is applied to the 2D positions of matched image features to yield matched features on the ground plane, wherein the distances between pairs of matched features are residuals that are added to the objective function such that the square of these residuals is minimised.
12. A method according to any preceding claim in which, after step d., with each orthomosaic tile to be stitched, there is associated the set of images whose projections overlap that tile.
13. A method according to claim 12 in which the associated set is derived by computing the projection of the corners of each image and then performing an inside-polygon test for each tile.
14. A method according to claim 12 or claim 13 in which, for each pixel in a tile, there is applied a perspective projection operation into each image, giving a non-integer 2D image coordinate.
15. A method according to claim 14 in which the nonlinear distortion induced by the camera lens is applied to the 2D position to give the final image coordinate.
16. A method according to any preceding claim in which linear interpolation is used to compute the colour at each point in an image to be stitched to give a reprojection of the image to the ground plane.
17. A method according to any preceding claim in which, in step e., a weighting function that chooses the "best" gradient for each tile pixel from each image is applied to create the stitched image.
18. A method according to claim 17 in which the weighting function is chosen such that the gradient is taken from the image that observed that point at the highest resolution.
19. A method according to claim 17 or claim 18 in which the stitched image is computed by solving a sparse system of linear equations.
20. A method according to any one of claims 17 to 19 in which the selected gradients provide targets that the stitched image must try to match and guide intensities that the stitched image must also try to match during the stitching process.
21. A method according to claim 20 in which the guide is the average of the projected images.
22. A method according to any preceding claim in which, in step e., a check is made as to whether any adjacent tiles have already been stitched, and if they have, a small overlap into the existing tiles is added to the new tile.
23. A method according to claim 22 wherein, in the overlap region, there is only the guide intensity constraint, which is weighted very highly in the objective function, and there is a smooth transition to a lower weighted guide constraint for the interior of the tile.
24. A method according to any preceding claim in which, in step a., an initial estimate of a pose is derived from the recorded geographical position.
25. A method according to claim 24 in which the initial estimate includes the direction of the optical axis of the camera being determined by the recorded GPS direction of orientation, the viewing angle of the imaged surface being as determined during an initial calibration of the system, and the transverse rotation being zero.
26. A method according to any preceding claim in which step a. is carried out on a land vehicle.
27. A method according to claim 26 in which the vehicle is a road vehicle.
28. A method according to any preceding claim in which steps b. to e. are carried out on a programmed computer.
29. A method of image processing comprising performing steps b. to e. according to any preceding claim.
30. Computing apparatus programmed to perform steps b. to e. according to any preceding claim.
GB1620652.6A 2016-12-05 2016-12-05 Method and system for creating images Withdrawn GB2561329A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
GB1620652.6A GB2561329A (en) 2016-12-05 2016-12-05 Method and system for creating images
GB1715491.5A GB2557398B (en) 2016-12-05 2017-09-25 Method and system for creating images
GB2011002.9A GB2584027A (en) 2016-12-05 2017-09-25 Method and system for creating images
PCT/GB2017/053410 WO2018104700A1 (en) 2016-12-05 2017-11-13 Method and system for creating images
EP17808973.6A EP3549094A1 (en) 2016-12-05 2017-11-13 Method and system for creating images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1620652.6A GB2561329A (en) 2016-12-05 2016-12-05 Method and system for creating images

Publications (2)

Publication Number Publication Date
GB201620652D0 GB201620652D0 (en) 2017-01-18
GB2561329A true GB2561329A (en) 2018-10-17

Family

ID=58159665

Family Applications (3)

Application Number Title Priority Date Filing Date
GB1620652.6A Withdrawn GB2561329A (en) 2016-12-05 2016-12-05 Method and system for creating images
GB1715491.5A Active GB2557398B (en) 2016-12-05 2017-09-25 Method and system for creating images
GB2011002.9A Withdrawn GB2584027A (en) 2016-12-05 2017-09-25 Method and system for creating images

Country Status (3)

Country Link
EP (1) EP3549094A1 (en)
GB (3) GB2561329A (en)
WO (1) WO2018104700A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109631919B (en) * 2018-12-28 2022-09-30 芜湖哈特机器人产业技术研究院有限公司 Hybrid navigation map construction method integrating reflector and occupied grid
CN110084743B (en) * 2019-01-25 2023-04-14 电子科技大学 Image splicing and positioning method based on multi-flight-zone initial flight path constraint
CN110097498B (en) * 2019-01-25 2023-03-31 电子科技大学 Multi-flight-zone image splicing and positioning method based on unmanned aerial vehicle flight path constraint
WO2021046304A1 (en) * 2019-09-04 2021-03-11 Shake N Bake Llc Uav surveying system and methods
CN111161143A (en) * 2019-12-16 2020-05-15 首都医科大学 Optical positioning technology-assisted operation visual field panoramic stitching method
FR3110996B1 (en) * 2020-05-26 2022-12-09 Continental Automotive Construction of images seen from above of a section of road
US20220373473A1 (en) * 2020-10-05 2022-11-24 Novi Llc Surface defect monitoring system
DE102020213597A1 (en) * 2020-10-29 2022-05-05 Robert Bosch Gesellschaft mit beschränkter Haftung Masking method, computer program, storage medium and electronic control unit
CN113989450B (en) * 2021-10-27 2023-09-26 北京百度网讯科技有限公司 Image processing method, device, electronic equipment and medium
CN115830246B (en) * 2023-01-09 2023-04-28 中国地质大学(武汉) Spherical panoramic image three-dimensional reconstruction method based on incremental SFM

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU7939300A (en) * 1999-08-20 2001-03-19 Emaki, Inc. System and method for rectified mosaicing of images recorded by a moving camera
JP4341656B2 (en) * 2006-09-26 2009-10-07 ソニー株式会社 Content management apparatus, web server, network system, content management method, content information management method, and program
WO2008044911A1 (en) * 2006-10-09 2008-04-17 Tele Atlas B.V. Method and apparatus for generating an orthorectified tile
CN101893443B (en) * 2010-07-08 2012-03-21 上海交通大学 System for manufacturing road digital orthophoto map

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120300019A1 (en) * 2011-05-25 2012-11-29 Microsoft Corporation Orientation-based generation of panoramic fields
US20160150142A1 (en) * 2014-06-20 2016-05-26 nearmap australia pty ltd. Wide-area aerial camera systems
CN105592294A (en) * 2014-10-21 2016-05-18 中国石油化工股份有限公司 VSP excited cannon group monitoring system
US20160229555A1 (en) * 2015-02-10 2016-08-11 nearmap australia pty ltd. Corridor capture

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111678502A (en) * 2020-06-09 2020-09-18 中国科学院东北地理与农业生态研究所 Method for extracting frozen soil disaster information based on unmanned aerial vehicle aerial survey image
CN111678502B (en) * 2020-06-09 2022-06-14 中国科学院东北地理与农业生态研究所 Method for extracting frozen soil disaster information based on unmanned aerial vehicle aerial survey image

Also Published As

Publication number Publication date
WO2018104700A1 (en) 2018-06-14
GB2557398A (en) 2018-06-20
GB201620652D0 (en) 2017-01-18
EP3549094A1 (en) 2019-10-09
GB2557398B (en) 2021-08-04
GB202011002D0 (en) 2020-09-02
GB2584027A (en) 2020-11-18
GB201715491D0 (en) 2017-11-08

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)