NL2027547B1 - Method of and apparatus for determining deformations of quay walls using a photogrammetric system - Google Patents



Publication number
NL2027547B1
Authority
NL
Netherlands
Prior art date
Application number
NL2027547A
Other languages
Dutch (nl)
Other versions
NL2027547A (en)
Inventor
Hendrik Constantijn Lodder
Niels Treffers
Sjoerd Benjamin Staats
Martinus Pieter Kodde
Original Assignee
Ingb Geodelta B V
Priority date
Filing date
Publication date
Application filed by Ingb Geodelta B.V.
Priority to NL2027547A
Priority to EP21839700.8A
Priority to PCT/NL2021/050789
Application granted
Publication of NL2027547A
Publication of NL2027547B1


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/06Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C11/08Interpretation of pictures by comparison of two or more pictures of the same area the pictures not being supported in the same relative position as when they were taken
    • G01C11/10Interpretation of pictures by comparison of two or more pictures of the same area the pictures not being supported in the same relative position as when they were taken using computers to control the position of the pictures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/30Interpretation of pictures by triangulation


Abstract

MODUPS -24- An apparatus, system and method for monitoring quay walls is disclosed. The system is arranged to record image data of a quay wall. The image data is recorded from a plurality of angles, providing a plurality of perspectives, creating multi-perspective image data. The multi-perspective image data is analysed to determine deformations and/or displacements of the quay wall to be monitored.

Description

Method of and apparatus for determining deformations of quay walls using a photogrammetric system

Field of the invention
[0001] The present invention relates to a method of and apparatus for determining the three-dimensional deformations and/or displacements of a quay wall between two or more observation epochs using a photogrammetric system mounted on a waterborne or airborne vehicle, such as a boat or a drone, with high statistical quality analysis of the computed results and an interactive three-dimensional visualisation.

Background art
[0002] Systems for detecting defects in quay walls are known in the prior art. Patent document US2020/0312019 describes an automated benthic ecology system comprising a remotely operated vehicle upon which an environmental sensor package and photomosaicing technology are mounted, the remotely operated vehicle configured to operate in benthic habitats, the photomosaicing technology comprising a high-resolution still camera, a high-resolution video camera, and a stereoscopic camera, the environmental sensor package comprising a plurality of sensors. The Automated Benthic Ecology System (ABES) is a small, portable remotely operated vehicle (ROV) used to conduct photomosaicing surveys of: (1) biological communities inhabiting vertical structures such as piers and quay walls, (2) biological communities in areas of known UXO and buried munitions, (3) pier and quay wall integrity to investigate for cracks, leaks and other structural issues, and (4) compromised integrity of a ship's hull for planning purposes of the salvage operation as well as pre- and post-salvage surveys of biological impacts. The ABES obtains high- resolution imagery of the site, along with water quality information to provide a more complete ecological understanding of areas of interest that are inaccessible and/or areas that pose human health or safety access issues. Adding a stereoscopic payload and three-dimensional model generation capability has made the ABES capable of collapsing four surveys into one survey and providing a holistic view of the area of interest.
[0003] The system described in US2020/0312019 therefore relies on biological communities inhabiting quay walls in order to be able to detect defects in said quay walls. In particular, US2020/0312019 is concerned with determining cracks, leaks and structural issues of submerged quay walls by means of a tethered underwater remotely operated vehicle ROV, see [0013].
[0004] The present invention differs from this known method in that the present invention is concerned with quay walls above the waterline.

Summary of the invention
[0005] Waterways, for example canals, ports, riverbanks and so forth, are bounded by quay walls. Said quay walls might deform under the influence of varying loads, either from changing water levels or from loads on the quay walls themselves, for example from a road close to the quay wall. Usually, but not always, waterways are bounded by quay walls on either side of the waterway.
[0006] According to the present invention, a method and system for determining the three-dimensional deformations of a quay wall as defined above is provided, in which one or more imaging devices are used to capture the entire length of the quay wall from a multitude of directions. The images are taken in such a way that every point on the quay wall is imaged from at least three significantly different directions. Images are captured in epochs with an interval defined by the expected movement of the quay wall. After every epoch said image data is input to a processing unit. Said processing unit will statistically test the presence of any movement. If a movement is statistically significant, the deformation is computed with a full statistical quality description. Methods for performing such tests in traditional land survey have previously been published in the papers "Testing Methods for Adjustment Models with Constraints" by H. Velsink and "A risk evaluation method for deformation monitoring systems" by S. Zaminpardaz et al.
[0007] One aspect of the present invention relates to measuring deformation of quay walls.
[0008] A further aspect of the present invention relates to measuring displacement of quay walls.
[0009] The method of the present invention involves the use of one or more reference points and/or markers in order to determine deformation and/or displacement of the quay walls.
[0010] In one embodiment, the images are taken from a waterborne vehicle. This feature allows for surveying quay walls of waterways which do not have quay walls on both sides of the waterway.
[0011] In another embodiment, the images are taken from a manned or unmanned airborne vehicle. This feature allows for surveying quay walls of waterways which do not have quay walls on both sides of the waterway. Moreover, this feature allows the quay walls to be assessed when the waterway is not accessible by water, for example under legal restrictions.
[0012] In another embodiment, the images are taken from a tripod on the opposing side of the quay wall of a waterway.
[0013] In another embodiment, the images are taken from a land-based vehicle on the opposing side of the quay wall of a waterway.
[0014] An automatic process automatically finds corresponding common points between images from all epochs. Corresponding common points are points which represent the exact same position on the quay wall in the image. Corresponding common points are detected by finding similar contrast and/or brightness structures in the images. Document US6711293B1 discloses such a process.
[0015] In order to improve the number of detected corresponding common points, the method can be extended by an iterative approach. In the first iteration an approximate relative position of all images from all epochs is computed. In the subsequent iterations the number of corresponding common points is increased by searching specifically in subregions with known overlap based on the results from the previous iteration.
[0016] The system can be extended with a multitude of reference objects in case stability of the quay wall cannot be guaranteed for any point along the quay wall. The reference objects contain unique markers. The three-dimensional coordinates of the markers in the applicable coordinate system are measured by means of land survey.
[0017] Alternatively, the reference objects can be extended with a minimum of three retro-reflective prisms per object, each having a known offset to all the markers on the reference object. The prisms allow for more accurate observation of the location. By using retro-reflective prisms, the centre of the point can be observed more accurately and from a greater distance.
[0018] Alternatively, the reference objects can be extended to use one or more Global Navigation Satellite Systems (GNSS). To do this, a GNSS antenna and receiver are mounted with a known offset between the phase centre of the antenna and all the markers on the reference object.
[0019] Alternatively, the reference objects can be extended with a corner reflector for Interferometric Synthetic-Aperture Radar (InSAR) deformation monitoring of the reference object.
[0020] In another embodiment of the system, unique markers are placed on the quay wall to improve the reliability of the detection of corresponding common points.
[0021] A virtual interactive three-dimensional environment is available which displays the computed three-dimensional points from one or more of the epochs. The deformations in each epoch can be visualized in this environment, as well as the computed accuracy, test values and minimum detectable bias. Visualization is achieved by displaying vectors or colours.
Brief description of the drawings
[0022] The present invention will be discussed in more detail below, with reference to the attached drawings, in which:
[0023] Fig. 1 depicts the general measurement setup along a quay wall.
[0024] Fig. 2 depicts the basic pinhole camera model that is used in all computations.
[0025] Fig. 3 depicts the reference object.
[0026] Fig. 4 depicts the reference marker design.
[0027] Fig. 5 depicts a flow chart of the method of the present invention, covering the first part.
[0028] Fig. 6 depicts a flow chart of the method of the present invention, covering the second part.
[0029] Fig. 7 depicts a system according to the present invention.
Best mode of the invention
[0030] Quay walls (1004) are typically constructed as vertical structures built from stones, rocks, bricks or concrete slabs. The vertical structure is placed on a horizontal foundation slab or bar, which in turn rests on a foundation of pillars made of wood or concrete or other suitable materials. One side of the vertical structure faces a waterway and the other side of the vertical structure faces the land next to the waterway. The intent of the vertical structure is to retain the soil behind the land-facing side of the structure.
[0031] Failure of the quay wall is common and can be caused by improper design, excessive load on the topsoil surface behind the quay wall, tree roots or structural degradation due to rot, salinity or corrosion. Early detection of failures in a quay wall is possible by recognizing displacement or deformation of the outer surface of the quay wall.
[0032] Deformations of quay walls are best determined from a set of images taken with a camera (1001) equipped with a sensor (2002), typically a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor, with a fixed lens (1003), from a boat or another platform (1002) at a distance of approximately 1 to 50 meters, preferably 2 to 20 meters, more preferably 3 to 10 meters from the quay wall. The images should be taken in at least four different directions relative to the quay wall: first relative direction: essentially perpendicular; second relative direction: at an angle of about 45 degrees to the left; third relative direction: at an angle of about 45 degrees to the right; fourth relative direction: perpendicular but upside-down relative to the first image. Each perpendicular image should have an overlap of at least 60%, but preferably 80%, with the closest perpendicular image to the left and the closest perpendicular image to the right.
[0033] All images could be taken in either visible, near-infrared or ultra-violet light. Images taken in visible light can be either recorded and processed as colour images or as grey-scale images.
[0034] The focal distance (2005) of the lens should remain fixed throughout the collection process.
[0035] The focus of the camera should be fixed throughout the collection process of one set of images of a quay wall so as to avoid changes in the image geometry, also known as focus pumping.
[0036] The shutter speed of the camera should be as fast as possible and such that any form of motion blur is avoided.
[0037] Prior to the survey, the operator of the camera should take care that the relation between the aperture (2003), shutter speed and sensor gain is set in such a way that sharp, non-blurred images are obtained with a high degree of contrast. Said settings should be updated on the camera if the light conditions change significantly. Alternatively, auto-exposure with a minimum shutter speed and a maximum aperture can be employed.
[0038] Prior to the survey a multitude of reference objects (3013) are placed along the quay wall (1004) at an interval suitable for the desired accuracy. Markers (3003, 3005, 3007, 3008) are fixed to said reference objects. The positions of the markers on the reference objects are measured by means of traditional land survey methods using for example a theodolite and/or spirit level instrument. These measurements are performed in a local survey network with at least three permanent reference points of which a minimum of 7 coordinates is fixed.
[0039] The reference markers can be measured directly or indirectly by using a set of retro-reflective prisms (3004, 3006, 3009) with known offsets to the reference markers. The use of retro-reflective prisms allows for a higher accuracy and measurement over longer distances.
[0040] The reference objects are placed on top of the quay wall (3012) using a mounting plate (3010) that is adjustable in height. By changing the height of the mounting plate, the bottom marker can be positioned right above the water level independent from the construction height of the quay wall. The reference object is stabilized with a counterweight (3011).
[0041] The recorded set of images is then analysed by a central processing unit.
[0042] Within the overlaps of all images, potentially corresponding common points are searched for using a process as described in document US6711293B1. Corresponding common points are natural points that are not signalized in the environment with markers. This results in a multitude of corresponding common points between all images. Due to the nature of this process, up to 80% of these potential corresponding common points could be wrongly identified and therefore constitute outliers.
[0043] The reference markers are used to signal a certain recognizable location in the environment. The reference markers are designed to facilitate automatic detection and identification. The target design consists of a white circle (4001) with a concentric ring (4002) on a black base. Around this there is another concentric ring in which a 10-bit target identification is coded (4004-4013). The central circle is slightly oversized and has a small black dot in the middle to facilitate manual measurement. The rings are sized to allow for a realistic amount of defocus without losing the contrast to a significant extent.
[0044] For full automation of the procedure the targets present in the images have to be recognized automatically. The recognition results in approximate positions of the targets in the images. The first step in this procedure is the estimation of the background. Objects consist of excess density in an area of the size of the Point Spread Function (PSF) or slightly larger. The PSF describes the response of an imaging system to a point source or point object and is used to take the correct amount of blurring of point objects in the image into account for further processing. The algorithm detects sets of adjacent pixels with a pre-set density difference above the background value.
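The background-relative detection step described above can be sketched with connected-component labelling. This is an illustrative reconstruction, not the patented implementation; the function name, the constant background and the synthetic image are hypothetical, and SciPy is assumed to be available.

```python
import numpy as np
from scipy import ndimage

def detect_objects(img, background, min_diff):
    """Detect sets of adjacent pixels whose density exceeds the estimated
    background by a pre-set difference; returns (centroid, size) pairs."""
    mask = img > background + min_diff
    labels, n = ndimage.label(mask)                      # connected components
    centroids = ndimage.center_of_mass(img - background, labels, range(1, n + 1))
    sizes = ndimage.sum(mask, labels, range(1, n + 1))   # pixel count per object
    return list(zip(centroids, sizes))

img = np.zeros((5, 7))
img[2, 3] = img[2, 4] = 1.0          # a two-pixel "object" on a flat background
objects = detect_objects(img, background=0.0, min_diff=0.5)
print(objects)  # one object: centroid (2.0, 3.5), size 2.0
```

In practice the background would be estimated locally rather than being a constant, and the size limits from the next paragraph would be applied to the detected objects.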
[0045] To distinguish targets amongst all objects detected, a set of parameters is computed for every object. The parameters are used for a comparison with the characteristics of a target image. The main characteristics used are the following: 1. within limits set by the size of the target images two objects should be found at the same location, being the images of the circle and the ring of the target; 2. the size of these two objects should have a fixed ratio within bounds set empirically; 3. the absolute size of the two objects should be within limits that depend on the scale of the photograph, which in turn is to be adapted to the physical size of the targets photographed.
[0046] The search method described in the previous paragraph can be seen as the first step in the position estimation process. The final improvement of the position estimate is made with the "least squares template matching" method. This matching method requires start positions with a precision of about 2 pixels in order to assure convergence of the iterative procedure. Although the search procedure is expected to fulfil this requirement, an algorithm for start position improvement has been implemented. This algorithm is based on the principle of autocorrelation and results in the symmetry point of the digital image. The idea is to find the symmetry point by finding the maximum correlation of the image with the image mirrored with respect to the approximate symmetry point. This is done by minimizing the two-dimensional function C(x0, y0): C(x0, y0) = Σ (g(x, y) − g(2·x0 − x, 2·y0 − y))², with C(x0, y0) the correlation function, g(x, y) the density at (x, y) in the image and (x0, y0) the position of the symmetry point.
[0047] Without interpolation, the correlation function can be evaluated on a grid with half-pixel intervals. To obtain subpixel accuracy a two-dimensional second-order polynomial is fitted to a matrix of 3 by 3 datapoints of the correlation function. In this way the correlation function is approximated in an area the size of a pixel. The position of the minimum of the function gives the best estimate of the position of the symmetry point. The value of the minimum is an indication for the density noise in combination with the symmetry, or lack thereof, present in the image.
[0048] To assure convergence of the matching procedure, apart from a relatively accurate position, the scale of the target should be approximated. An indication for the scale is obtained through the autocorrelation procedure described above: the size of the so-called correlation window, a circular area for which the autocorrelation function is evaluated, is chosen in such a way that a symmetry- criterion and a contrast-criterion is met. The measure for symmetry is defined by the minimum of the correlation function relative to the assumed image noise. The contrast-criterion ensures a minimum size of the correlation window. Starting from a maximum size correlation window, depending on photo scale and size of the targets used, the size is decreased until both criteria are met. In this way the correlation window is fitted to the symmetric part of the target.
[0049] For the final step of locating the marker, template matching is applied. The template is an artificial digital image based on the target design. It contains only the circle and the ring of a target. The template image is smoothed with a 3 by 3 averaging filter to simulate convolution with the PSF.
The template is designed for the image of a slightly defocused target with average photo scale. The eight-parameter model for the least squares matching can be written as follows: g(x, y) = d0 · t(A · (x − x0, y − y0)) + g0, with g(x, y) the density at (x, y) in the image, t(x, y) the template, d0 and g0 the radiometric scale and offset, A the affine transformation as a 2 x 2 matrix and (x0, y0) the position of the template.
[0050] An affine transformation is chosen to model the geometric transformation because this is a good local approximation of the projective transformation from object to photo. This holds for a target on a flat surface. For the radiometric part of the matching model, a linear model is chosen.
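The eight-parameter model (radiometric scale and offset, a 2 x 2 affine matrix and a template position) can be illustrated by evaluating it on a pixel grid. A rough sketch with hypothetical names; nearest-neighbour template lookup stands in for the proper resampling a real matcher would use.

```python
import numpy as np

def template_model(template, A, x0, y0, d0, g0, shape):
    """Evaluate g(x, y) = d0 * t(A @ (x - x0, y - y0)) + g0 on a pixel
    grid; pixels mapping outside the template keep the offset g0."""
    h, w = shape
    th, tw = template.shape
    out = np.full(shape, g0, float)
    for y in range(h):
        for x in range(w):
            u, v = A @ np.array([x - x0, y - y0], float)
            iu = int(round(u)) + tw // 2    # template indexed from its centre
            iv = int(round(v)) + th // 2
            if 0 <= iu < tw and 0 <= iv < th:
                out[y, x] = d0 * template[iv, iu] + g0
    return out

tmpl = np.ones((3, 3))                      # toy "circle-and-ring" template
model = template_model(tmpl, np.eye(2), 2.0, 2.0, d0=2.0, g0=1.0, shape=(5, 5))
print(model)  # a 3x3 block of 3.0 centred at (2, 2) on a background of 1.0
```

In the actual least squares matching, the eight parameters would be iterated until the modelled densities best fit the observed image patch.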
[0051] To automate the identification of a target, each target has, next to its number printed in the upper right corner, a circular binary code containing the same number. The binary code consists of 10 bits, allowing for 1024 different numbers.
[0052] Reading the circular binary code consists of three steps: 1. sampling the code; 2. determining the start point of the code; 3. reading the code. Using the previously solved affine deformation of the target image, 200 or more samples of the binary code are taken from the image using bilinear interpolation for resampling. The sampling is done at regular angular distances. In the case of a total number of 200 samples this equates to 10 samples per bit along the centre of the bar code ring. The location of the least significant bit is determined by correlating the sampled code with a step function over half the circle. This is to detect the reference half of the circle (4003, 4014). The other half of the circle contains the 10-bit code (4004-4013). Reading this code is done in a straightforward manner by comparing the densities in the bit centre with a threshold density level. This threshold depends on the densities found in the binary code.
[0053] While the images observed from the camera sensor (2002) constitute a negative plane, all images are converted to a positive plane (2004). All image coordinates of markers and corresponding common points are stored with respect to this virtual positive plane.
[0054] Based on the computed corresponding common points as well as the detected reference markers, approximate values should be computed for the position and orientation of every image as well as the three-dimensional coordinates of the points which are represented in the terrain by the corresponding points and the reference markers.
[0055] The image pairs are ranked on their usability as a start pair for further computation. A start pair has at least three corresponding common points with at least seven known three-dimensional coordinates. The best start pair has many corresponding common points between the two images. The coordinates of the three corresponding common points to be used as a start can be defined in a local system and do not have to be according to a certain scale or orientation. Each combination of linearly independent coordinates will work. Using the initial three corresponding common points for at least two images, the exterior orientation is computed to continue. Once two or more images are oriented, forward intersection is used to calculate the three-dimensional terrain coordinates of all corresponding common points in the known images. This results in an increased number of known terrain points, to be used as input for further resections.
[0056] For the resection a closed form is used which does not use iterations and circumvents the need for initial approximations. In the resection, six exterior parameters are calculated, namely the three-dimensional position and three-dimensional rotation using at least three corresponding common points with known three-dimensional terrain coordinates and image coordinates in the image to be oriented. The closed form resection is performed in three or more steps.
[0057] The first step is computing the distances (radii) between the terrain points of three corresponding common points and the image position. This involves solving a fourth-degree polynomial which is made using distances between the terrain points and angles between image and terrain points. The fourth-degree polynomial reads: N1·n^4 + N2·n^3 + N3·n^2 + N4·n + N5 = 0. This equation needs to be solved for n. The values for N1, N2, N3, N4 and N5 can be found through substitution as described in the publication "A General Solution of a Closed-Form Space Resection" by Zhuoqiao Zeng and Xibo Wang.
[0058] Solving the fourth-degree equation results in one to four real answers, while complex results are discarded. For every real answer a set of radii can be calculated.
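Solving the quartic and discarding the complex results can be sketched with NumPy's polynomial root finder; the coefficient values in the example are arbitrary, not derived from a real resection.

```python
import numpy as np

def real_quartic_roots(coeffs, tol=1e-9):
    """Solve N1*n^4 + N2*n^3 + N3*n^2 + N4*n + N5 = 0 for n, keeping
    only the real answers; complex results are discarded."""
    return sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < tol)

# arbitrary example: (n - 1)(n - 2)(n^2 + 1) = n^4 - 3n^3 + 3n^2 - 3n + 2
roots = real_quartic_roots([1.0, -3.0, 3.0, -3.0, 2.0])
print(roots)  # approximately [1.0, 2.0]; the complex pair +/-i is dropped
```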
[0059] In the second step, for every set of radii, a three-dimensional trilateration is performed to calculate the three-dimensional coordinates of the image. Every trilateration gives two possible positions for the image. This increases the number of possible exterior parameter sets to between 2 and 8.
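The trilateration step, three known terrain points and three radii yielding two mirror-image candidate positions, can be illustrated with a textbook three-sphere intersection; names and numbers are hypothetical, not the patented code.

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Intersect three spheres (centres p1..p3, radii r1..r3); returns
    the two mirror-image candidate positions for one set of radii."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)       # local frame in the p1-p2-p3 plane
    i = ex @ (p3 - p1)
    ey = p3 - p1 - i * ex
    ey /= np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(p2 - p1)
    j = ey @ (p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))     # 0 when the spheres just touch
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez

# camera at (0, 0, 5): exact distances to three terrain points on the ground
a, b = trilaterate((0, 0, 0), (4, 0, 0), (0, 3, 0),
                   5.0, np.sqrt(41.0), np.sqrt(34.0))
print(a, b)  # the true position (0, 0, 5) and its mirror (0, 0, -5)
```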
[0060] In the third step, the three-dimensional rotation is calculated. The vectors of the image coordinates and terrain coordinates are centred on the image position calculated in the trilateration. By calculating the rotation of the bisection vectors of image-to-terrain vectors of the same point combinations in three dimensions, the quaternion elements x, y, z, w can be calculated. However, there is an ambiguity where the x, y and z elements can have the opposite sign, which leads to two possible three-dimensional rotations for every image position. Both results are stored as possible outcomes.
[0061] In an optional fourth step, the possible exterior parameter sets from step three are reduced to one exterior parameter set. This is done using an extra common point not used in any of steps one, two and three. For every possible exterior parameter set, the projected image position of the extra common point is calculated. The correct exterior parameter set gives the smallest difference between projected and measured image position of the extra common point.
[0062] The entire process as described for resection is repeated for a large number of random selections of corresponding common points. This novel variant on the Random Sample Consensus (RANSAC) approach computes an exterior orientation for each random set of at least three corresponding common points. The number of iterations needed depends on the available data and needs to be determined empirically for the environment. For each outcome a score is computed.
[0063] This score value of the outcome results from a forward intersection of all corresponding common points from the image pair. Only the correct exterior orientation will result in a good intersection point for every corresponding common point. The residual is the shortest distance between the two lines coming from both corresponding common points. A residual of 0 means a perfect intersection; larger values indicate that the intersection is not perfect, which is common in the presence of measurement noise. All intersection points with a residual smaller than a threshold value are counted. The threshold can be derived by scaling the a priori variance from the image coordinates to terrain coordinates. The total count of intersections with a residual smaller than the threshold is the score value.
[0064] Alternatively, this score value of the outcome results from the number of similar exterior parameter sets resulting from other random points.
[0065] The randomly selected set of corresponding common points resulting in the largest score value is the selected outcome of this process.
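The scoring described above, counting forward intersections whose residual (the shortest distance between the two rays) stays below a threshold, can be sketched as follows. Hypothetical names; rays are given as an origin plus a unit direction.

```python
import numpy as np

def ray_distance(o1, d1, o2, d2):
    """Residual of a forward intersection: the shortest distance between
    two rays given by origin o and unit direction d."""
    n = np.cross(d1, d2)
    nn = np.linalg.norm(n)
    if nn < 1e-12:                                  # (near-)parallel rays
        w = o2 - o1
        return np.linalg.norm(w - (w @ d1) * d1)
    return abs((o2 - o1) @ n) / nn

def intersection_score(pairs, threshold):
    """RANSAC-style score: the number of corresponding-common-point ray
    pairs whose intersection residual is below the threshold."""
    return sum(ray_distance(*pair) < threshold for pair in pairs)

d1 = np.array([1.0, 2.0, 3.0]); d1 /= np.linalg.norm(d1)
d2 = np.array([-9.0, 2.0, 3.0]); d2 /= np.linalg.norm(d2)
pairs = [
    (np.zeros(3), d1, np.array([10.0, 0.0, 0.0]), d2),   # rays meet at (1, 2, 3)
    (np.zeros(3), d1, np.array([10.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])),
]
print(intersection_score(pairs, 0.1))  # -> 1: only the first pair intersects
```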
[0066] An intersection threshold is predefined at which the residuals of a forward intersection are small enough to use the result of that forward intersection as approximate value in a non-linear least squares adjustment.
[0067] Using the selected exterior orientation, all corresponding common points from the image pair are positioned in three-dimensional space by forward intersection. Only intersection points with residuals smaller than the predefined intersection threshold are used.
[0068] The selected exterior orientation is stored as the outcome together with the computed terrain points from the forward intersection. This result, together with all previously computed exterior orientations and terrain points, is input into a bundle block adjustment for further refinement. The working of the bundle block adjustment is described later in this document.
[0069] After completing the procedure described above for all image pairs, the outcome of the final bundle block adjustment is a set of exterior orientations and terrain points. A three-dimensional similarity transform is performed to translate, rotate and scale the computed exterior orientations and terrain points to a set of at least three points with known coordinates in the terrain system.
[0070] From all detected corresponding common points, the subset is now selected of points for which it is now known, based on the consensus in the computation of the approximate values, that they are not outliers. The other points, which are outliers, are discarded.
[0071] In every image, the photo coordinates of the corresponding common points within that image are compared to the photo coordinates of the corresponding common points in all other images for which it can be assumed that they are very similar. The homography transform is computed between the photo coordinates of every pair of images.
[0072] Using the homography, the corners of the first image are transformed into the plane of the second image. The average change of coordinates of the corner points is computed. If the average change is below a threshold value, the two images are very similar in position and orientation. The threshold value depends on the chosen setup; a threshold of 42 will generally yield good results.
[0073] Once it is established that two images are very similar, one of the images is discarded. This process reduces the risk of a singular matrix, which occurs when two very similar images are added to the subsequent processing steps.
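The near-duplicate test can be sketched as follows, assuming a 3 x 3 homography matrix H mapping the first image plane to the second; the function name and the example image size are hypothetical.

```python
import numpy as np

def mean_corner_shift(H, width, height):
    """Transform the corners of the first image into the plane of the
    second image with homography H and return the average coordinate
    change of the corner points."""
    corners = np.array([[0, 0], [width, 0], [width, height], [0, height]], float)
    homog = np.c_[corners, np.ones(4)]          # homogeneous coordinates
    mapped = homog @ H.T
    mapped = mapped[:, :2] / mapped[:, 2:3]     # back to Euclidean coordinates
    return np.linalg.norm(mapped - corners, axis=1).mean()

# a homography that is a pure 3-pixel horizontal shift
H = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
shift = mean_corner_shift(H, 6000, 4000)
print(shift)  # -> 3.0: every corner moves exactly 3 pixels
```

Comparing this average shift against the chosen threshold then decides whether one of the two images is discarded.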
[0074] Approximate values for the focal length (2005) and the principal point of autocollimation (2006, 2007), the offset between the centre of the image (2008) and the intersection of the optical axis through the negative image plane (2009) or positive image plane (2010), should be obtained with an a-priori camera calibration. Calibrated values should have a precision of at least 0.01 mm.
[0075] With approximate values available for all terrain coordinates as well as the position and orientation of all images, the best unbiased values for these parameters can be estimated using a non-linear least squares adjustment. To do this, the image observations, expressed as row and column values in the images, are converted to x- and y-coordinates in millimetres on the camera sensor relative to the middle of the sensor (2008). Each image coordinate on the negative plane (2011) is optionally converted to a coordinate on the positive plane (2012). Each image coordinate is used as an observation in the least squares adjustment.
[0076] Each image coordinate (2011) relates to a terrain coordinate (2013) and an image position and orientation through the collinearity equations:

x = -f · (r11(X − X0) + r21(Y − Y0) + r31(Z − Z0)) / (r13(X − X0) + r23(Y − Y0) + r33(Z − Z0))

y = -f · (r12(X − X0) + r22(Y − Y0) + r32(Z − Z0)) / (r13(X − X0) + r23(Y − Y0) + r33(Z − Z0))

In these equations, (X0, Y0, Z0) are the coordinates of the image position while (X, Y, Z) are the three-dimensional terrain coordinates of the point (2013). The elements of the rotation matrix R of the camera attitude are denoted by rij. The focal length of the camera is denoted as f. All these elements are treated as unknown parameters in the least squares solution. The observed image coordinates are denoted by x and y, which represent the image pixel position converted to millimetres using the known sensor size.
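As a sketch, the collinearity equations can be evaluated for a known camera pose as below. The function name and argument layout are illustrative assumptions; the indexing of R follows the column-wise convention of the equations above:

```python
import numpy as np

def collinearity_project(f, R, X0, X):
    """Project terrain point X through a camera at position X0 with rotation
    matrix R (3x3) and focal length f (mm). Returns (x, y) in mm on the sensor."""
    d = np.asarray(X, float) - np.asarray(X0, float)
    u = R.T @ d            # u[0], u[1]: numerators; u[2]: common denominator
    return -f * u[0] / u[2], -f * u[1] / u[2]
```

With the identity rotation and the camera at the origin, a point at (1, 2, 10) and f = 50 mm projects to (-5, -10) mm, matching the -f sign convention used in the text.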
[0077] The non-linear equations are linearized using a first-order Taylor expansion. This linearization is performed using the approximate values computed before. This results in a linear system of equations expressed as:

y = Ax + e

with y the vector of observations, x the vector of unknown parameters to be estimated, A the model matrix and e the vector of unknown residuals.
[0078] The matrix A should be stored as a sparse matrix.
[0079] The system of equations is further extended with the terrestrial survey observations using their respective observation equations.
[0080] The system of equations is further extended with observed X, Y and Z coordinates of terrain points from GNSS-observations when available.
[0081] The system of equations is further extended with the calibrated focal distance (2005) and principal point of autocollimation (2006) when available.
[0082] Each observation in the y-vector is assigned an a-priori variance. This variance, which is the standard deviation squared, is used as a weight for each observation. The variances of the observations are stored on the diagonal of a matrix. In order to improve computational efficiency, the median value of all variances is stored as σ². The cofactor matrix Qy can now be computed from:

D{y} = σ²Qy
[0083] For best statistical testing, covariance between observations can optionally be added to the adjustment by adding covariance values to the non-diagonal elements of the σ²Qy matrix. Covariance is to be assumed between the x- and y-image coordinates for best testing results.
[0084] The variance for all image coordinates should be approximately 1/3rd of the physical pixel size on the sensor.
[0085] The variance of the calibrated focal distance (2005) and principal point of autocollimation (2006) should be equal to the variance expressed on the calibration report.
[0086] The variance of land survey and GNSS-observations should be estimated from experimentation before commencing the survey.
[0087] The values for the unknown parameters can now be estimated using:

x̂ = (A^T Qy^-1 A)^-1 A^T Qy^-1 y
[0088] The solution of this equation is computed using Cholesky decomposition for sparse matrices or other suitable solvers for linear systems of equations.
[0089] An iteration threshold is predefined at which the improvement in the non-linear least squares iteration is small enough to assume that the adjustment has converged and the result no longer improves significantly.
[0090] Due to the linearization with approximate values, the solution needs to be iterated with the outcome of one computation as approximate values for the next iteration. Iteration should continue until no improvement larger than the iteration threshold is obtained in the parameters.
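The iterated estimation of paragraphs [0087]–[0090] can be sketched as a Gauss-Newton loop. This is a dense illustration; a real system would use the sparse storage and solvers mentioned in [0078] and [0088], and the helper names below are assumptions:

```python
import numpy as np

def weighted_lsq_step(A, Qy_inv, dy):
    """One Gauss-Newton step: solve the normal equations for the correction."""
    N = A.T @ Qy_inv @ A          # normal matrix (A^T Qy^-1 A)
    rhs = A.T @ Qy_inv @ dy
    return np.linalg.solve(N, rhs)

def iterate(model, jacobian, y, Qy_inv, x0, tol=1e-8, max_iter=50):
    """Re-linearise at each solution until the update drops below the threshold."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        dy = y - model(x)                       # residual at the current values
        dx = weighted_lsq_step(jacobian(x), Qy_inv, dy)
        x = x + dx
        if np.max(np.abs(dx)) < tol:            # iteration threshold of [0089]
            break
    return x
```

For a linear model the loop converges in one step; for the collinearity equations the model and Jacobian are re-evaluated at each iteration's outcome, exactly as described in [0090].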
[0091] After completing the final iteration, the Qx̂ matrix is computed. This matrix is computed as Qx̂ = (A^T Qy^-1 A)^-1. The Qx̂ matrix contains the a posteriori variances and covariances of all computed unknown parameters. In essence this means that this matrix contains the variance for all coordinates as well as the correlation between the parameters. This is crucial information for assessing and testing the significance of a deformation later on.
[0092] While the (A^T Qy^-1 A) matrix is typically sparse, the Qx̂ matrix generally is not. Due to the large number of parameters, the Qx̂ matrix will not normally fit in the memory of a computer. Therefore, a sparse Cholesky decomposition is computed first. From the Cholesky decomposition each column is computed independently. This process can be performed in parallel using multiple cores on a multi-core machine.
[0093] On larger systems, the size of Qx̂ can become too large, meaning that storage in memory is no longer possible. As the matrix is symmetric, only one triangle needs to be stored. This should be done in a storage structure that does not require a continuous block of memory, but individual memory blocks per column.
[0094] The matrix with variances and covariances of the adjusted observations is computed as Qŷ = A Qx̂ A^T. This matrix is by definition even larger than Qx̂. However, only storage of the elements on the main diagonal is needed. The computation should proceed in a nested loop over both matrix multiplications to avoid excessive delay when computing large systems. The nested loop runs as shown in the pseudo code below. In this code, m represents the number of observations. RowP is the array with the row pointers of the sparse matrix A: each element gives the index in the array Values of the first non-zero element of the corresponding row. The array ColumnInd contains the column indices of the non-zero values. Its elements correspond to the elements of the array Values. The Values array contains the actual values of the matrix A.

for (i = 0; i < m; i++)
    v_AQAii = 0
    for (j = RowP[i]; j < RowP[i+1]; j++)
        v_AQij = 0
        for (k = RowP[i]; k < RowP[i+1]; k++)
            v_AQij += Values[k] * Q[ColumnInd[k], ColumnInd[j]]
        v_AQAii += v_AQij * Values[j]
    diagonal[i] = v_AQAii
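For reference, a runnable version of the pseudo code above, using the same CSR layout (row pointers, column indices, values); the small matrices in the usage check are illustrative, not from the patent:

```python
import numpy as np

def diag_AQAt(rowp, colind, values, Q, m):
    """Diagonal of A·Q·A^T for a CSR matrix A given by (rowp, colind, values),
    without forming the full m-by-m product."""
    diag = np.zeros(m)
    for i in range(m):
        acc = 0.0
        for j in range(rowp[i], rowp[i + 1]):
            aq = 0.0  # element of (A·Q) at (i, colind[j]), over row i's non-zeros
            for k in range(rowp[i], rowp[i + 1]):
                aq += values[k] * Q[colind[k], colind[j]]
            acc += aq * values[j]
        diag[i] = acc
    return diag
```

Because only the diagonal of Qŷ is needed, the memory footprint stays linear in the number of observations, which is the point made in the paragraph above.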
[0095] An F-test is performed on the computed results and stored in a log file. The F-test is computed as follows, with m the number of observations and n the number of unknowns:

ê = y − A x̂

F = (ê^T Qy^-1 ê) / (σ² (m − n))
[0096] For each observation a w-test is performed using the following equations:

ê = y − A x̂

Qê = Qy − Qŷ

wi = êi / σêi

where êi is the i-th element of the vector ê and σêi the square root of the i-th element on the main diagonal of the matrix σ²Qê.
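A minimal sketch of the w-test computation, assuming dense matrices and the quantities defined above (Qy, Qŷ and the variance factor σ²); the function name is an assumption:

```python
import numpy as np

def w_tests(y, A, x_hat, Qy, Qy_hat, sigma2):
    """w-test statistic per observation: residual over its standard deviation."""
    e_hat = y - A @ x_hat
    Qe = Qy - Qy_hat                       # cofactor matrix of the residuals
    sigma_e = np.sqrt(sigma2 * np.diag(Qe))
    return e_hat / sigma_e
```

For example, when estimating the mean of three equally weighted observations [0, 0, 3], the outlying third observation receives the largest w-value (√6 ≈ 2.45) and would be the first candidate for data snooping.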
[0097] The w-test threshold value can be determined from the standard normal distribution function. Assuming an unreliability factor of 0.1%, the threshold value computed from this distribution is 3.28. Any other threshold can be applied as seen fit for the application.
[0098] If a w-test value is larger than the threshold, the observation is rejected, most probably due to a measurement error.
[0099] The observation with the largest w-test is eliminated from the observation vector after which the entire Bundle Block Adjustment is repeated. This process, called data snooping, is iterated until no other observations with a w-test larger than a certain threshold are present in the data.
[0100] For large systems, more than one rejected observation can optionally be eliminated at a time.
[0101] The process described above is repeated in its entirety for each subsequent epoch. The results for each epoch are stored in a database.
[0102] After every epoch, except for the first one, the deformation testing process is commenced.
In this step the search for corresponding terrain points is repeated, this time to find corresponding common points between two epochs. Again, a process such as the one described in US6711293B1 is used for this step.
[0103] If the reference objects or other objects other than the quay wall have moved between two epochs, these can optionally be masked from the images, thereby avoiding corresponding common points found on the reference object which do not relate to the same three-dimensional location in the terrain. Masking of the reference objects can be achieved by changing the colour values of all pixels in a bounding box around the reference object to black, or by keeping a mask polygon in memory in which all detected corresponding common points are subsequently deleted.
[0104] Alternatively, points on the reference objects and/or other objects other than the quay wall can be removed from the deformation analysis using the three-dimensional distance to the reference markers on the reference object.
[0105] The search for similar points between two epochs is generally harder because there are more changes in light conditions and terrain structure. This is alleviated by a process of guided feature matching. Each image is subdivided into a set of tiles. Any number of tiles is possible; the number of tiles depends on the application. A set of 16 tiles derived from a four-by-four raster generally works well. For each tile, the overlapping tiles from images in the other epochs are selected. This is done by reprojecting the tile boundaries onto all other images using the previously computed position and attitude. Corresponding common points are then searched only between the overlapping tiles.
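The four-by-four tiling can be sketched as follows. The helper is illustrative, not from the patent, and integer division is an assumed way of handling image sizes that are not divisible by the raster size:

```python
def tile_bounds(width, height, n=4):
    """Split an image into an n-by-n raster; returns (x0, y0, x1, y1) per tile."""
    tiles = []
    for r in range(n):
        for c in range(n):
            tiles.append((c * width // n, r * height // n,
                          (c + 1) * width // n, (r + 1) * height // n))
    return tiles
```

Each tile boundary would then be reprojected onto the images of the other epoch, and feature matching restricted to the tiles whose reprojections overlap.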
[0106] If a corresponding point between two epochs was previously found as a corresponding point in one epoch, this point’s identifiers from all epochs are stored. This is a crucial step to maintain integrity in the adjustment result between subsequent epochs.
[0107] A new bundle block adjustment is defined comprising all epochs together, extended with the newly detected corresponding common points.
[0108] The new combined bundle block adjustment is extended with constraints. Two or more constraints are defined for every corresponding coordinate between two epochs, at least a transformation constraint and a deformation constraint.
[0109] The transformation constraint specifies the transformation that brings a coordinate into the coordinate system of a predefined epoch. This constraint is added as a new observation with a variance of zero or a value close to zero.
[0110] The deformation constraint states the expected three-dimensional movement of a coordinate between the two epochs. The deformation constraint equations can be any equation describing movement between epochs. This constraint is added as a new observation with a variance of zero or a value close to zero.
[0111] A hypothesis of the bundle block adjustment with deformation is formed by the assumptions and expectations of the deformation that are made in the deformation constraints.
[0112] The expected deformations can be entered based on expert assessment from quay wall engineers.
[0113] The expected deformations can also be computed using a Finite Element Analysis of the quay wall for various failure mechanisms of the quay wall.
[0114] To apply the Finite Element Analysis, a local three-dimensional analysis of deformation and stability is performed using Finite Element Analysis. Using a mesh, a realistic model with initial stress and pore pressure fields in equilibrium with the soil weight of the quay wall is generated.
Subsequently, a geotechnical failure mechanism is induced in the model and the resulting deformation is computed using the Finite Element Method. The resulting deformed shape of the quay-wall is used as an alternative hypothesis for testing of deformations.
[0115] On quay walls with constant geotechnical conditions along the wall, the Finite Element Analysis can be performed once. Subsequently, the resulting deformation can be tested at an interval of for instance one meter along the quay wall.
[0116] In most situations there will be no a priori knowledge about the expected movement, in which case the deformation constraints should first be set to “no deformation”.
[0117] In situations without a priori knowledge of the expected movement, one can also opt to create a constraint with an infinitely large variance, in which case the estimated length of the constraint after adjustment is an indication of the movement of the quay wall. Alternatively, this constraint can also be modelled with a large variance close to infinity, or as an additional parameter without corresponding constraint.
[0118] The Bundle Block Adjustment is performed in the same way as described before. At the end of the adjustment, testing is performed using both the F-test and the w-test.
[0119] If the F-test of the Bundle Block Adjustment is accepted, then the said hypothesis of the deformation used in that adjustment is accepted. This means the assumptions and expectations of the deformations were accepted and the analysis can continue using this adjustment result. If the F-test of this Bundle Block Adjustment is rejected, then alternative hypotheses are defined with different assumptions and expectations of the deformations in the adjustment. The Bundle Block
Adjustments using these alternative hypotheses are performed. The statistically best Bundle Block Adjustment will be used to continue.
[0120] As all the observations in the bundle block adjustment have been tested in the adjustment per epoch, any w-test larger than the w-test threshold indicates that the expected movement as defined by one of the constraints is not valid.
[0121] The constraint with the largest w-test value above the threshold is removed from the adjustment. Optionally, a larger set of constraints with w-test values above the threshold can be removed as well.
[0122] Optionally, for every removed constraint, a distance vector between the coordinates is added as unknown parameter to the adjustment in order to estimate the deformation at this point between these epochs. This process is repeated until no more constraints have a w-test value larger than the threshold. The computed deformation vectors are stored separately, together with the estimated a posteriori variance for the deformation from the Qx̂ matrix.
[0123] Equally important is the reliability of the results. This can be expressed as the Minimal Detectable Bias (MDB) of an observation. The MDB of observation i is computed as:

MDBi = σ · sqrt(λ0 / qr,i)

with qr,i the corresponding diagonal element from the Qr matrix, which is in turn computed as Qr = Qy^-1 Qê Qy^-1. The value for λ0 is a constant. When selecting a value of 17.075 for λ0, as is typically done in the literature, the interpretation of the MDB is that it represents the smallest error that can still be detected using the w-test with a probability of 80%. Other values can be chosen when suitable for the application.
[0124] The MDB of the computed terrain points and the deformations can only be derived from the previously computed MDBs of the observations. For every observation, the effect of the minimal detectable error in that observation on the computed coordinates can be computed with:

MDBx = Qx̂ A^T Qy^-1 Vy

where Vy is a vector with the same size as the y vector containing only zeroes, except at the position of the current observation, where the MDB value of that observation is inserted. The equation above is repeated for every observation, resulting in an equal number of MDBx vectors. The maximum at each position over these vectors is stored as the final MDB value for the estimated parameters. The interpretation is that this is the smallest error that can still be detected with a probability of 80% by performing w-testing on all the observations.
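A sketch of this propagation of observation MDBs to the estimated parameters, assuming dense matrices; the loop mirrors the per-observation Vy vectors described above, and the function name is an assumption:

```python
import numpy as np

def mdb_parameters(A, Qy_inv, Qx, mdb_obs):
    """Worst-case effect of each observation's MDB on the estimated parameters."""
    m = A.shape[0]
    P = Qx @ A.T @ Qy_inv          # propagation matrix: MDB_x = P · Vy
    worst = np.zeros(Qx.shape[0])
    for i in range(m):
        Vy = np.zeros(m)
        Vy[i] = mdb_obs[i]         # only the current observation carries its MDB
        worst = np.maximum(worst, np.abs(P @ Vy))
    return worst
```

The element-wise maximum over all observations gives the final MDB per estimated parameter, i.e. the smallest error still detectable by w-testing all observations.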
[0125] Within the data of the final epoch all images are reprojected to image planes parallel to the quay wall. Each plane is centred on the same point as the principal point (2008) of the image. Each image plane is rotated such that all rows in the images are horizontal and all rows of all images are pointing in the same direction.
[0126] Between each overlapping image pair, row-based contrast matching is performed.
[0127] For every pixel this results in a set of one or more corresponding pixel locations in the other images. Contrary to the previous matching techniques, corresponding common points are now available for every pixel, rather than only the pixels with a high amount of contrast.
[0128] All matched corresponding common points are inserted again into a bundle block adjustment. This time all exterior orientation parameters are kept as non-stochastic observations. This means that they remain fixed and cannot be changed. Each observation gets an a priori standard deviation; a standard deviation of 0.5 times the pixel size generally works well. The three-dimensional coordinates for every matched pixel are the unknown parameters to be estimated.
[0129] Optionally, for every observation a w-test is performed to test for any outliers where the w- value is larger than the threshold. All observations with a w-test larger than the threshold are removed from the adjustment. The process is repeated until no observations with a w-value larger than the threshold remain.
[0130] The result of this process is a dense three-dimensional point cloud.
[0131] All coordinates from the sparse and/or the dense point cloud are stored in an octree data structure. Levels of detail in the octree are generated by subsampling the entire point cloud from the most detailed level upwards.
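The level-of-detail generation can be sketched with a simple voxel filter. Keeping one point per cell and doubling the cell size per level are assumptions, since the text does not fix the subsampling rule:

```python
def subsample_levels(points, base_cell, n_levels):
    """Octree-style levels of detail by voxel subsampling, from the most
    detailed level upwards: one representative point kept per occupied cell."""
    levels = []
    cell = base_cell
    for _ in range(n_levels):
        seen, kept = set(), []
        for p in points:
            key = tuple(int(c // cell) for c in p)   # voxel index of the point
            if key not in seen:
                seen.add(key)
                kept.append(p)
        levels.append(kept)
        cell *= 2.0              # coarser cells for every higher level
    return levels
```

The viewer can then load the level whose point count matches the on-screen pixel budget, as described for the virtual environment below.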
[0132] A virtual three-dimensional environment is provided. All points from the octree data structure are visualized in this environment, coloured by the Red-Green-Blue (RGB) colour value of the corresponding pixel in the image.
[0133] The virtual environment utilises the octree such that only those parts of the octree are loaded whose level of detail matches the number of on-screen pixels available to display the points.
[0134] Additionally, an octree is created with the points from the deformation analysis. These points are visualized as well, coloured using a colour scale range going from one colour representing no deformation to another colour representing maximum deformation, using the subsequent colours from the colour spectrum.
[0135] The representation of deformations can be changed from colour to scaled vectors pointing in the direction of deformation with the length of the vector representing the magnitude of the deformation.
[0136] A user can click on every deformation point and obtain the absolute value of the deformation, the standard deviation, the Minimal Detectable Bias (MDB) and a graph with the deformation per epoch.
[0137] The user can change the scene to colour the points by standard deviation.
[0138] The user can change the scene to colour the points by Minimal Detectable Bias (MDB).
[0139] The representation of standard deviations can be changed from colour to three dimensional spheroids with the magnitude of each axis of the spheroid representing the scaled standard deviation in that direction.
[0140] The representation of minimum detectable biases can be changed from colour to three dimensional boxes with the size of the box in each direction representing the scaled minimum detectable bias in that direction.
Description of further embodiments
[0141] In another embodiment of the system, the images are taken from a helicopter or airplane.
[0142] In another embodiment of the system, the images are taken from a remotely piloted aircraft or vehicle.
[0143] In another embodiment of the system, the images are taken from a camera mounted on a tripod.
[0144] In another embodiment of the system, the platform is equipped with multiple cameras, each pointing in a different direction.
[0145] In another embodiment of the system, at least two cameras are rigidly mounted to each other, with each camera pointing in a different direction.
[0146] A fixed offset between the two perspective centres of the two cameras is added to the adjustment as a constraint.
[0147] In another embodiment of the system, at least two cameras are rigidly mounted to each other, with one camera pointing to the quay wall and one camera pointing to the opposing quay wall.
[0148] A fixed offset between the perspective centres of the cameras is added to the adjustment as a constraint.
[0149] Reference objects are positioned at the opposing quay wall as well and included in the computations. This will improve the accuracy of the deformation estimation as well as speed up the data collection process.
[0150] In another embodiment of the system a multitude of cameras is used which remain fixed at a single location along the quay wall and create images at a set interval.
[0151] In another embodiment, smaller reference objects containing just one marker are permanently mounted on the quay wall and used as a deformation point in every epoch.
[0152] In another embodiment of the system, a number of bits from the marker are used for error detection using a 2-bit checksum on all bits. If, upon processing, it is found that a marker was identified with the wrong number, the marker can be discarded, or the correct number can be reconstructed using other markers that have received the same three-dimensional position after the Bundle Block Adjustment.
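One plausible reading of a 2-bit checksum over the marker bits is a modulo-4 count of set bits; the patent does not specify the scheme, so the encoding below is purely illustrative:

```python
def encode_marker(data_bits):
    """Append a 2-bit checksum: the number of set data bits modulo 4."""
    chk = sum(data_bits) % 4
    return list(data_bits) + [(chk >> 1) & 1, chk & 1]

def check_marker(bits):
    """True when the trailing 2-bit checksum matches the data bits."""
    data, chk = bits[:-2], bits[-2:]
    return sum(data) % 4 == ((chk[0] << 1) | chk[1])
```

Any single flipped bit changes the set-bit count and is detected, which is the behaviour the embodiment relies on when deciding whether to discard a misread marker.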
[0153] In another embodiment of the system, the bits in the marker are assigned values for an Error Correcting Code, such that an error in reading the bits can be resolved automatically.
[0154] In another embodiment of the system, the markers are extended with additional bits. To this end the start data ring and the end data ring are also used to encode bits. To find the position of the least significant bit a cyclic search method is applied.
[0155] In another embodiment of the system, the reference objects are equipped with Quick Response codes (QR-code) to encode additional metadata including the calibrated reference values and the last date of calibration. These values can also be encoded through a unique identifier and a look-up database for these values. Upon processing the QR-codes are decoded and the data within the code is used as input for the Bundle Block Adjustment, removing the need for human input during processing.
[0156] In another embodiment of the system, the calibration parameters of the camera are added to the Bundle Block Adjustment as an adjustable parameter.
[0157] In another embodiment of the system, the calibration parameters of the camera are added to the Bundle Block Adjustment as an adjustable parameter based on an observation.
[0158] In another embodiment of the system, the reference objects are equipped with a GNSS-receiver and a GNSS-antenna. The distance between the phase centre and all markers on the reference marker is calibrated. The GNSS-receivers perform an observation while the reference object is stationary during the survey. The observed pseudo ranges are processed in a local GNSS-network with the other GNSS-stations on neighbouring reference markers. The computed GNSS-coordinates are added to the adjustment, together with distance constraints for the offset between the GNSS-antenna and the marker locations.
[0159] In another embodiment of the system, the reference objects are equipped with a GNSS- receiver and a GNSS-antenna. The distance between the phase centre and all markers on the reference marker is calibrated. The reference marker remains fixed at the current location and stays there in-between epochs.
[0160] The GNSS-receiver performs an observation with a set interval and transmits the observed data over a wired or wireless computer network to a central computing device.
[0161] The observed pseudo ranges are processed in a local GNSS-network with the other GNSS- stations on neighbouring reference markers.
[0162] The computed GNSS-coordinates that are closest in time to the epoch of a photogrammetric registration are added to the adjustment, together with distance constraints for the offset between the GNSS-antenna and the marker locations.
[0163] In another embodiment of the system, the reference objects are equipped with a corner reflector for INSAR deformation monitoring. The distance between the corner reflector and all markers is accurately calibrated. The deformation of the reference marker is estimated in every pass of a Radar satellite. Using this method, deformations of the reference marker can be estimated with mm-precision. The estimated deformation between epochs is used as a special constraint with known deformation between two epochs.
[0164] In another embodiment of the system, the quay wall is permanently equipped with a fibre optic cable that is rigidly attached to the wall. Permanent markers are placed on the fibre optic cable. The fibre optic cable is capable of sensing deformations. The total observed deformation in the fibre optic cable between two reference markers is added to the Bundle Block Adjustment as a special constraint with known deformation between two epochs.
[0165] In another embodiment of the system, the reference objects are observed with a Mobile Mapping System (MMS). The MMS is equipped with a GNSS-receiver and an Inertial Measurement Unit (IMU) for positioning. The system additionally has one or more lidar scanning systems and one or more cameras. The reference objects are scanned or photographed by the MMS. This yields a position of the reference object which can be translated to a position of the markers. These positions are added to the bundle block adjustment as observations.
[0166] In another embodiment of the system, the position and attitude of each image is determined by creating a fixed connection between the camera and a positioning device. This positioning device could be a combined GNSS-receiver and IMU, or any other device that gives a position or attitude or both. Using a fixed offset between the positioning device and the projection centre of the camera as well as a fixed rotation between these two systems, the position and attitude of the image can be computed from the observations. For additional geometrical reliability these observations should be added as additional observations in the Bundle Block Adjustment which is described in this document.
[0167] In another embodiment of the system the Bundle Block Adjustment is performed independently per epoch. A grid is overlaid on the quay wall. For each grid cell a plane is computed directly from the Bundle Block Adjustment by adding the plane parameters as unknowns to the adjustment. Now the constraints are applied on the corresponding planes rather than the corresponding common points.
[0168] In another embodiment of the system, planes are used as described above. The plane boundaries are defined by a crack detection algorithm that is run on all images. Each identified crack location is idealised to a line which represents the edge of one plane to be detected. The planes for the estimation of the deformation are derived from the detected crack lines.
[0169] In another embodiment of the system, the operator receives real-time feedback on the quality of the observations. Each image is, once made, instantly processed at low resolution by the central processing unit to determine its position and orientation. The user receives feedback if the image was sufficiently sharp, if the image was well oriented, if accurate positioning of the image was possible and if additional images are needed to capture the entire quay wall with sufficient accuracy.
[0170] List of reference signs
1001 = Camera
1002 = Observation platform
1003 = Objective lens
1004 = Quay wall
2001 = Camera housing
2002 = Image sensor
2003 = Aperture
2004 = Mathematical positive plane
2005 = Focal length
2006 = Principal point of auto collimation in negative plane
2007 = Principal point of auto collimation in positive plane
2008 = Perspective centre of the camera
2009 = Perspective centre of the camera projected on the negative plane
2010 = Perspective centre of the camera projected on the positive plane
2011 = Point P projected on the negative plane
2012 = Point P projected on the positive plane
2013 = Point P in the terrain as a three-dimensional point
3001 = Front
3002 = Left
3003 = Top reference marker
3004 = Retro reflective prism
3005 = Left reference marker
3006 = Left retro reflective prism
3007 = Bottom reference marker
3008 = Right reference marker
3009 = Right retro reflective prism
3010 = Mounting bracket
3011 = Counterweight
3012 = Top of quay wall
3013 = Reference object
4001 = Centre point of reference marker
4002 = Alignment ring
4003 = Start data ring
4004 = First bit data ring
4005 = Second bit data ring
4006 = Third bit data ring
4007 = Fourth bit data ring
4008 = Fifth bit data ring
4009 = Sixth bit data ring
4010 = Seventh bit data ring
4011 = Eighth bit data ring
4012 = Ninth bit data ring
4013 = Tenth bit data ring
4014 = End data ring
5001 = Start Epoch
5002 = Place reference objects on quay wall
5003 = Determine positions of markers on reference objects
5004 = Capture images along quay wall
5005 = Overlapping images
5006 = Search corresponding common points
5007 = Detect reference markers in images
5008 = Resection to achieve approximate values using random resection consensus
5009 = Bundle Block Adjustment
5010 = Statistical testing of observations
5011 = Observations rejected?
5012 = Yes, one or more observations are rejected
5013 = Data snooping
5014 = No, there are no rejected observations
5015 = Adjusted observations of current epoch
5016 = Constraint testing, see figure 6
6001 = Constraint testing
6002 = Adjusted observations of epoch 1
6003 = Adjusted observations of epoch 2
6004 = Adjusted observations of epoch ...
6005 = Adjusted observations of epoch n
6006 = Search corresponding common points between epochs using tiled subregions
6007 = Generate constraints between corresponding common points between epochs
6008 = Bundle Block Adjustment
6009 = Statistical testing of constraints
6010 = Constraints rejected?
6011 = Yes, one or more of the constraints are rejected
6012 = Remove constraint or update constraint to represent predicted deformation
6013 = No, there are no rejected constraints
6014 = Statistically tested deformations for each corresponding common point
6015 = Convert corresponding common points to three-dimensional point cloud
6016 = Generate dense point cloud and generate octree from sparse and dense point cloud
6017 = Colour points by deformation magnitude, standard deviation or MDB
6018 = Visualise point cloud in web based interactive virtual three-dimensional environment
7001 = Image recording device
7002 = Image sensor
7003 = Image recording device processing unit
7004 = Storage
7005 = Central Processing Unit
7006 = File storage
7007 = Microprocessor
7008 = Memory
7009 = Database
7010 = Network
7011 = Client computer device
7012 = Microprocessor
7013 = Storage
7014 = Memory
7015 = Display
7016 = Human input device
7020 = Attitude and/or positional sensing unit

Claims

1. An apparatus for monitoring quay walls (1004), the apparatus comprising: a mounting arrangement; an image recording device (1001, 7001) attached to the mounting arrangement; wherein the image recording device has a housing (2001); wherein the image recording device is further provided with a lens (1003) attached to the housing, the lens comprising an aperture (2003); wherein the image recording device comprises an image sensor (2002, 7002) placed in the housing; wherein the image recording device further comprises a processing unit (7003) for processing image data from the image sensor; a storage device (7004) connected to the image recording device for storing image data; a computing device (7005); wherein the computing device is arranged to enable communication with a memory (7008) on which a computer program with instructions and data is stored, the computer program being executable by the computing device; wherein the apparatus is arranged to record image data of a quay wall to be monitored.

2. The apparatus according to claim 1, further comprising a sensor unit (7020), wherein the sensor unit comprises at least an accelerometer or inertial measurement unit for measuring the attitude of the image recording device, and a GNSS receiver for measuring the position of the image recording device.

3. The apparatus according to claim 1 or 2, adapted to determine deformations and/or displacement of the quay wall to be monitored.

4. A reference object (3013) for providing a reference position, the reference object comprising: an elongate member having a front side and a rear side; a mounting bracket (3010) adjustably attached to the rear side of the elongate member.

5. The reference object according to claim 4, further comprising a cross member mounted on the elongate member.

6. The reference object according to claim 4, wherein the reference object further comprises at least one reference mark (3003) and/or a retroreflective device (3004) on the front side of the elongate member.

7. The reference object according to claim 5, wherein the reference object further comprises at least one reference mark (3003) and/or a retroreflective device (3004) on the front side of the elongate member and/or the cross member and/or the mounting bracket.

8. The reference object according to claims 4-7, wherein the reference object further comprises a GNSS receiver for determining the position of the reference object.

9. A system for monitoring quay walls, the system comprising: a mounting platform (1002); an apparatus according to claim 1 attached to said mounting platform.

10. The system according to claim 9, wherein the mounting platform comprises a tripod, a waterborne surface vehicle, a land vehicle or an aerial vehicle.

11. The system according to claim 9, further comprising one or more reference objects (3013) according to claims 4-8, wherein the one or more reference objects are placed on or close to the quay wall (1004) to be monitored.

12. A method for monitoring quay walls, the method comprising the steps of: determining the coordinates of the reference marks as input for the adjustment (5003); capturing image data of a quay wall to be monitored (5004); searching for corresponding common points in the overlapping images (5006); detecting the reference marks in the images (5007); calculating the approximate position and attitude of the images and the terrain coordinates of the corresponding common points (5008); calculating image position, image attitude and terrain coordinates using a bundle block adjustment (5009); statistically testing the observations (5010) and removing rejected observations (5013); searching for corresponding common points between images of different epochs (6006); generating conditions between corresponding common points between epochs (6007); performing a multi-epoch bundle block adjustment (6008); statistically testing the conditions (6009); rejecting or updating conditions to obtain the actual deformation (6010); writing out the statistically tested deformations (6014).

13. The method for monitoring quay walls according to claim 12, wherein image data is captured from a plurality of positions, to create a plurality of perspectives of the quay wall to be monitored.

14. The method according to claim 13, wherein the plurality of perspectives comprises at least one oblique field of view.

15. The method according to claim 13, wherein the plurality of perspectives comprises a perpendicular field of view.

16. The method according to claim 13, wherein the plurality of perspectives comprises an inverted perpendicular field of view.

17. The method according to claims 12 to 16, wherein the method further comprises the step of detecting highly similar images in order to remove highly similar images from the calculation.

18. The method according to claims 12 to 17, wherein the method further comprises a guided feature matching step, wherein known overlap between images is used to find corresponding common points.

19. The method according to claims 12 to 18, wherein the method further comprises the step of calculating a dense point cloud from the images of one epoch.

20. The method according to claims 12 to 19, wherein the method further comprises the steps of adjustment and statistical testing of the dense point cloud.

21. The method according to claims 12 to 20, wherein the method further comprises 3D visualization of the point cloud in a web environment and/or interactive visualization of the deformations and the test results.

22. The method according to claims 12 to 21, wherein multiple cameras are used with a rigid connection between them.

23. The method according to claims 12 to 22, wherein multiple cameras are used to measure both sides of a canal simultaneously.

24. The method according to claims 12 to 23, wherein multiple cameras remain at a fixed location to record images at a fixed interval.

25. The method according to claims 12 to 24, wherein some marks remain permanently on the quay wall as fixed deformation points.

26. The method according to claims 12 to 25, wherein an error-correcting code is encoded in the mark.

27. The method according to claims 12 to 26, wherein one or more expectations for the deformation are estimated using a finite element method analysis of the quay wall based on various geotechnical failure mechanisms.

28. A method for monitoring quay walls, the method comprising the steps of: determining the coordinates of the reference marks as input for the adjustment (5003); capturing image data of a quay wall to be monitored (5004); searching for corresponding common points in the overlapping images (5006); detecting the reference marks in the images (5007); calculating the approximate position and attitude of the images and the terrain coordinates of the corresponding common points (5008); calculating image position, image attitude and terrain coordinates using a bundle block adjustment (5009); statistically testing the observations (5010) and removing rejected observations (5013); co-registering a grid of small planes at the calculated points on the quay wall; estimating the plane parameters; repeating this process for each epoch; creating conditions between corresponding planes; statistically testing the conditions (6009); rejecting or updating conditions to represent the actual deformation (6010); writing out the statistically tested deformation planes (6014).

29. The method according to claim 28, wherein the planes are not defined in a regular grid, but their shape is derived from a crack pattern detected in the images.

30. The system according to claims 9 to 11, comprising means to provide real-time feedback on the quality of the images to the operator.
NL2027547A 2021-02-11 2021-02-11 Method of and apparatus for determining deformations of quay walls using a photogrammetric system NL2027547B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
NL2027547A NL2027547B1 (en) 2021-02-11 2021-02-11 Method of and apparatus for determining deformations of quay walls using a photogrammetric system
EP21839700.8A EP4291856A1 (en) 2021-02-11 2021-12-24 Determining deformations of quay walls using a photogrammetric system
PCT/NL2021/050789 WO2022173285A1 (en) 2021-02-11 2021-12-24 Determining deformations of quay walls using a photogrammetric system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
NL2027547A NL2027547B1 (en) 2021-02-11 2021-02-11 Method of and apparatus for determining deformations of quay walls using a photogrammetric system

Publications (2)

Publication Number Publication Date
NL2027547A NL2027547A (en) 2022-09-12
NL2027547B1 true NL2027547B1 (en) 2022-09-12

Family

ID=79282880

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2027547A NL2027547B1 (en) 2021-02-11 2021-02-11 Method of and apparatus for determining deformations of quay walls using a photogrammetric system

Country Status (3)

Country Link
EP (1) EP4291856A1 (en)
NL (1) NL2027547B1 (en)
WO (1) WO2022173285A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152306B (en) * 2023-03-07 2023-11-03 北京百度网讯科技有限公司 Method, device, apparatus and medium for determining masonry quality

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6711293B1 (en) 1999-03-08 2004-03-23 The University Of British Columbia Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image
JP2008527542A (en) * 2005-01-06 2008-07-24 シュルマン,アラン Navigation and inspection system
US20200312019A1 (en) 2019-03-28 2020-10-01 The United States Of America As Represented By The Secretary Of The Navy Automated Benthic Ecology System and Method for Photomosaic and 3-D Model Generation

Also Published As

Publication number Publication date
NL2027547A (en) 2022-09-12
WO2022173285A1 (en) 2022-08-18
EP4291856A1 (en) 2023-12-20

Similar Documents

Publication Publication Date Title
US9965870B2 (en) Camera calibration method using a calibration target
Fassi et al. Comparison between laser scanning and automated 3d modelling techniques to reconstruct complex and extensive cultural heritage areas
Murphy et al. Historic building information modelling (HBIM)
Yakar et al. Performance of photogrammetric and terrestrial laser scanning methods in volume computing of excavation and filling areas
US20090154793A1 (en) Digital photogrammetric method and apparatus using intergrated modeling of different types of sensors
Barazzetti et al. 3D scanning and imaging for quick documentation of crime and accident scenes
Rumpler et al. Automated end-to-end workflow for precise and geo-accurate reconstructions using fiducial markers
CN113096183B (en) Barrier detection and measurement method based on laser radar and monocular camera
Bleier et al. Low-cost 3d laser scanning in air or water using self-calibrating structured light
US10432915B2 (en) Systems, methods, and devices for generating three-dimensional models
Capra et al. 3D reconstruction of an underwater archaeological site: Comparison between low cost cameras
Razali et al. A hybrid point cloud reality capture from terrestrial laser scanning and UAV-photogrammetry
Piniotis et al. Experimental assessment of 3D narrow space mapping technologies
NL2027547B1 (en) Method of and apparatus for determining deformations of quay walls using a photogrammetric system
Deliry et al. Accuracy evaluation of UAS photogrammetry and structure from motion in 3D modeling and volumetric calculations
Al-Rawabdeh et al. A robust registration algorithm for point clouds from UAV images for change detection
Nazari et al. Analysis of 3D Laser Scanning Data of Farabi Mosque Using Various Software
Ostrowski et al. Analysis of point cloud generation from UAS images
Karasaka et al. Accuracy assessment toward merging of terrestrial laser scanner point data and unmanned aerial system point data
Jaafar Detection and localisation of structural deformations using terrestrial laser scanning and generalised procrustes analysis
Rodarmel et al. Rigorous error modeling for sUAS acquired image-derived point clouds
CN114266835A (en) Deformation monitoring control method and system for non-measuring camera
Markovic et al. Application of modern technologies in assessing facade condition of building structures
Lichti Geometric point cloud quality
Barba et al. a Method for Estimating the Accuracy of the Tie Points Cloud: the Case Study of Palazzo de Falco