GB2497517A - Reconstructing 3D surfaces using point clouds derived from overlapping camera images


Info

Publication number
GB2497517A
GB2497517A GB1120987.1A GB201120987A GB2497517A GB 2497517 A GB2497517 A GB 2497517A GB 201120987 A GB201120987 A GB 201120987A GB 2497517 A GB2497517 A GB 2497517A
Authority
GB
United Kingdom
Prior art keywords
images
camera
point cloud
overlap
rig
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1120987.1A
Other versions
GB201120987D0 (en)
GB2497517B (en)
Inventor
Riccardo Gherardi
Oliver Woodford
Bjorn Stenger
Minh-Tri Pham
Atsuto Maki
Frank Perbet
Roberto Cipolla
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB1120987.1A priority Critical patent/GB2497517B/en
Publication of GB201120987D0 publication Critical patent/GB201120987D0/en
Publication of GB2497517A publication Critical patent/GB2497517A/en
Application granted granted Critical
Publication of GB2497517B publication Critical patent/GB2497517B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/04 Interpretation of pictures
    • G01C 11/06 Interpretation of pictures by comparison of two or more pictures of the same area
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 35/00 Stereoscopic photography
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 37/00 Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B 37/005 Photographing internal surfaces, e.g. of pipe
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 37/00 Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B 37/02 Panoramic or wide-screen photography; Photographing extended surfaces; Photographing internal surfaces with scanning movement of lens or cameras
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B 37/00 Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G03B 37/04 Panoramic or wide-screen photography; Photographing extended surfaces; Photographing internal surfaces with cameras or projectors providing touching or overlapping fields of view
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/30 Polynomial surface description
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images
    • G06T 7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 2013/0074 Stereoscopic image analysis

Abstract

An apparatus for reconstructing the surface of a 3D object comprises at least one camera 101a configured to capture a plurality of images of different parts of the object from a plurality of different viewpoints. The camera(s) are configured such that there is overlap between adjacent images. A processor is adapted to process the captured images and match them such that data derived from the overlap of images is used to form a 3D point cloud 103a of the object. A parameterised 3D surface is fitted to the point cloud by optimising the parameters of said surface to fit the point cloud. Finally, a reconstruction model 105a of the object is generated from the fitted surface and output. This may be enhanced with texture and colour/shading information from the captured images.

Description

A Reconstruction System and Method
FIELD
Embodiments described herein generally refer to systems and methods for reconstructing 3D surfaces.
BACKGROUND
Often, it is desirable to recover the shape of a surface, for example for inspection and monitoring of the surface itself, of objects located on the surface such as pipes, or of an object which comprises the surface.
Often, such an inspection or monitoring will be performed manually. In the surveying of structures such as tunnels or buildings, defects such as cracks in the wall or water leakages are identified by visual inspection and marked in a paper log sheet.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described with reference to the accompanying figures, in which:
Figure 1a is a schematic of an apparatus in accordance with an embodiment of the present invention inspecting a tunnel, and Figure 1b is a schematic of an analysis unit to be used with the apparatus of figure 1a;
Figure 2 is a more detailed schematic of the apparatus of figure 1;
Figure 3 is an apparatus in accordance with a further embodiment of the present invention with a moving camera;
Figure 4 is a schematic of an apparatus in accordance with a further embodiment of the present invention with a spherical mirror;
Figure 5 is a schematic of an apparatus in accordance with a further embodiment of the present invention with a facetted mirror;
Figure 6 is a schematic demonstrating overlapping between adjacent views taken by apparatus in accordance with embodiments of the present invention;
Figure 7 is a flow chart showing a method of reconstructing the surface in accordance with an embodiment of the present invention;
Figure 8 is a flow chart showing a process for acquiring the images using a still camera in accordance with an embodiment of the present invention;
Figure 9 is a flow chart showing a process for acquiring the images using a video camera in accordance with an embodiment of the present invention;
Figure 10a is a flow diagram showing a process for generating a 3D point cloud from acquired images in accordance with an embodiment of the present invention;
Figure 10b is a schematic of a tunnel used to describe the order in which images may be taken;
Figure 10c is a schematic showing an arrangement of images taken of the tunnel of figure 10b;
Figure 10d is a schematic of a binary tree used in the method described with reference to figure 10a;
Figure 11 is a flow diagram showing a basic method for fitting a surface to a 3D point cloud in accordance with an embodiment of the present invention;
Figure 12 is a flow diagram showing a method for fitting a surface to a 3D point cloud in accordance with an embodiment of the present invention, the method fitting a limited number of known primitives;
Figure 13 is a flow diagram showing a method for fitting a surface to a 3D point cloud in accordance with an embodiment of the present invention, the method fitting a plurality of quadric surfaces with semi-global optimization;
Figure 14 is a flow diagram showing a method for fitting a surface to a 3D point cloud in accordance with an embodiment of the present invention, the method fitting a plurality of geometric primitives with global optimization; and
Figure 15 is a flow diagram showing a method of applying texture to the fitted surface.
DETAILED DESCRIPTION
Methods and systems in accordance with embodiments of the present invention can be used to capture 3-D objects in detail.
In an embodiment, an apparatus for reconstructing the surface of a 3D object is provided, the apparatus comprising: at least one camera configured to capture a plurality of images of different parts of the object from a plurality of different viewpoints, the at least one camera being configured such that there is overlap between adjacent images; a processor adapted to: process said captured images to match them such that data derived from the overlap of images is used to form a 3D point cloud of the object; fit a parameterised 3D surface to said point cloud, by optimising the parameters of said surface to fit said point cloud; and generate a reconstruction of said object from said fitted surface; and an output for outputting said reconstruction.
In an embodiment, the apparatus further comprises a rig, wherein the at least one camera is mounted on the rig, the camera being configured to image sections of the surface, the rig being configured to translate the camera in a first direction to image further sections of the surface.
The rig may be mounted on a moving platform, for example on rails or suspended from a beam on the ceiling.
In an embodiment, the apparatus is configured to reconstruct the inner surfaces of a tunnel, wherein the at least one camera is provided on a rig which is movable along the length of the tunnel to be reconstructed.
During tunnel inspection, the rig can be mounted on rails, which may already be present in the tunnel or on a beam suspended from the ceiling.
In an embodiment, the apparatus comprises a plurality of cameras attached to a rig, wherein the cameras are directed to allow a section of the surface to be imaged, the cameras being arranged such that there is overlap between the areas being imaged by each camera. In a further embodiment, the plurality of cameras are an array of radially distributed cameras.
In an embodiment, at least one camera is located on a rig which is configured to move the camera to allow a section of the surface to be imaged. For example, a single camera may be provided which rotates or oscillates about the gravity axis.
In a further embodiment, the apparatus further comprises a mirrored member located on the rig, the at least one camera being directed towards the mirrored member and the mirrored member being located to allow a section of the surface to be imaged by the at least one camera. For example, a chrome ball may be used, or a mirror with a facetted surface. Prisms may also be used in addition to or instead of mirrored surfaces.
The at least one camera may be a still camera or a video camera.
In a method in accordance with an embodiment, fitting the 3D surface to the point cloud comprises fitting at least one geometric primitive or quadric surface to said point cloud.
In a further embodiment, fitting the 3D surface to the point cloud comprises using random sampling and optimization techniques. For example, fitting a plurality of primitives using random sampling and EM optimization, where EM stands for Expectation-Maximization.
In a further embodiment, fitting the 3D surface to the point cloud comprises using a cost promoting the sparseness of a model used to fit the surface, with optimization. For example, fitting a plurality of primitives by a cost promoting the sparseness of the model under LM optimization, where LM is Levenberg-Marquardt. "Promoting the sparseness of the model" refers to the fact that one of the design goals of this technique is to reduce the total number of primitives generated, to avoid for example cases in which a single plane is explained with two different coplanar primitives, each accounting for half of the points.
In an embodiment, the apparatus may be further configured to project the images onto the fitted 3D surface.
In a further embodiment, producing the 3D point cloud comprises matching features between images, wherein the order in which the images were taken is used to determine images which should have matching features. This allows the number of images searched to be reduced if neighbouring images can be determined from the order in which the images were taken.
Further, if the geometrical relationship between overlapping images is already known, then this can also be used when matching features. For example, roughly knowing that an image lies in a specific direction with respect to the one being considered helps in locating the correct point to point correspondences. If, for example, an image A is left of an image B, any match with a very large variation in the vertical direction is obviously wrong, and it can be skipped without harm, accelerating the entire process.
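As a minimal illustration of this kind of constraint, the following Python sketch discards putative matches whose vertical displacement is implausible given a known left-right arrangement of two images. The helper name and the threshold are illustrative assumptions, not part of the patent.

```python
import numpy as np

def filter_matches_by_direction(pts_a, pts_b, max_dy=30.0):
    """Discard putative matches with an implausible vertical displacement.

    Hypothetical helper: when image A is known to lie to the left of image B
    on the capture grid, corresponding points should differ mostly in the
    horizontal direction, so a large vertical offset marks a wrong match.
    """
    pts_a = np.asarray(pts_a, dtype=float)  # (N, 2) pixel coordinates in A
    pts_b = np.asarray(pts_b, dtype=float)  # (N, 2) matched coordinates in B
    keep = np.abs(pts_a[:, 1] - pts_b[:, 1]) <= max_dy
    return keep
```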
In some embodiments, there is an overlap of at least 50% between adjacent images.
In further embodiments, the apparatus is configured such that each point in the 3D point cloud is reconstructed from at least 2 images. In some cases, 3 or more images are used to generate each point.
In a further embodiment, the apparatus is configured to allow further images to be added after the generation of the reconstruction. The geometry and pictures are processed together to produce a description of the surfaces of the scene. Thus, any new picture can be attached to an existing reconstructed scene, for example for defect tracking or self-localisation of personnel. Further, a new picture can be added to an existing reconstructed scene for detecting changes since the last inspection.
A method of reconstructing the surface of a 3D object is provided in a further embodiment, the method comprising: receiving images of the surface of the object to be reconstructed, the images being of different parts of the object from a plurality of different viewpoints, there being overlap between the images; processing said captured images to match them such that data derived from the overlap of images is used to form a 3D point cloud of the object; fitting a parameterised 3D surface to said point cloud, by optimising the parameters of said surface to fit said point cloud; and generating and outputting a reconstruction of said object from said fitted surface.
In a yet further embodiment, the method further comprises: capturing a plurality of images of different parts of the object from a plurality of different view points, such that there is overlap between adjacent images.
Methods in accordance with embodiments of the present invention can be implemented either in hardware or in software on a general purpose computer. Further, methods in accordance with embodiments of the present invention can be implemented in a combination of hardware and software. Methods in accordance with embodiments of the present invention can also be implemented by a single processing apparatus or a distributed network of processing apparatuses.
Since some methods in accordance with embodiments can be implemented by software, some embodiments encompass computer code provided to a general purpose computer on any suitable carrier medium. The carrier medium can comprise any storage medium such as a floppy disk, a CD ROM, a magnetic device or a programmable memory device, or any transient medium such as any signal e.g. an electrical, optical or microwave signal.
Figure 1a shows an apparatus in accordance with an embodiment of the present invention used for tunnel inspection. Here, a tunnel inspection apparatus 1 comprises a plurality of cameras. The tunnel inspection apparatus 1 moves along tunnel 3. The apparatus 1 moves in the direction indicated by arrow 5.
The data captured by the tunnel inspection apparatus 1 is provided to analysis unit 51.
Figure 1b shows a possible basic architecture of a system analysis unit 51. The analysis unit 51 comprises a processor 53 which executes a program 55. Analysis unit 51 further comprises storage 57. The storage 57 stores data which is used by the program to analyse the data received from the apparatus 1. The analysis unit 51 further comprises an input module 61 and an output module 63. The input module 61 is connected to an image input 65. Image input 65 receives image data from the apparatus 1. The image input 65 may simply receive data directly from the apparatus 1 or, alternatively, may receive image data from an external storage medium or a network.
Connected to the output module 63 is a display 67. The display 67 is used for displaying reconstructed 3D images generated from the image data received by the image input 65. Instead of a display 67, the output module 63 may output to a file or over the internet etc. In use, the analysis unit 51 receives camera data through image input 65. The program 55 executed on processor 53 analyses the image data using data stored in the storage 57 to produce 3D image data. The data is output via the output module 63 to display 67.
Figure 2 shows in more detail one example of the tunnel inspection apparatus 1. In this particular example, the tunnel inspection apparatus comprises seven cameras 11a to 11g. The cameras are arranged in an arc and mounted on a rig (not shown). The cameras face outwards towards the tunnel walls such that they image an arc or slice of the surface of the tunnel walls. The cameras 11a to 11g are also arranged so that the images of the tunnel walls from adjacent cameras overlap with one another. In the embodiment of figure 1, the cameras are moved along the tunnel. In one embodiment, the cameras are still cameras which are set to continually photograph the inner walls of the tunnel as the cameras move along the direction of arrow 5. In a further embodiment, the cameras are not automatic and photograph the tunnel wall in response to a command; the rig is moved and then the cameras photograph the next arc of tunnel wall.
If the cameras are set to continually photograph the tunnel, the camera speed is set so that as the rig moves along the tunnel, photographs taken along the tunnel will also overlap.
In a further embodiment, video cameras are used. When video cameras are used, the frame rate is set so that adjacent frames overlap.
Figure 3 shows a variation on the inspection apparatus described with reference to figures 1 and 2.
The apparatus of figure 3 comprises a single camera provided on a rotating rig (not shown). The rig rotates the single camera through different positions so as to image the inside of the tunnel walls along an arc or slice of the tunnel. The speed of rotation of the rig is set so that, as in figure 2, images overlap one another. Further, the shutter speed of the camera is also set, in combination with the rotation speed of the rig, so that images overlap one another in the same slice.
As explained with reference to figures 1 and 2, when the rig is moved along the direction of arrow 5 in figure 1, the speed is set so that the images taken along the length of the tunnel overlap. This way, images of the entire inside walls of the tunnel 1 are collected.
As for the apparatus of figure 2, the camera of figure 3 may be a still camera or a video camera. If it is a video camera, the frame rate is set in order to allow overlap of adjacent images both as the camera is swept across an arc of the tunnel and as the camera is swept along the tunnel.
A further type of apparatus is shown with reference to figure 4. Here, there is a single camera 23 and a spherical mirror 25. The spherical mirror is positioned in the centre of the rig as explained with reference to figures 2 and 3. The camera 23 is directed towards the spherical mirror 25. In this arrangement, the spherical mirror is positioned such that the camera can photograph an arc of the inside walls of the tunnel. The camera is set to fire automatically and its speed is geared to that of the rig such that images of adjacent slices overlap with one another. Further, if a video camera is used, the frame rate of the video camera is set so that, as the rig is moved, photographs of adjacent slices or arcs overlap.
Figure 5 shows a yet further variation on the system of figure 4. Here, a multiple mirror arrangement 27 is used. The multiple mirror arrangement 27 has a plurality of mirrored facets arranged to give the mirror a generally arced profile. A camera 29 is directed towards the multiple mirror arrangement 27. The multiple mirror arrangement 27 is located on the rig (not shown) and the camera 29 is positioned such that, when the mirror 27 and camera are located on the rig, the camera can image the whole of an arc or slice of the inside of the tunnel. The mirror and camera are then moved along the tunnel in the same direction 5 as previously described with reference to figure 1.
Further, the camera speed and the speed of the rig are selected so that, if the camera is a still camera, images overlap. Further, if the camera is a video camera, frames overlap from slice to slice.
It has been mentioned above that there is a need for the adjacent images or image frames to overlap. This is described in more detail with reference to figure 6. In an embodiment, for a point to be reconstructed, the point must be detected by at least two cameras or in at least two images (if a single camera is used to take all images, as explained with reference to figure 3). Further, there must be at least one point in each image which is visualised by three cameras.
In one embodiment as shown in figure 6, there is an overlap of at least 50% of the image area for each image both in the slice direction and in the longitudinal direction along the tunnel.
Figure 7 is a simple flow diagram which shows a process in accordance with an embodiment of the present invention. First, the images are acquired in step S101 using equipment 101a of the type described with reference to figures 2 to 6.
The output of this step will be a plurality of images which overlap with one another. In the shape recovery step S103, these images are then processed to derive a 3D point cloud 103a. Next, in step S105, a surface is modelled 105a using the 3D point cloud derived in step S103.
Then, the images derived in step S101 are applied to the surface modelled in step S105. As shown in 107a, this allows features such as the pipes 109 on the inside walls of the tunnel to be extracted.
Figure 8 shows a flow diagram for the picture acquisition step S101 shown in figure 7. The process starts in step S121.
Next, the cameras are set up in step S123. During camera setup, the cameras are positioned on the rig, or the mirror and camera are positioned on the rig, to allow a slice of the tunnel to be imaged. Further, at this point, the cameras are adjusted so that there is a suitable overlap between the images. If there is one camera and a mirror arrangement, the mirror and camera are moved to ensure that the desired area of the tunnel is imaged. If there is a single moving camera, then the speed of the camera and the shutter speed of the camera will be set to ensure that there is overlap between images.
Next, the cameras are calibrated in step S125. In this step, for single cameras, a series of parameters are computed, fixed and stored. In an embodiment, these parameters may be, for example, the focal length, radial distortion, aspect ratio etc. In an embodiment, all of the parameters can be stored as a single 3x3 upper triangular matrix plus 1 to 4 radial distortion parameters. For the embodiment of figure 3 the radius of rotation is computed, fixed and recorded. For embodiments comprising mirrors, the relative position of mirrors and cameras, or equivalently the position of all the virtual cameras are computed and recorded.
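As an illustration of the parameterisation just described, the following Python sketch assembles the 3x3 upper-triangular intrinsic matrix and applies a polynomial radial distortion with 1 to 4 coefficients to normalized image coordinates. The function names and default values are illustrative assumptions.

```python
import numpy as np

def make_intrinsics(fx, fy, cx, cy, skew=0.0):
    """Assemble the 3x3 upper-triangular intrinsic matrix described above."""
    return np.array([[fx, skew, cx],
                     [0.0,  fy, cy],
                     [0.0, 0.0, 1.0]])

def apply_radial_distortion(xn, yn, k):
    """Apply polynomial radial distortion to normalized image coordinates.

    k is a sequence of 1 to 4 radial coefficients (k1..k4), matching the
    parameterisation mentioned in the text.
    """
    r2 = xn * xn + yn * yn
    factor = 1.0
    for i, ki in enumerate(k, start=1):
        factor += ki * r2 ** i
    return xn * factor, yn * factor
```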
Next, the rig is moved into position in step S127. Pictures are then taken of one section in step S129. At step S131, it is determined if these are the last pictures to be taken.
The pictures are taken to form one slice of the tunnel at a time. If it is determined that the full tunnel has not been imaged, the rig is moved into the next position in step S127 and the procedure is repeated until the surface of the tunnel which is to be imaged has been covered, at which point the picture acquisition procedure ends in step S133.
In the above, the cameras can either be set to photograph the tunnel after the rig is moved into the desired position or the cameras can continually photograph the inside of the tunnel as the rig is moved.
Figure 9 is a flow diagram for when a video camera is used. Here, the process starts in step S141. The camera is set up in step S143 in the same manner as described with reference to figure 8. The camera is then calibrated as described with reference to figure 8 in step S145.
The rig is then moved into position in step S147 and the video acquisition is started in step S149. The rig is then moved to the next position in step S151. It is determined in step S153 if the volume of the tunnel which is to be imaged has been imaged.
Video acquisition is then stopped in step S155 and the process ends at step S157.
The output of both figures 8 and 9 is a plurality of images. Features of these images then need to be matched to form a 3D point cloud of the surface of the tunnel. It should be noted that there are other methods for matching features and deriving the 3D point cloud which could be used in methods and systems in accordance with embodiments of the present invention.
The process, shown in figure 10a, starts at step S201. If the images are obtained by a video camera, the frames are obtained as images in step S203.
Next, in step S205, features and descriptors are extracted. A feature is a repeatable region in an image: concretely, the location (x, y), scale and orientation of an area which is likely to be easy to recognize in several images (usually corners). Each feature is represented with a descriptor, which is a collection of values which should be invariant to usual image variations such as viewpoint changes, illumination changes, and camera type and configuration. The descriptor used is a SIFT-type descriptor, which is constructed by taking histograms of the gradients of the aforementioned image region.
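A minimal sketch of this step using OpenCV's SIFT implementation is shown below; the function name and the use of OpenCV are assumptions for illustration, not part of the patent.

```python
import cv2

def extract_features(image_path):
    """Detect repeatable regions and compute SIFT descriptors for one image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    # Each keypoint carries a location (x, y), scale and orientation; each
    # descriptor is a 128-vector of local gradient-orientation histograms.
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors
```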
In step S207 images for matching are derived. If the images are collected in a pre-determined order, as is the case for tunnel inspection, then the order in which the images are taken is known and therefore images with overlapping areas are known and identified in step S207.
Features are then matched between images where overlap is expected in step S209.
In an embodiment, in steps S207 and S209 the order in which the images were taken is used. How this may be achieved will be described with reference to figures 10b and 10c.
The rig geometry or the acquisition modalities described earlier guarantee that the acquired image dataset has sufficient overlap to successfully reconstruct the shape of the captured environment.
The acquired images can be roughly located on a two dimensional grid, corresponding to a regular sampling of the tunnel surface in polar coordinates. In this representation, one direction corresponds to the tunnel direction while the other spans the surface in the perpendicular direction, as shown in figure 10b. Figure 10c shows the images on the polar manifold: it is clear, for example, that image 1 neighbours (has overlap with) images 2 and 3 along directions A and B respectively. (It should be noted that figure 10c is for illustrative purposes and does not show the true extent of overlap between adjacent images. To show full overlap, figure 10c would become too complex to act as a useful aid for explanation; therefore in figure 10c the overlap has been reduced.) These proximity relations can be used to accelerate the matching process: correspondences can be searched only in direct neighbours, instead of having to explore a larger number of candidates; also, since the rough relative direction of motion is known, this can be used to constrain the search for the matches themselves, which are bound to lie at approximately known coordinates. The improvements in terms of speed come therefore from two distinct sources: the reduced number of image to image tests needed and the ability to make each one of them faster.
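The proximity relations can be made concrete with a short sketch: assuming images are indexed by (slice, position) on the polar grid, only pairs that are direct neighbours along the two grid directions are tested for matches. The indexing scheme is an illustrative assumption.

```python
def neighbour_pairs(n_slices, n_per_slice):
    """Enumerate image pairs expected to overlap on the polar sampling grid.

    Each image (slice, position) is tested only against its neighbour along
    the tunnel direction and along the ring direction, instead of against
    every other image in the collection. For a full 360-degree ring, the
    position index would additionally wrap around modulo n_per_slice.
    """
    pairs = []
    for s in range(n_slices):
        for p in range(n_per_slice):
            if p + 1 < n_per_slice:          # neighbour around the ring
                pairs.append(((s, p), (s, p + 1)))
            if s + 1 < n_slices:             # neighbour along the tunnel
                pairs.append(((s, p), (s + 1, p)))
    return pairs
```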
It should be noted that the images in figure 10c are not rectangles because, in general, their shape on the polar manifold will not be preserved.
In step S211, the transform model is computed by finding the set of matching features consistent with a valid camera motion. Matches coming from step S209 can be incorrect; they are removed in this step. There are three cases: (1) no valid camera motion is found, meaning that most matches are incorrect or that there are too few of them, and therefore the two overlapping images do not form a couple suitable for reconstruction; (2) there is a homographical relationship, generated for example by a camera rotating in place, which does not provide enough information for the 3D reconstruction process but validates the matches; and (3) a fundamental relationship, which is the case for two pictures under general motion.
The third type (3) of relationship enables 3D reconstruction for the matched points. In summary, in step S211 invalid matches are removed and only those matches which contain enough information to proceed with the reconstruction process are allowed.
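A rough sketch of this three-way classification, using OpenCV's RANSAC estimators for the homography and fundamental matrix, is given below. The inlier-ratio heuristic and all thresholds are illustrative simplifications of the model selection described above, not the patent's exact criterion.

```python
import cv2
import numpy as np

def classify_pair(pts1, pts2, min_inliers=30, h_ratio=0.9):
    """Classify a matched image pair as degenerate, homography or fundamental."""
    pts1, pts2 = np.float32(pts1), np.float32(pts2)
    if len(pts1) < 8:
        return 'degenerate', None          # case (1): too few matches
    F, f_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None or f_mask is None or f_mask.sum() < min_inliers:
        return 'degenerate', None          # case (1): no valid camera motion
    H, h_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    if H is not None and h_mask.sum() >= h_ratio * f_mask.sum():
        return 'homography', H             # case (2): rotation-like motion
    return 'fundamental', F                # case (3): general motion
```

In this sketch, a pair classified as 'fundamental' would seed a two-view reconstruction, while a 'homography' image would be held back and attached to an existing cluster later.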
Up to this point, the image collection has been organized into couples (case 3 above) and additional images (case 2 above). The basic idea is to start from these couples to obtain two-view reconstructions and then add the additional images to the couples, forming progressively larger clusters. Clusters can also be merged together, forming bigger reconstructions, until all images have been registered or no further merges are successful.
The previous matching steps produce a collection of point correspondences in two images. In order to have maximum reliability, only the completely unambiguous matches are collected; this ensures that the models computed in step S211 can be correctly computed, but greatly reduces the number of recovered correspondences. In step S213, additional matches are computed (densification), using the homographical or fundamental relationship computed in S211 to verify their accuracy. The process is similar to the matching steps already described, but with the additional constraint that the matches be coherent with the camera motion.
In step S215, matches of features between views or images are aggregated into tracks.
A track is a collection of coordinates in different images corresponding to a single three dimensional point. In one embodiment, matches between more than single couples of images are used to produce a complete 3D reconstruction. A track can be thought of as the precursor of a 3D point: by knowing the projection of a 3D point in many images, it is possible to recover its position with far greater accuracy than would be possible using just single pairs of pictures. Tracks also help in detecting inconsistencies, leading to the removal of any surviving wrong correspondences.
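Track aggregation can be sketched with a union-find structure that merges pairwise matches into connected components, discarding components that observe the same image twice (an inconsistency). This particular implementation is an illustrative assumption, not taken from the patent.

```python
class UnionFind:
    """Minimal union-find used to merge pairwise matches into tracks."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def build_tracks(matches):
    """matches: iterable of ((img_i, feat_i), (img_j, feat_j)) pairs."""
    uf = UnionFind()
    for a, b in matches:
        uf.union(a, b)
    tracks = {}
    for obs in uf.parent:
        tracks.setdefault(uf.find(obs), []).append(obs)
    # A track observing the same image twice is inconsistent and is dropped.
    return [t for t in tracks.values()
            if len(t) >= 2 and len({img for img, _ in t}) == len(t)]
```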
In step S217, the images are clustered into a binary tree. Using all the information computed so far, it is possible to define a distance function between images that takes into account three things: the number of common matches, the total area the matches cover in the image, and whether the matches are consistent with a fundamental or homographical model. Given this distance, a tree can be constructed using a technique called agglomerative clustering: given a set of objects (images in this case) and a distance between them, the two closest clusters are repeatedly merged.
Initially, couples are formed from pairs of images and afterwards single images are added to them. Eventually, entire clusters of images are joined together, forming a tree which has images as leaves and partial reconstructions as internal nodes.
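A toy sketch of the agglomerative construction follows; `distance` stands in for the match-count/area/model distance described above, and the O(n^3) greedy merge is purely illustrative of the merge order, not an efficient implementation.

```python
def agglomerate(images, distance):
    """Greedy agglomerative clustering producing a binary merge tree.

    images: list of leaf labels (image ids). distance: callable taking two
    clusters (tuples of leaves) and returning their distance. Returns a
    nested-tuple tree with images as leaves and merges as internal nodes.
    """
    clusters = [(i,) for i in images]
    trees = {c: c[0] for c in clusters}
    while len(clusters) > 1:
        # merge the two closest clusters
        a, b = min(((x, y) for i, x in enumerate(clusters)
                    for y in clusters[i + 1:]),
                   key=lambda pair: distance(*pair))
        clusters.remove(a)
        clusters.remove(b)
        merged = a + b
        trees[merged] = (trees.pop(a), trees.pop(b))
        clusters.append(merged)
    return trees[clusters[0]]
```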
A schematic example of such a tree is shown in figure 10d. Three types of nodes are shown: cluster + cluster node 255; view + cluster node 253; and view + view node 251.
The nodes are then processed bottom-up starting in step S219. In step S221, the type of node is identified. If a node joins two images, as shown in step S223 (see 251 of figure 10d), a two view reconstruction is performed to obtain points in 3D space which will form part of the 3D point cloud.
If the node is recognised to be the type of node shown in S225 (see 253 of figure 10d), where a new view is to be added to an already formed cluster, the 3D structure from the already processed cluster is updated using the tracks from the new view.
If the node is recognised to be the type of node shown in S227 (see 255 of figure 10d), then two clusters are merged. The two clusters which are to be merged will have two different reference systems. The two clusters will have some points in common, and these common points are used to transform the orientations so that the clusters are merged to form one cluster with a common orientation.
When all nodes have been processed, the process ends at step S229.
The output of the above is a 3D point cloud with points in 3D space.
Figure 11 is a flow diagram showing the general surface fitting procedure. In step S301, the process starts. In step S303, a check is made to see if any points have not been assigned; the first time that step S303 is encountered, no points have been assigned.
In step S305, some points are sampled in a local neighbourhood. These points may be selected randomly, in accordance with a selection criterion, or by a user of the system.
In step S307, a surface is then fitted to the sample points. This surface may be based on a quadric surface or on a geometric primitive. The parameters of the surface are then optimised and the surface is refined in step S309.
The process then loops back to step S303 and proceeds again to step S305 if there are more sample points to assign. Once all points have been assigned, the process progresses to step S311 to ensure that each point is assigned to a single surface. Any global constraints are then enforced in step S313. For example, when there are multiple primitives, global constraints such as parallelism, perpendicular relationships and co-planarity can be enforced. These features are common in man-made environments, and step S313 allows them to be promoted in the final result.
The process ends at step S315. More detailed surface fitting processes will now be described with reference to figures 12 to 14.
In an embodiment, a surface fitting procedure is used following figure 12. In this procedure, the surface to be fitted is a cylinder or another type of geometric primitive, for example a sphere, cube or box, toroid or pyramid. The surface to be fitted may also be any previously known shape which is easy to characterize, for example a tunnel with a rectangular section. A primitive is not necessarily a rigid, basic shape. It can be, for example, a cylinder capable of bending, therefore able to fit a longer section of a tunnel. Otherwise, the data may be divided into several shorter overlapping sections and each section optimized separately.
A primitive is selected based on the type of structure to be fitted, in this case a general tunnel shape. In this example a primitive will be discussed, but the procedure could also be used for fitting a quadric surface. The procedure starts in step S351.
In step S353 a possible primitive is selected. In this particular example only one primitive is used.
Next, in step S355, a rough initialisation of the parameters of the primitive is set. This can be done by automatic optimisation or by setting the parameters by hand. This is not necessary, for example, when fitting a single cylinder or tunnel shape, because when there is just a single primitive the optimization will converge to its rough location. This is not true in general if there is more than one primitive in a general configuration.
In step S357, the distance between the reconstructed points and the surface of the primitive to be fitted is minimised. In one embodiment, the Levenberg-Marquardt algorithm is used, but any other optimiser could be used.
In step S359, outliers are detected. Outliers are points not on the surface. In one embodiment, these are determined using a statistical test. For example, a normal distribution is assumed for the errors, and points whose probability of being an outlier is larger than a pre-specified threshold are flagged. In one embodiment, this threshold is set at 0.99. These outliers are removed in step S361. The procedure then returns to step S357 to optimise the parameters in the absence of the removed outliers.
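The loop of steps S357 to S361 can be sketched as below for a cylinder primitive, using SciPy's Levenberg-Marquardt solver. The axis-plus-radius parameterisation and the 3-sigma rejection test are illustrative assumptions standing in for the probability-threshold test described above.

```python
import numpy as np
from scipy.optimize import least_squares

def cylinder_residuals(params, pts):
    """Signed distance of each point to a cylinder surface.

    params = (x0, y0, z0, theta, phi, r): a point on the axis, the axis
    direction in spherical angles, and the radius.
    """
    x0, y0, z0, theta, phi, r = params
    axis = np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
    d = pts - np.array([x0, y0, z0])
    radial = d - np.outer(d @ axis, axis)     # component orthogonal to axis
    return np.linalg.norm(radial, axis=1) - r

def fit_cylinder(pts, init, outlier_sigma=3.0, max_rounds=10):
    """Alternate LM optimisation and outlier rejection, as in figure 12."""
    pts = np.asarray(pts, dtype=float)
    inliers = np.ones(len(pts), dtype=bool)
    params = np.asarray(init, dtype=float)
    for _ in range(max_rounds):
        res = least_squares(cylinder_residuals, params,
                            args=(pts[inliers],), method='lm')
        params = res.x
        err = np.abs(cylinder_residuals(params, pts))
        new_inliers = err <= outlier_sigma * err[inliers].std()
        if np.array_equal(new_inliers, inliers):
            break                             # no more outliers detected
        inliers = new_inliers
    return params, inliers
```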
Once a fit is achieved where no more outliers are detected in step S359, the process returns to step S353, where the next geometric primitive is selected. If further primitives exist, then the procedure is repeated for the next primitive. If there is just one primitive, or the process is running for the last of a plurality of primitives, the process ends at step S363.
A further embodiment is discussed in relation to the process of figure 13. Here, the 3D point cloud is fitted to a quadric surface, although the same procedure could be applied to fitting a plurality of primitives.
The process starts in step S401. In step S403, a region of unlabelled points is identified. In the first pass through the process, all points will be unlabelled.
If a region of unlabelled points is found in step S405, the region is fitted with a new quadric in step S407. In one embodiment, this is achieved by minimizing the sum of squared distances between the points and the fitted quadric. The procedure is solvable in closed form, meaning that the parameters of the quadric can be computed as a direct function of the input data.
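A sketch of such a closed-form fit follows. Note it minimises the algebraic residual of a general quadric (the smallest singular vector of the monomial design matrix), a common closed-form surrogate for the squared geometric distance mentioned above; the exact formulation used in the patent is not specified.

```python
import numpy as np

def quadric_design(pts):
    """Design matrix of the ten monomials of a general quadric."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    return np.column_stack([x * x, y * y, z * z, x * y, x * z, y * z,
                            x, y, z, np.ones_like(x)])

def fit_quadric(pts):
    """Closed-form algebraic fit of a quadric to a set of 3D points.

    Returns the unit-norm coefficient vector minimising the summed squared
    algebraic residual, computed directly from the input data via SVD.
    """
    _, _, vt = np.linalg.svd(quadric_design(pts), full_matrices=False)
    return vt[-1]                 # quadric coefficients, up to scale

def algebraic_residual(coeffs, pts):
    """Per-point residual used to label points to their nearest quadric."""
    return np.abs(quadric_design(pts) @ coeffs)
```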
In step S409, every point is labelled with its nearest quadric. In step S411, points whose distances to their quadrics are greater than a threshold are considered outliers and remain unlabelled or are assigned to an outlier label. In a further embodiment, quadrics with too few points are removed.
After the removal of the outliers in step S411, the quadrics are refined in step S413 by minimizing the sum of squared distances again.
In step S415, it is determined if any point has changed its label; the labelling is determined to be stable if all points have maintained the same labels. In the first pass of the process, where all points are labelled for the first time, the labels will not be stable and the process will return to step S403 to select the points which were not labelled in the previous pass through the process.
If unlabelled points are found, these are fitted with a new quadric in step S407. In step S409, each point (including the ones which have been previously labelled) is then re-labelled to the nearest quadric.
If no unlabelled points were identified in step S405, the process jumps to step S409 so that all points are relabelled with the refined quadrics derived in step S413.
In both scenarios, outliers are removed at step S411 as before and the quadrics are refined in step S413.
It is then checked again whether the labelling is stable in step S415. If some points have been relabelled to a new quadric, then the process loops back to step S403 and is repeated. If the labelling is stable, then the process progresses to step S417, where a list of quadrics and their points is returned. The process ends at step S419.
Figure 14 shows a further method of fitting a surface by computing the parameters of a set (of unknown size) of parameterized surfaces (or primitives), and assigning each point to the surface which generated it, or to an outlier class.
The process starts in step S551. In step S553, primitives are initialised. One method of achieving this is to select a set of neighbouring points and fit the optimal primitive to these points; this is the proposed primitive. The points may be selected at random or by a user.
In step S555, the parameters of the proposed primitive are optimized using an algorithm, for example Levenberg-Marquardt, to minimize the sum of squared distances of all points to the primitive, those distances being truncated to (i.e. set to be no larger than) the distance of each point to its current primitive.
In step S557, the model cost with and without the proposed primitive is computed. This cost is the sum of the squares of the shortest distances (hereby, simply "distances") from each point to its assigned surface (or a fixed cost for outliers), plus a model cost for each primitive in the set. Unassigned points add a fixed amount to the total cost of the current solution. The proposed primitive is accepted if it lowers that cost.
In step S559, if the proposed primitive reduces the total model cost, then it is added to the model. In step S561, for each point, its distance to each primitive is computed, and the point is assigned to the closest primitive.
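The cost described in steps S557 to S561 can be sketched as follows; the distance matrix, the fixed per-primitive cost and the outlier cost are illustrative assumptions. A proposed primitive is accepted when the model cost including it is lower than the cost without it.

```python
import numpy as np

def model_cost(dist_matrix, primitive_cost, outlier_cost):
    """Total cost of a candidate model, as described above.

    dist_matrix: (n_points, n_primitives) squared distances from every point
    to every primitive. Each point pays its distance to the closest
    primitive, truncated at the fixed outlier cost; every primitive in the
    set adds a fixed model cost.
    """
    if dist_matrix.shape[1] == 0:
        return outlier_cost * dist_matrix.shape[0]
    per_point = np.minimum(dist_matrix.min(axis=1), outlier_cost)
    return per_point.sum() + primitive_cost * dist_matrix.shape[1]

def assign_points(dist_matrix, outlier_cost):
    """Label each point with its closest primitive, or -1 for outliers."""
    labels = dist_matrix.argmin(axis=1)
    labels[dist_matrix.min(axis=1) > outlier_cost] = -1
    return labels
```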
Then the process loops around to step S553, a further set of points is selected, and a new proposed primitive is fitted to these points. Its parameters are then optimised in step S555 as before, but over just the newly selected points. This process creates a new tentative primitive for points to be assigned to.
It is then determined in step S557 if the new primitive improves the model. If this is the case, then the process moves forward to step S559 as explained before.
However, if the new primitive does not improve the model, then it is not added to the model, and the parameters of every primitive are optimized using Levenberg-Marquardt in step S563 to minimize the sum of squared distances from each primitive to all points currently assigned to it.
In step S565, the model costs after step S563 are compared with those before step S563. If the model has improved, then the process proceeds to step S561, where for each point its distance to each primitive is computed, and the point is assigned to the closest primitive. The process then loops back to step S553 and a new set of points is selected for fitting to a primitive.
If at step S565 it is determined that the model has not improved, then at step S567 the model costs with each primitive removed in turn are computed. At step S569, it is checked if one of the costs computed in step S567 is lower than the current cost. If so, the associated primitive is removed in step S571. Then the process proceeds to step S561, where for each point its distance to each primitive is computed, and the point is assigned to the closest primitive. The process then loops back to step S553 and a new set of points is selected for fitting to a primitive.
If at step S569 all of the costs computed in step S567 are greater than the current cost, then the process proceeds to the convergence step S573. Here, it is checked if some convergence criterion has been satisfied. If the convergence criterion has not been satisfied, then the process proceeds to step S553 and a new primitive will be selected.
If the convergence criterion is satisfied, the process ends at step S575. A suitable convergence criterion would be to check whether no model improvements have been made in the last n iterations, where n is an integer of 2 or more. In one example, n = 5.
The above process receives an input point cloud, and computes the parameters of a set (of unknown size) of parameterized surfaces (primitives), and assigns each point to the surface which generated it, or to an outlier class.
The algorithm minimizes this cost monotonically, by iteratively updating the set of primitives, their parameters, and inliers.
The output of the surface fitting algorithms explained with reference to figures 11 to 14 is a surface which is fitted to the point cloud obtained after the shape recovery process, an example of which is described with reference to figure 10.
The next process is for the images acquired to be applied to the fitted surface. The process is described with reference to figure 15. The process starts at step S501 and progresses to step S503 where it is determined if the process has been completed for all surfaces.
First, the surface boundary of the surface to be processed is computed in step S505.
The surface is then back projected onto the images from which the surface was derived in step S507.
A single texture for the surface is then synthesised in step S509. In an embodiment, since every reconstructed point is recovered from at least two images, there are always two or more pictures that could be used to assign a colour or shading to every point of a surface. The problem of generating a single consistent texture (texture being the colour assignment for each location of a surface) is not trivial, because the colours in each image depend on the position, illumination direction and surface characteristics that generated any particular picture. The problem is solved by dividing the content of the images into low and high frequency information, taking a robust mean of the low frequency information while blending the high frequency part.
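A minimal sketch of such a frequency-split blend follows, using a Gaussian blur to separate low and high frequencies and a per-pixel median as the robust mean; the cut-off sigma and the choice of median are illustrative assumptions.

```python
import cv2
import numpy as np

def blend_textures(views, sigma=5.0):
    """Fuse several registered texture images into one consistent texture.

    views: list of float32 images already warped into the common surface
    parameterisation. Each image is split into a low-frequency part
    (illumination, exposure) and a high-frequency part (detail); a robust
    mean (median) of the low frequencies is combined with the averaged
    high frequencies, following the scheme described above.
    """
    lows = [cv2.GaussianBlur(v, (0, 0), sigma) for v in views]
    highs = [v - lo for v, lo in zip(views, lows)]
    low = np.median(np.stack(lows), axis=0)   # robust to exposure outliers
    high = np.mean(np.stack(highs), axis=0)
    return low + high
```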
The process then loops back to step S503 and the process is repeated for all surfaces.
The process ends at step S511.
Methods and apparatus in accordance with the above embodiments allow the shape of tunnels and other structures, both internal and external, to be reconstructed. The methods and apparatus allow recovery of the shape of the premises and quantification of deviation from blueprints/plans.
When used for tunnel/station inspection, they allow localisation of defects, asset inventory and analysis of the routing of different cables along the tunnel.
For tunnels/station monitoring they allow tracking of defects over time and monitoring changes in geometry and texture.
The geometry and pictures are processed together to produce a description of the surfaces of the scene. Further, any new picture can be attached to an existing reconstructed scene, for example for defect tracking or self-localisation of personnel.
Further, a new picture can be added to an existing reconstructed scene for detecting changes since the last inspection.
The above system does not suffer from alignment issues between scans as the system is robust to alignment problems. The collection of the images is structured and this structure is used in the reconstruction process to reduce processing time. Further, the system does not have to use markers or other types of calibration devices.
The system is also robust to the formation of blind spots. The texture is automatically attached to the recovered shape, and the acquired texture is free from markers.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and apparatus described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and apparatus described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms of modifications as would fall within the scope and spirit of the inventions.

Claims (1)

CLAIMS:
1. An apparatus for reconstructing the surface of a 3D object, the apparatus comprising: at least one camera configured to capture a plurality of images of different parts of the object from a plurality of different viewpoints, the at least one camera being configured such that there is overlap between adjacent images; a processor adapted to: process said captured images to match them such that data derived from the overlap of images is used to form a 3D point cloud of the object; fit a parameterised 3D surface to said point cloud, by optimising the parameters of said surface to fit said point cloud; and generate a reconstruction of said object from said fitted surface; and an output for outputting said reconstruction.
2. An apparatus according to claim 1, further comprising a rig, wherein the at least one camera is mounted on the rig, the camera being configured to image sections of the surface, the rig being configured to translate the camera in a first direction to image further sections of the surface.
3. An apparatus according to claim 1, configured to reconstruct the inner surfaces of a tunnel, wherein the at least one camera is provided on a rig which is movable along the length of the tunnel to be reconstructed.
4. An apparatus according to claim 2, comprising a plurality of cameras attached to a rig, wherein the cameras are directed to allow a section of the surface to be imaged, the cameras being arranged such that there is overlap between the areas being imaged by each camera.
5. An apparatus according to claim 2, wherein at least one camera is located on a rig which is configured to move the camera to allow a section of the surface to be imaged.
6. An apparatus according to claim 2, further comprising a mirrored member located on the rig, the at least one camera being directed towards the mirrored member and the mirrored member being located to allow a section of the surface to be imaged by the at least one camera.
7. An apparatus according to claim 1, wherein the at least one camera is a still camera.
8. An apparatus according to claim 1, wherein fitting the 3D surface to the point cloud comprises fitting at least one primitive or quadric surface to said point cloud.
9. An apparatus according to claim 8, wherein fitting the 3D surface to the point cloud comprises using random sampling and optimization techniques.
10. An apparatus according to claim 8, wherein fitting the 3D surface to the point cloud comprises using a cost promoting the sparseness of a model used to fit the surface, with optimization.
11. An apparatus according to claim 1, configured to allow further images to be added after the generation of the reconstruction.
12. An apparatus according to claim 1, further comprising projecting the captured images onto the fitted 3D surface to obtain a surface with synthesised texture.
13. An apparatus according to claim 12, wherein projecting the captured images comprises assigning a colour or shading to each point from the images which were used to construct the point.
14. An apparatus according to claim 1, wherein producing the 3D point cloud comprises matching features between images, wherein the order in which the images were taken is used to determine images which should have matching features.
15. An apparatus according to claim 1, wherein producing the 3D point cloud comprises matching features between images, wherein the geometrical relationship between images is used when matching features.
16. An apparatus according to claim 1, configured such that there is an overlap of at least 50% between adjacent images.
17. An apparatus according to claim 1, configured such that each point in the 3D point cloud is reconstructed from at least 2 images.
18. A method of reconstructing the surface of a 3D object, the method comprising: receiving images of the surface of the object to be reconstructed, the images being of different parts of the object from a plurality of different viewpoints, there being overlap between the images; processing said captured images to match them such that data derived from the overlap of images is used to form a 3D point cloud of the object; fitting a parameterised 3D surface to said point cloud, by optimising the parameters of said surface to fit said point cloud; and generating and outputting a reconstruction of said object from said fitted surface.
19. A method according to claim 18, further comprising: capturing a plurality of images of different parts of the object from a plurality of different viewpoints, such that there is overlap between adjacent images.
20. A carrier medium carrying computer readable instructions for controlling a computer to carry out the method of claim 18.
GB1120987.1A 2011-12-06 2011-12-06 A reconstruction system and method Active GB2497517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1120987.1A GB2497517B (en) 2011-12-06 2011-12-06 A reconstruction system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1120987.1A GB2497517B (en) 2011-12-06 2011-12-06 A reconstruction system and method

Publications (3)

Publication Number Publication Date
GB201120987D0 GB201120987D0 (en) 2012-01-18
GB2497517A true GB2497517A (en) 2013-06-19
GB2497517B GB2497517B (en) 2016-05-25

Family

ID=45541306

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1120987.1A Active GB2497517B (en) 2011-12-06 2011-12-06 A reconstruction system and method

Country Status (1)

Country Link
GB (1) GB2497517B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651926A (en) * 2016-12-28 2017-05-10 华东师范大学 Regional registration-based depth point cloud three-dimensional reconstruction method
WO2020024144A1 (en) * 2018-08-01 2020-02-06 广东朗呈医疗器械科技有限公司 Three-dimensional imaging method, apparatus and terminal device
CN111709923B (en) * 2020-06-10 2023-08-04 中国第一汽车股份有限公司 Three-dimensional object detection method, three-dimensional object detection device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030058242A1 (en) * 2001-09-07 2003-03-27 Redlich Arthur Norman Method and system for 3-D content creation
GB2389500A (en) * 2002-04-20 2003-12-10 Virtual Mirrors Ltd Generating 3D body models from scanned data
US20060221072A1 (en) * 2005-02-11 2006-10-05 Se Shuen Y S 3D imaging system
US20110316978A1 (en) * 2009-02-25 2011-12-29 Dimensional Photonics International, Inc. Intensity and color display for a three-dimensional metrology system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112009000099T5 (en) * 2008-01-04 2010-11-11 3M Innovative Properties Co., St. Paul Image signatures for use in a motion-based three-dimensional reconstruction
US20110222757A1 (en) * 2010-03-10 2011-09-15 Gbo 3D Technology Pte. Ltd. Systems and methods for 2D image and spatial data capture for 3D stereo imaging

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104517280B (en) * 2013-11-14 2017-04-12 广东朗呈医疗器械科技有限公司 Three-dimensional imaging method
CN104517280A (en) * 2013-11-14 2015-04-15 广东朗呈医疗器械科技有限公司 Three-dimensional imaging method
CN104680579A (en) * 2015-03-02 2015-06-03 北京工业大学 Tunnel construction informatization monitoring system based on three-dimensional scanning point cloud
CN104680579B (en) * 2015-03-02 2017-09-01 北京工业大学 Tunnel construction informatization monitoring system based on three-dimensional scanning point cloud
CN105701821B (en) * 2016-01-14 2018-07-24 福州华鹰重工机械有限公司 Stereo image surface detection matching method and device
CN105701821A (en) * 2016-01-14 2016-06-22 福州华鹰重工机械有限公司 Stereo image surface detection matching method and apparatus thereof
CN105756711A (en) * 2016-03-02 2016-07-13 中交第二航务工程局有限公司 Tunnel construction primary support limit invasion monitoring analysis and early-warning method based on three-dimensional laser scanning
CN105756711B (en) * 2016-03-02 2018-03-16 中交第二航务工程局有限公司 Tunnel construction primary support limit invasion monitoring analysis and early-warning method based on three-dimensional laser scanning
CN106443813A (en) * 2016-08-31 2017-02-22 中国电建集团贵阳勘测设计研究院有限公司 Live-action shooting device for exploratory adit and high-resolution three-dimensional image reconstruction method
CN108053473A (en) * 2017-12-29 2018-05-18 北京领航视觉科技有限公司 Processing method for indoor three-dimensional modeling data
CN108230442A (en) * 2018-01-24 2018-06-29 上海岩土工程勘察设计研究院有限公司 Three-dimensional simulation method for shield tunnels
WO2019200422A1 (en) * 2018-04-20 2019-10-24 Dibit Messtechnik Gmbh Device and method for detecting surfaces
AT521199A1 (en) * 2018-04-20 2019-11-15 Dibit Messtechnik Gmbh DEVICE AND METHOD FOR DETECTING SURFACES
CN110726726A (en) * 2019-10-30 2020-01-24 中南大学 Quantitative detection method and system for tunnel forming quality and defects thereof
CN111023966A (en) * 2019-11-28 2020-04-17 中铁十八局集团第五工程有限公司 Tunnel measurement and control method based on combination of three-dimensional laser scanner and BIM
WO2021103433A1 (en) * 2019-11-28 2021-06-03 中铁十八局集团有限公司 Method for tunnel measurement and control based on combination of 3D laser scanner and BIM
CN111473776A (en) * 2020-05-11 2020-07-31 中晋环境科技有限公司 Landslide crack monitoring method based on single-image close-range photogrammetry
CN111710027A (en) * 2020-05-25 2020-09-25 南京林业大学 Tunnel three-dimensional geometric reconstruction method considering data-driven segment segmentation and model-driven segment assembly
CN111710027B (en) * 2020-05-25 2021-05-04 南京林业大学 Tunnel three-dimensional geometric reconstruction method
GB2601569A (en) * 2020-12-07 2022-06-08 Darkvision Tech Inc Precise Registration of Images of Tubulars

Also Published As

Publication number Publication date
GB201120987D0 (en) 2012-01-18
GB2497517B (en) 2016-05-25

Similar Documents

Publication Publication Date Title
GB2497517A (en) Reconstructing 3d surfaces using point clouds derived from overlapping camera images
Pintore et al. State-of-the-art in automatic 3D reconstruction of structured indoor environments
US10679361B2 (en) Multi-view rotoscope contour propagation
US11003956B2 (en) System and method for training a neural network for visual localization based upon learning objects-of-interest dense match regression
EP2272050B1 (en) Using photo collections for three dimensional modeling
EP3398164B1 (en) System for generating 3d images for image recognition based positioning
JP5487298B2 (en) 3D image generation
CN110400363A (en) Map construction method and device based on laser point cloud
CN108961410B (en) Three-dimensional wireframe modeling method and device based on images
CN108053437B (en) Three-dimensional model acquisition method and device based on pose
WO2014151746A2 (en) Determining object volume from mobile device images
EP4040389A1 (en) Determining object structure using physically mounted devices with only partial view of object
WO2023093217A1 (en) Data labeling method and apparatus, and computer device, storage medium and program
CN111161347B (en) Method and equipment for initializing SLAM
US20190079158A1 (en) 4d camera tracking and optical stabilization
WO2018080533A1 (en) Real-time generation of synthetic data from structured light sensors for 3d object pose estimation
CN116051747A (en) House three-dimensional model reconstruction method, device and medium based on missing point cloud data
Özdemir et al. A multi-purpose benchmark for photogrammetric urban 3D reconstruction in a controlled environment
Dahlke et al. True 3D building reconstruction: Façade, roof and overhang modelling from oblique and vertical aerial imagery
KR20100065037A (en) Apparatus and method for extracting depth information
CN116863083A (en) Method and device for processing three-dimensional point cloud data of transformer substation
Barrile et al. 3D models of Cultural Heritage
Dong et al. Utilizing internet photos for indoor mapping and localization: opportunities and challenges
GB2537831A (en) Method of generating a 3D representation of an environment and related apparatus
Cheng et al. Texture mapping 3d planar models of indoor environments with noisy camera poses