WO2018155590A1 - Identification device, identification method, and identification program for identifying the position of a wall surface inside a tunnel appearing in a photographic image

Identification device, identification method, and identification program for identifying the position of a wall surface inside a tunnel appearing in a photographic image

Info

Publication number
WO2018155590A1
Authority
WO
WIPO (PCT)
Prior art keywords
photographic images
image
wall surface
pixel
search image
Prior art date
Application number
PCT/JP2018/006576
Other languages
English (en)
Japanese (ja)
Inventor
緑川 克美
和田 智之
徳人 斎藤
加瀬 究
隆士 道川
祐一 小町
幸太郎 岡村
武晴 村上
亨男 坂下
繁 木暮
Original Assignee
国立研究開発法人理化学研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 国立研究開発法人理化学研究所
Priority to JP2019501810A (granted as JP7045721B2)
Publication of WO2018155590A1

Classifications

    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01C 11/06 Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C 15/00 Surveying instruments or accessories not provided for in groups G01C 1/00 - G01C 13/00
    • G01C 7/06 Tracing profiles of cavities, e.g. tunnels
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 7/00 Image analysis

Definitions

  • The present invention relates to an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel appearing in a photographic image.
  • Patent Document 1 proposes a technique in which multiple video cameras facing the wall are mounted on a camera installation stand so that the distance to the wall is the same for each camera and the fields of view of adjacent cameras overlap at their edges on a plane perpendicular to the wall; by further combining an INS (Inertial Navigation System), an odometer, and a laser scanner, the captured images can be joined to create a developed image with a uniform scale.
  • Non-Patent Document 1 discloses SfM (Structure from Motion) technology, which acquires three-dimensional information of a target from a plurality of two-dimensional images or two-dimensional moving images taken of the three-dimensional target from different positions.
  • By applying the technique disclosed in Non-Patent Document 2 to the wall surface images obtained by the technique disclosed in Patent Document 1, information on the three-dimensional shape of the wall surface in the tunnel can be obtained. Furthermore, by applying the technique disclosed in Non-Patent Document 3, a two-dimensional development view of the wall surface in the tunnel can be obtained.
  • Positioning by GPS (Global Positioning System) cannot be used inside a tunnel.
  • In many cases, the inspection policy is to observe the entire tunnel wall surface at a low frequency (for example, every 5 or 10 years) and to perform follow-up observation of previously discovered deformations at a high frequency (for example, every year). In follow-up observation of a deformation, it is necessary to identify the position the inspector is currently observing, grasp its positional relationship to the deformation to be observed, and guide the inspector to the target location.
  • The present invention solves the above problem, and relates to an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel appearing in a photographic image.
  • The identification device: acquires a plurality of photographic images of the wall surface in the tunnel; calculates the position of each feature point and the local feature amount for each of the plurality of photographic images; constructs a three-dimensional polygon model of the wall surface with reference to the calculated feature point positions and local feature amounts; and generates a wall surface map based on the constructed three-dimensional polygon model.
  • According to the present invention, it is possible to provide an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel appearing in a photographic image.
  • A drawing-substituting photograph representing, in 256-gradation gray scale, the three-dimensional polygon model over which the search image was searched in Experiment 1.
  • A drawing-substituting photograph representing, in two-gradation monochrome, the three-dimensional polygon model over which the search image was searched in Experiment 1.
  • A drawing-substituting photograph representing, in 256-gradation gray scale, an example of a photographic image A of a crack to be searched for.
  • A drawing-substituting photograph representing, in 256-gradation gray scale, an example of a photographic image B of a crack to be searched for.
  • A drawing-substituting photograph representing, in 256-gradation gray scale, an example of a photographic image C of a crack to be searched for in Experiment 2.
  • A drawing-substituting photograph representing, in two-gradation monochrome, an example of a photographic image A of a crack to be searched for in Experiment 2.
  • A drawing-substituting photograph representing, in two-gradation monochrome, the photographic image B that matches the search image B.
  • A drawing-substituting photograph representing, in two-gradation monochrome, the photographic image C that matches the search image C.
  • A drawing-substituting photograph representing, in two-gradation monochrome, the photographic image D that matches the search image D.
  • A drawing-substituting photograph representing, in two-gradation monochrome, the photographic image E that matches the search image E.
  • A drawing-substituting photograph representing, in 256-gradation gray scale, the photographic image A that matches the search image A.
  • A drawing-substituting photograph representing, in 256-gradation gray scale, the photographic image B that matches the search image B.
  • A drawing-substituting photograph representing, in 256-gradation gray scale, the photographic image C that matches the search image C.
  • A drawing-substituting photograph representing, in 256-gradation gray scale, the photographic image D that matches the search image D.
  • A drawing-substituting photograph representing, in 256-gradation gray scale, the photographic image E that matches the search image E.
  • A drawing-substituting photograph representing, in 256-gradation gray scale, the three-dimensional polygon model over which the search images A to E were searched in Experiment 2.
  • A drawing-substituting photograph representing, in two-gradation monochrome, the three-dimensional polygon model over which the search images A to E were searched in Experiment 2.
  • FIG. 1 is an explanatory diagram showing a schematic configuration of an identification apparatus according to an embodiment of the present invention. Hereinafter, an outline will be described with reference to this figure.
  • The identification apparatus 101 is realized by executing a predetermined program on a computer, and comprises a first acquisition unit 111, a first calculation unit 112, a construction unit 113, a first mapping unit 114, a second acquisition unit 121, a second calculation unit 122, a second mapping unit 124, and an output unit 125.
  • the first acquisition unit 111 acquires a plurality of photographic images taken of the wall surface in the tunnel.
  • a plurality of photographic images are taken by a video camera or a still camera as disclosed in Patent Document 1, for example.
  • the first calculation unit 112 calculates the position of each feature point and the local feature amount of a plurality of photographic images.
  • For example, local feature quantities such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features) are used, as in Non-Patent Document 1.
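  • As a concrete illustration of this step, the following is a minimal sketch of feature extraction with OpenCV; the function name and the choice of SIFT over SURF are assumptions made here for illustration, not something the patent specifies.

```python
import cv2

def compute_features(image_path):
    # Detect feature point positions and local feature amounts
    # (SIFT descriptors here; SURF would be used analogously).
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    # keypoints[i].pt is the (x, y) position of the i-th feature point;
    # descriptors[i] is its 128-dimensional local feature vector.
    return keypoints, descriptors
```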
  • The construction unit 113 constructs a three-dimensional polygon model of the wall surface with reference to the feature point positions and local feature amounts calculated for each of the plurality of photographic images, and generates a wall surface map based on the constructed three-dimensional polygon model.
  • Non-Patent Document 1 the SfM technology disclosed in Non-Patent Document 1 is used to construct a three-dimensional polygon model.
  • Since the distribution of the feature points included in the point cloud data is uneven (sparse in some places and dense in others), curved surface reconstruction as disclosed in Non-Patent Document 2, for example, is performed in order to specify a three-dimensional position for an arbitrary pixel of a photographic image.
  • As the wall surface map, a three-dimensional map expressed by the three-dimensional polygon model can be adopted, or a two-dimensional map expressed by a development view of the three-dimensional polygon model can be adopted.
  • the technique disclosed in Non-Patent Document 3 can be applied to the generation of the two-dimensional map.
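  • As a sketch of this construction step: the patent cites the Smooth Signed Distance method (Non-Patent Document 2), while the snippet below substitutes Open3D's Poisson reconstruction, which plays the same role of fitting a polygon surface through the SfM point cloud. All names and parameters here are illustrative.

```python
import open3d as o3d

def build_wall_polygon_model(points_xyz):
    # points_xyz: (N, 3) array of SfM feature points on the tunnel wall.
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    # Surface reconstruction needs oriented normals.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    # Poisson reconstruction as a stand-in for Smooth Signed Distance.
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
    return mesh
```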
  • the first mapping unit 114 associates each pixel of the plurality of photographic images with a position in the wall surface map. This association is called “first mapping”.
  • the processing performed by the first acquisition unit 111, the first calculation unit 112, the construction unit 113, and the first mapping unit 114 described above may be referred to as “initialization”. It is executed when the inspection is performed.
  • The feature point positions and local feature amounts calculated for the plurality of photographic images at initialization, together with the wall surface map, are stored in a recording medium, hard disk, database, or the like so that they can be used in the later-described processing for searching the search image and identifying its position.
  • The processing for performing an inspection after narrowing down the target portion, which occurs at a relatively high frequency, is executed by the second acquisition unit 121, the second calculation unit 122, the second mapping unit 124, and the output unit 125. These processes may be collectively referred to as "search".
  • the second acquisition unit 121 acquires a search image in which a wall surface in the tunnel is newly photographed.
  • The plurality of photographic images acquired by the first acquisition unit 111 are taken so as to cover the wall surface in the tunnel, whereas the search image acquired by the second acquisition unit 121 is a photograph of only a part of the wall surface in the tunnel.
  • the second calculation unit 122 calculates the position of the feature point and the local feature amount of the acquired search image.
  • the second calculation unit 122 calculates the position of the feature point and the local feature amount using the same algorithm as the first calculation unit 112.
  • The second mapping unit 124 compares the feature point positions and local feature amounts calculated for the search image with those calculated for each of the plurality of photographic images, and thereby associates each pixel of the search image with a pixel of one of the plurality of photographic images.
  • That is, the second mapping unit 124 selects, for each feature point in the search image, a feature point in the plurality of photographic images whose local feature amount is similar, so that each feature point in the search image is associated with one of the feature points in the plurality of photographic images. Positions other than feature points are associated by using a Delaunay triangulation. This association is referred to as the "second mapping".
  • The output unit 125 outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  • That is, the output unit 125 obtains positions in the photographic images taken at initialization by applying the second mapping to the position of each pixel of the search image, and then obtains the position of each pixel of the search image on the wall surface map by applying the first mapping to those positions. The sketch below illustrates this composition.
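  • In code, this output step is simply a composition of the two mappings. The sketch below uses dictionaries as stand-ins for the stored mappings; all names are hypothetical.

```python
def locate_on_wall_map(search_pixels, second_mapping, first_mapping):
    # second_mapping: search-image pixel -> (image_id, x, y) in a photographic image
    # first_mapping: (image_id, x, y) -> coordinate in the wall surface map
    positions = []
    for pixel in search_pixels:
        photo_pixel = second_mapping[pixel]
        positions.append(first_mapping[photo_pixel])
    return positions
```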
  • the initialization process and the identification process can be executed at different times and frequencies. Therefore, the computer that executes the initialization process may be the same as or different from the computer that executes the identification process. Therefore, the program for executing the initialization process and the program for executing the identification process may be prepared together as one program, or may be prepared as independent programs.
  • the program is recorded on a recording medium, loaded into a memory included in the computer, and executed by a processor included in the computer.
  • the program is expressed as a collection of codes for realizing each unit of the identification apparatus 101.
  • The first mapping unit 114 converts the three-dimensional coordinates of each point of the three-dimensional polygon model (or the two-dimensional coordinates of each point in the development view) into color information, and builds a colored three-dimensional model by assigning the obtained color information to each point as its color.
  • The first mapping unit 114 then renders the colored three-dimensional model from the shooting positions and shooting directions estimated by SfM. If the rendering result has the same size as each photographic image, the color of the pixel at the same position as each pixel of the photographic image can be read from the rendering result, and by applying the inverse conversion from color back to coordinates, the coordinate information of that pixel can be obtained.
  • The color of each pixel can be calculated with floating-point precision by using an OpenGL FBO (Frame Buffer Object) or the like, and the calculation can be performed on a GPU, so high-precision coordinate values can be computed quickly and robustly.
  • the program can be used as a design drawing of an electronic circuit.
  • an electronic circuit for performing initialization processing or an electronic circuit for performing identification processing is realized as hardware based on the program.
  • the identification apparatus 101 is realized as a whole by executing an initialization program and an identification program by a computer.
  • FIG. 2 is a flowchart showing the control flow of the initialization process.
  • a description will be given with reference to FIG.
  • the computer that executes the initialization program first acquires a plurality of photographic images obtained by photographing the wall surface in the tunnel (step S201).
  • FIG. 3 is a drawing-substituting photograph showing, in 256-gradation gray scale, a plurality of photographic images taken of the wall surface in the tunnel.
  • FIG. 4 is a drawing-substituting photograph showing, in two-gradation monochrome, the plurality of photographic images taken of the wall surface in the tunnel.
  • the illustrated image is a part of a plurality of photographic images acquired in step S201.
  • each frame of a moving image taken while rotating a vehicle-mounted video camera may be used, or an image taken while moving and rotating a still camera may be used.
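  • When a moving image is used, the frames can be sampled in the usual way; a minimal OpenCV sketch follows (the stride of 10 frames is an arbitrary illustrative choice).

```python
import cv2

def extract_frames(video_path, stride=10):
    # Sample every `stride`-th frame of the survey video for use as
    # photographic images in the initialization step.
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```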
  • FIG. 5 is a drawing-substituting photograph showing the point group obtained by SfM in a gray scale of 256 gradations.
  • FIG. 6 is a drawing-substituting photograph in which the point group obtained by SfM is expressed in two-tone monochrome.
  • The computer uses a curved surface reconstruction method to generate polygons passing through the three-dimensional point group, constructs the three-dimensional polygon model, and generates a wall surface map based on the three-dimensional polygon model (step S203). Since the point cloud is a three-dimensional representation of feature points in the photographic images, reconstructing the curved surface makes it possible to estimate the three-dimensional coordinates of an arbitrary point on the wall surface in the tunnel.
  • FIG. 7 is a drawing-substituting photograph that represents the three-dimensional polygon model in 256 gray scales.
  • FIG. 8 is a drawing-substituting photograph that represents a three-dimensional polygon model in monochrome with two gradations.
  • Here, a three-dimensional polygon model generated by the Smooth Signed Distance Surface Reconstruction method disclosed in Non-Patent Document 2, for the wall surface in the tunnel captured in the photographic images, is shown.
  • Since each polygon is constructed so as to pass through the three-dimensional point group, the three-dimensional point group and the polygons are associated with each other by the identity mapping.
  • The generated three-dimensional polygon model is itself a three-dimensional map expressing the three-dimensional coordinate values of each point on the wall surface in the tunnel. This three-dimensional map can be used as the wall surface map.
  • a developed view of the wall surface in the tunnel can be used as a wall surface map.
  • the two-dimensional coordinate value of each point on the wall surface in the tunnel is expressed in the development view.
  • FIG. 9 is a drawing-substituting photograph showing vertices of a three-dimensional polygon model in a gray scale of 256 gradations.
  • FIG. 10 is a drawing-substituting photograph in which the vertex of the three-dimensional polygon model is represented in two-tone monochrome. On the left side of the figure, the cross section of the polygon P is drawn in an arc shape, and the end point corresponds to the boundary vertex.
  • Each internal vertex v_i is locally parameterized by a linear combination of its neighboring vertices v_{i,j}, as written out below.
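  • Written out, this local parameterization takes the usual convex-combination form; the uniform weights on the right are one simple choice, assumed here for illustration (mean-value or cotangent weights are common alternatives):

```latex
v_i = \sum_j \lambda_{i,j}\, v_{i,j}, \qquad
\sum_j \lambda_{i,j} = 1, \quad \lambda_{i,j} \ge 0,
\qquad \text{e.g.}\ \lambda_{i,j} = \frac{1}{\deg(v_i)}
```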
  • FIG. 11 is an explanatory diagram for explaining how the points of the three-dimensional polygon model are projected onto a two-dimensional development view.
  • A line of sight is defined from the imaging position C_i of the image M_i through the pixel m_j.
  • The position of the intersection of this line of sight with the polygon P is the three-dimensional coordinate p_j.
  • Hereinafter, a point itself is represented by its coordinates, as appropriate.
  • FIG. 12 is a drawing-substituting photograph showing a development view of the wall surface in the tunnel in a gray scale of 256 gradations.
  • FIG. 13 is a drawing-substituting photograph in which the development of the wall surface in the tunnel is shown in two-tone monochrome.
  • The computer calculates, for each pixel m_j of each image M_i, the three-dimensional coordinate p_j in the three-dimensional polygon model or the two-dimensional coordinate s_j in the two-dimensional development view, thereby obtaining the first mapping (step S204).
  • The computer stores the association based on this calculation result in a memory or on a hard disk as the first mapping that associates each pixel with the wall surface map, together with the feature point positions and local feature amounts of the plurality of photographic images (step S205), and the process ends.
  • FIG. 14 is a flowchart showing a flow of control of the identification process.
  • Identification processing is realized by a computer executing a program for identification processing.
  • the computer acquires a search image (step S301).
  • the search image is desirably acquired from a camera directly connected to a computer.
  • Each time the inspector takes a search image with the camera, it is possible to learn which point in the wall surface map is currently being observed. Also, when a crack or the like is found and photographed, it is possible to identify whether it is a newly generated deformation or a deformation discovered in the past.
  • the computer calculates the position of the feature point and the local feature amount in the acquired search image (step S302).
  • the same algorithm is used in the initialization process and the identification process for calculating the position of the feature point and the local feature amount.
  • SIFT, SURF, etc. can be adopted.
  • The computer compares the feature point positions and local feature amounts calculated for the search image with those calculated for the plurality of photographic images in the initialization process, and searches the plurality of photographic images for a photographic image that matches the search image (step S303). Through this process, a second mapping representing the correspondence between the pixels of the search image and those of the matching image is obtained.
  • This search is performed as follows. First, feature point pairs are sought, each combining a feature point calculated for the search image with one of the feature points calculated for the plurality of photographic images.
  • A feature point pair consists of one feature point in the search image and one feature point in one of the plurality of photographic images such that the local feature amounts calculated for the two feature points are similar to each other.
  • A photographic image in which many feature points belonging to such pairs appear is regarded as an image that matches the search image.
  • The feature point pairs may include incorrect pairs, which can be removed by using the RANSAC method.
  • That is, several feature point pairs are selected at random, and a coordinate transformation that maps the feature points of one image onto the corresponding feature points of the other image is estimated. This coordinate transformation is applied to the other feature points remaining in the one image, and the degree to which the transformed points fall near the corresponding feature points in the other image (the degree of success of the coordinate transformation) is obtained.
  • The random selection and the evaluation of the degree of success of the coordinate transformation are executed repeatedly, and the coordinate transformation with the highest degree of success is adopted.
  • The coordinate transformation obtained by this process corresponds to a second mapping that associates each pixel of the search image with a pixel of one of the plurality of photographic images. A sketch of this step is given below.
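  • A compact OpenCV sketch of this match-and-filter step follows; the ratio-test threshold of 0.75 and the use of a homography as the coordinate transformation are illustrative assumptions, not requirements of the patent.

```python
import cv2
import numpy as np

def match_with_ransac(kp_search, desc_search, kp_photo, desc_photo):
    # Pair feature points whose local feature amounts are similar
    # (2-nearest-neighbour matching with a ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(desc_search, desc_photo, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:
        return None, []
    src = np.float32([kp_search[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_photo[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC estimates the coordinate transformation while discarding
    # incorrect pairs (outliers).
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None, []
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return H, inliers
```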
  • Furthermore, a Delaunay triangulation can be constructed on the feature points of the search image and transferred to the corresponding feature points of the matched image to obtain a second mapping with higher accuracy.
  • That is, a triangle mesh is generated by Delaunay triangulation on the feature points included in the search image, and each triangular region bounded by feature points is parameterized.
  • By applying the obtained mesh topology as it is to the matched image, the correspondence can be constructed region by region between matching triangles, as sketched below.
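  • A sketch of this per-triangle transfer follows, with SciPy's Delaunay utilities standing in for whatever implementation is actually used; the function name and arguments are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def transfer_point(pt, pts_search, pts_match):
    # pts_search / pts_match: (N, 2) arrays of corresponding feature point
    # positions in the search image and in the matched photographic image.
    tri = Delaunay(pts_search)           # triangulate the search image
    simplex = int(tri.find_simplex(pt))  # triangle containing the pixel
    if simplex < 0:
        return None                      # pixel lies outside the triangulation
    verts = tri.simplices[simplex]
    # Barycentric coordinates of pt within its containing triangle.
    T = tri.transform[simplex]
    b = T[:2].dot(np.asarray(pt) - T[2])
    bary = np.append(b, 1.0 - b.sum())
    # The same barycentric combination of the matched image's vertices
    # gives the corresponding position in the matched image.
    return bary.dot(pts_match[verts])
```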
  • The computer calculates the coordinate value in the wall surface map by applying the second mapping and then the first mapping to each pixel in the search image (step S304).
  • the coordinate value obtained here can be a three-dimensional coordinate or a two-dimensional coordinate.
  • the computer outputs the position in the wall map of the coordinate value calculated for each pixel in the search image (step S305), and ends this process.
  • In this embodiment, drawing into an OpenGL FBO is performed using the GPU, and by referring to the drawn result, the coordinates of each pixel are obtained simply, quickly, and robustly.
  • That is, the three-dimensional coordinates of each point, or the two-dimensional coordinates of each point in the development view, are converted into RGB (Red Green Blue) values, and the converted values are assigned as colors.
  • The three-dimensional polygon model is then perspective-projected from the shooting positions and shooting directions obtained as a result of SfM. The rendering result has the same composition as the original photographic image, and each pixel is drawn with the color obtained by converting its coordinates in the wall surface map.
  • the first mapping can be easily represented by this image.
  • The search image and the matched image are divided into triangles; mutually distinct colors are assigned to the vertices of the triangles, and the interior of each triangle is colored by interpolating the vertex colors according to barycentric coordinates.
  • As the vertex color, a color obtained by converting the two-dimensional coordinate value of the vertex can be used.
  • In OpenGL, each RGB component of a color is expressed as a floating-point number between 0 and 1. Therefore, a three-dimensional coordinate value can be converted into a color by normalizing each of its three elements to a value between 0 and 1 and assigning them to the R, G, and B components. For a two-dimensional coordinate value, it is simplest to use only two of the R, G, and B components.
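  • A minimal sketch of this encoding and its inverse (the bounding box used for normalization is an assumed input):

```python
import numpy as np

def coords_to_rgb(coords, lo, hi):
    # Normalize each coordinate element into [0, 1] so that it can be
    # stored in a floating-point RGB channel of the FBO.
    return (coords - lo) / (hi - lo)

def rgb_to_coords(rgb, lo, hi):
    # Inverse conversion: recover wall-map coordinates from the colors
    # read back out of the rendered image.
    return rgb * (hi - lo) + lo

# Example with an assumed bounding box of the polygon model.
lo, hi = np.array([0.0, 0.0, 0.0]), np.array([50.0, 10.0, 8.0])
p = np.array([12.5, 2.0, 4.0])
assert np.allclose(rgb_to_coords(coords_to_rgb(p, lo, hi), lo, hi), p)
```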
  • A laser scanner conventionally used for tunnel wall inspection, as disclosed in Patent Document 1, can measure cracks on the order of 1 mm or more from an automobile traveling at a speed of 50 km/h.
  • For finer deformations, however, a horizontal resolution of 0.2 mm or less and a depth (ranging) resolution of 0.1 mm or less are required.
  • A high-definition, high-precision laser scanner makes it possible to acquire more detailed information than shooting moving images or still images.
  • However, since measurement with a high-definition, high-precision laser scanner takes time, it is practically difficult to inspect the entire wall surface in the tunnel with such a scanner.
  • With the present embodiment, it is possible to determine where in the wall surface map the range measured by the laser scanner is located. This is described below.
  • When the search image is photographed, the camera photographs the wall surface in the tunnel continuously while maintaining the photographing position and photographing direction within a predetermined error range.
  • Partway through this continuous shooting, the laser beam from the laser scanner is applied to the measurement range.
  • The photographs obtained in the first half of the continuous shooting therefore show the wall surface as it is, and those obtained in the second half show part of the wall surface brightened by the laser light.
  • A photograph obtained in the first half of the continuous shooting is used as the search image.
  • A photograph obtained in the second half of the continuous shooting captures the scan area of the laser scanner, and is hereinafter referred to as the scan image.
  • Since the scan image almost completely overlaps the search image, the correspondence between their pixels can easily be determined.
  • For example, the correspondence between each pixel of the search image and each pixel of the scan image can be obtained by using the feature point extraction and matching techniques of SfM. This correspondence is called the third mapping.
  • The position in the wall surface map of each point of the measurement range in the scan image can then easily be obtained by applying the third mapping, the second mapping, and the first mapping in order.
  • In Experiment 1, a Panasonic DMC-GF6 was used as the camera, and 250 photographic images were taken so as to cover the entire wall surface in the tunnel. The resolution of each photographic image is 1148 x 862 pixels. A three-dimensional model was generated from these photographic images.
  • A search image with a resolution of 4592 x 3598 pixels was then taken.
  • The search image is a close-up of the wall surface; its resolution is higher than that of the photographic images taken at initialization, but its field of view is narrower.
  • FIG. 15 is a drawing-substituting photograph that represents an example of a search image in 256 gray scales in Experiment 1.
  • FIG. 16 is a drawing-substituting photograph that represents an example of a search image in Experiment 1 in two-gradation monochrome. In the search images shown in these figures, a wide T-shaped crack is drawn with emphasis beside it.
  • FIG. 17 is a drawing-substituting photograph that represents, in Experiment 1, a photographic image that matches the search image in a gray scale of 256 gradations.
  • FIG. 18 is a drawing-substituting photograph that represents, in Experiment 1, a photographic image that matches the search image in two-tone monochrome.
  • This is the photographic image (matched image) that matches the search image among the photographic images used at initialization.
  • In the matched image, too, there is a wide T-shaped crack drawn with emphasis beside it. Furthermore, the pattern of the wall around the T-shaped crack is the same in the search image and the matched image.
  • FIG. 19 is a drawing-substituting photograph showing the three-dimensional polygon model for which the search image is searched in Experiment 1 with a gray scale of 256 gradations.
  • FIG. 20 is a drawing-substituting photograph that represents the three-dimensional polygon model for which the search image is searched in Experiment 1 in two gradations.
  • Here, the photographic images referenced at initialization are pasted onto the model as a texture, and the position having the same pattern as the search image is identified as the search result between the U-shaped edges of the tunnel.
  • The T-shaped crack is mapped at that position.
  • The SfM reconstruction in Experiment 2 used 572 images. Each image had a size of 1124 x 750 pixels and was photographed with a Nikon (registered trademark) D5500 camera. Taking all the photographs took about an hour.
  • FIG. 21A is a drawing-substituting photograph showing, in 256-gradation gray scale, an example of a photographic image A of a crack to be searched for in Experiment 2.
  • FIG. 21B is a drawing-substituting photograph showing, in 256-gradation gray scale, an example of a photographic image B of a crack to be searched for in Experiment 2.
  • FIG. 21C is a drawing-substituting photograph showing, in 256-gradation gray scale, an example of a photographic image C of a crack to be searched for in Experiment 2.
  • FIG. 21D is a drawing-substituting photograph showing, in 256-gradation gray scale, an example of a photographic image D of a crack to be searched for in Experiment 2.
  • FIG. 21E is a drawing-substituting photograph showing, in 256-gradation gray scale, an example of a photographic image E of a crack to be searched for in Experiment 2.
  • FIG. 22A is a drawing-substituting photograph showing an example of a cracked photographic image A to be searched in Experiment 2 in monochrome of two gradations.
  • FIG. 22B is a drawing-substituting photograph showing an example of a cracked photographic image B to be searched in Experiment 2 in monochrome of two gradations.
  • FIG. 22C is a drawing-substituting photograph showing an example of a cracked photographic image C to be searched in Experiment 2 in monochrome of two gradations.
  • FIG. 22D is a drawing-substituting photograph showing an example of a cracked photographic image D to be searched in Experiment 2 in monochrome of two gradations.
  • FIG. 22E is a drawing-substituting photograph that represents an example of a cracked photographic image E to be searched in Experiment 2 in monochrome with two gradations. The photograph actually taken is a color image.
  • FIG. 23A is a drawing-substituting photograph that represents a search image for the photograph image A in Experiment 2 in monochrome with two gradations.
  • FIG. 23B is a drawing-substituting photograph that represents the search image for the photograph image B in Experiment 2 in monochrome with two gradations.
  • FIG. 23C is a drawing-substituting photograph in which the search image for the photographic image C in Experiment 2 is represented in monochrome with two gradations.
  • FIG. 23D is a drawing-substituting photograph that represents a search image corresponding to the photograph image D in Experiment 2 in monochrome with two gradations.
  • FIG. 23E is a drawing-substituting photograph in which the search image for the photographic image E in Experiment 2 is represented in monochrome with two gradations. As shown in these figures, the search image is a monochrome image.
  • FIG. 25A is a drawing-substituting photograph showing, in 256-gradation gray scale, the photographic image A that matches the search image A in Experiment 2.
  • FIG. 25B is a drawing-substituting photograph showing, in 256-gradation gray scale, the photographic image B that matches the search image B in Experiment 2.
  • FIG. 25C is a drawing-substituting photograph showing, in 256-gradation gray scale, the photographic image C that matches the search image C in Experiment 2.
  • FIG. 25D is a drawing-substituting photograph showing, in 256-gradation gray scale, the photographic image D that matches the search image D in Experiment 2.
  • FIG. 25E is a drawing-substituting photograph showing, in 256-gradation gray scale, the photographic image E that matches the search image E in Experiment 2.
  • FIG. 24A is a drawing-substituting photograph in which the photographic image A that matches the search image A in Experiment 2 is represented in two-tone monochrome.
  • FIG. 24B is a drawing-substituting photograph in which the photographic image B that matches the search image B in Experiment 2 is represented in two-tone monochrome.
  • FIG. 24C is a drawing-substituting photograph in which the photographic image C that matches the search image C in Experiment 2 is represented in two-tone monochrome.
  • FIG. 24D is a drawing-substituting photograph that represents the photographic image D that matches the search image D in Experiment 2 in monochrome with two gradations.
  • FIG. 24E is a drawing-substituting photograph in which the photographic image E that matches the search image E in Experiment 2 is represented in two-tone monochrome. The photograph actually taken is a color image.
  • FIG. 27 is a drawing-substituting photograph showing, enlarged on the development view of the wall surface in the tunnel in Experiment 2, the results of searching for the search images A to E, represented in two-gradation monochrome.
  • FIG. 26 is a drawing-substituting photograph showing the same search results on the development view, represented in 256-gradation gray scale.
  • These figures are development views of the inner surface of the tunnel; the position of each photographic image found as a search result is surrounded by an ellipse, and the crack is shown enlarged beside it.
  • In Experiment 2, loading the data took 6.7 seconds, and the search itself took 16.7 seconds on average. One search image could thus be searched for in 23.4 seconds on average, showing that the search can be performed in a practical time.
  • FIG. 29 is a drawing-substituting photograph in which the three-dimensional polygon model for which the search images A to E are searched in Experiment 2 is represented in two-tone monochrome.
  • FIG. 28 is a drawing-substituting photograph showing, in 256-gradation gray scale, the three-dimensional polygon model over which the search images A to E were searched in Experiment 2. Since each point in these drawings is associated with a point in the development view, the state of the cracked portions can be confirmed three-dimensionally. It can also be seen that the reconstruction of the 3D model is done in a reasonable time.
  • The identification device according to this embodiment comprises: a first acquisition unit for acquiring a plurality of photographic images of the wall surface in the tunnel; a first calculation unit for calculating the position of each feature point and the local feature amount for each of the plurality of photographic images; and a construction unit that constructs a three-dimensional polygon model of the wall surface with reference to the calculated feature point positions and local feature amounts and generates a wall surface map based on the constructed three-dimensional polygon model;
  • a first mapping unit that associates each pixel of the plurality of photographic images with a position in the wall surface map
  • a second acquisition unit for acquiring a search image in which a wall surface in the tunnel is newly photographed
  • a second calculation unit for calculating the positions of the feature points and the local feature amounts of the acquired search image;
  • a second mapping unit that associates each pixel of the search image with one of the pixels of the plurality of photographic images; and
  • an output unit that outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  • In the identification device, the second acquisition unit can acquire a scan image: an image photographed continuously following the photographing of the search image, while the photographing position and photographing direction are maintained within a predetermined error range, that captures the scan area of the wall surface being scanned by a laser scanner. The second mapping unit then associates each pixel of the scan image with a pixel of the search image by comparing the scan image with the search image, and the output unit outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated, via the search image, with each pixel in the scan area captured in the scan image.
  • the wall surface map can be configured to be a three-dimensional map that represents a three-dimensional coordinate value of the wall surface by the three-dimensional polygon model.
  • the wall surface map can be configured to be a two-dimensional map that represents a two-dimensional coordinate value of the wall surface by a developed view of the three-dimensional polygon model.
  • In the identification device, the first mapping unit can be configured to: convert coordinate values into color information; assign the converted color information as the colors of the points of the three-dimensional polygon model associated with those coordinate values; render the colored three-dimensional polygon model from the shooting position and shooting direction at which each of the plurality of photographic images was taken, generating a corresponding image having the same size as each of the plurality of photographic images; and associate each pixel of the plurality of photographic images with a position in the wall surface map by inversely converting the color drawn at each pixel of each generated corresponding image back into a coordinate value.
  • the rendering can be configured to be executed by a GPU (Graphics Processing Unit).
  • In the identification method according to this embodiment, the identification device: acquires a plurality of photographic images of the wall surface in the tunnel; calculates the position of each feature point and the local feature amount for each of the plurality of photographic images; constructs a three-dimensional polygon model of the wall surface with reference to the calculated feature point positions and local feature amounts; and generates a wall surface map based on the constructed three-dimensional polygon model.
  • The program according to this embodiment comprises: a first program that causes a first computer to function as a first acquisition unit for acquiring a plurality of photographic images of the wall surface in the tunnel, a first calculation unit for calculating the position of each feature point and the local feature amount for each of the plurality of photographic images, a construction unit that constructs a three-dimensional polygon model of the wall surface with reference to the calculated feature point positions and local feature amounts and generates a wall surface map based on the constructed three-dimensional polygon model, and a first mapping unit that associates each pixel of the plurality of photographic images with a position in the wall surface map; and a second program that causes a second computer, or the first computer, to function as a second acquisition unit for acquiring a search image in which the wall surface in the tunnel is newly photographed, a second calculation unit for calculating the positions of the feature points and the local feature amounts of the acquired search image, a second mapping unit that associates each pixel of the search image with one of the pixels of the plurality of photographic images by comparing the feature point positions and local feature amounts calculated for the search image with those calculated for each of the plurality of photographic images, and an output unit that outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  • The programs can be recorded on a non-transitory computer-readable information recording medium and distributed or sold. They can also be distributed or sold via a temporary transmission medium such as a computer communication network.
  • According to the present invention, it is possible to provide an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel appearing in a photographic image.

Abstract

An identification device (101) comprises a first acquisition unit (111) that acquires a plurality of photographic images obtained by capturing images of a wall surface inside a tunnel. A first calculation unit (112) calculates the position of each feature point and a local feature amount in the photographic images. A construction unit (113) constructs a three-dimensional polygon model of the wall surface in order to generate a wall surface map. A first mapping unit (114) associates each pixel of the photographic images with a position in the wall surface map. A second acquisition unit (121) acquires a search image obtained by newly capturing an image of the wall surface. A second calculation unit (122) calculates the position of each feature point and a local feature amount in the search image. A second mapping unit (124) compares the feature point positions and local feature amounts of the photographic images and of the search image in order to associate each pixel of the search image with one of the pixels of the photographic images. An output unit (125) outputs the positions in the wall surface map with which the pixels of the photographic images associated with each pixel of the search image are associated.
PCT/JP2018/006576 2017-02-24 2018-02-22 Identification device, identification method, and identification program for identifying the position of a wall surface inside a tunnel appearing in a photographic image WO2018155590A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019501810A JP7045721B2 (ja) 2017-02-24 2018-02-22 Identification device, identification method, and program for identifying the position of a wall surface in a tunnel appearing in a photographic image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017033771 2017-02-24
JP2017-033771 2017-02-24

Publications (1)

Publication Number Publication Date
WO2018155590A1 true WO2018155590A1 (fr) 2018-08-30

Family

ID=63253896

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/006576 WO2018155590A1 (fr) 2017-02-24 2018-02-22 Identification device, identification method, and identification program for identifying the position of a wall surface inside a tunnel appearing in a photographic image

Country Status (2)

Country Link
JP (1) JP7045721B2 (fr)
WO (1) WO2018155590A1 (fr)



Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005070840A (ja) * 2003-08-25 2005-03-17 East Japan Railway Co Three-dimensional model creation device, three-dimensional model creation method, and three-dimensional model creation program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1176393A2 (fr) * 2000-07-17 2002-01-30 Inco Limited Autonomous mapping and positioning system using point cloud data
CN102564393A (zh) * 2011-12-28 2012-07-11 北京工业大学 Three-dimensional laser monitoring and measuring method for the full cross-section of a tunnel
JP2017503100A (ja) * 2014-01-14 2017-01-26 サンドヴィック マイニング アンド コンストラクション オーワイ Mine vehicle and method of initiating a mine work task
JP2017020972A (ja) * 2015-07-14 2017-01-26 東急建設株式会社 Three-dimensional shape measuring device, three-dimensional shape measuring method, and program
JP2017129508A (ja) * 2016-01-22 2017-07-27 三菱電機株式会社 Self-position estimation system, self-position estimation method, mobile terminal, server, and self-position estimation program

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020158726A1 (fr) * 2019-01-31 2020-08-06 富士フイルム株式会社 Image processing device, image processing method, and program
JP2020153873A (ja) 2019-03-20 2020-09-24 株式会社リコー Diagnosis processing device, diagnosis system, diagnosis processing method, and program
JP7205332B2 (ja) 2019-03-20 2023-01-17 株式会社リコー Diagnosis processing device, diagnosis system, diagnosis processing method, and program
JP6584735B1 (ja) * 2019-03-25 2019-10-02 三菱電機株式会社 Image generation device, image generation method, and image generation program
WO2020194470A1 (fr) * 2019-03-25 2020-10-01 三菱電機株式会社 Image generation device, image generation method, and image generation program
JP2023503426A (ja) 2019-11-19 2023-01-30 サクミ コオペラティヴァ メッカニチ イモラ ソシエタ コオペラティヴァ Apparatus for optical inspection of sanitary ware
JP7450032B2 (ja) 2019-11-19 2024-03-14 サクミ コオペラティヴァ メッカニチ イモラ ソシエタ コオペラティヴァ Apparatus for optical inspection of sanitary ware
JP7197218B1 (ja) 2021-06-15 2022-12-27 ジビル調査設計株式会社 Structure inspection device
JP2023002856A (ja) 2021-06-15 2023-01-11 ジビル調査設計株式会社 Structure inspection device
CN114692272A (zh) 2022-03-25 2022-07-01 中南大学 Method for automatically generating a three-dimensional parameterized tunnel model from two-dimensional design drawings
CN114943706A (zh) 2022-05-27 2022-08-26 宁波艾腾湃智能科技有限公司 Anti-counterfeiting authentication of planar works or products in an absolute two-dimensional spatial state
CN114943706B (zh) * 2022-05-27 2023-04-07 宁波艾腾湃智能科技有限公司 Anti-counterfeiting authentication of planar works or products in an absolute two-dimensional spatial state

Also Published As

Publication number Publication date
JPWO2018155590A1 (ja) 2019-12-12
JP7045721B2 (ja) 2022-04-01

Similar Documents

Publication Publication Date Title
WO2018155590A1 (fr) Identification device, identification method, and identification program for identifying the position of a wall surface inside a tunnel appearing in a photographic image
Lee et al. Skeleton-based 3D reconstruction of as-built pipelines from laser-scan data
Lattanzi et al. 3D scene reconstruction for robotic bridge inspection
Meister et al. When can we use kinectfusion for ground truth acquisition
Huang et al. Semantics-aided 3D change detection on construction sites using UAV-based photogrammetric point clouds
US20160133008A1 (en) Crack data collection method and crack data collection program
KR102113068B1 (ko) Method for automating the construction of digital maps and precision road maps
CN104574393A (zh) Three-dimensional road surface crack image generation system and method
Guarnieri et al. Digital photogrammetry and laser scanning in cultural heritage survey
JP6937642B2 (ja) 表面評価方法及び表面評価装置
JP2016090547A (ja) Crack information collection device and server device for collecting crack information
JP4568845B2 (ja) Change area recognition device
JP2016217941A (ja) Three-dimensional data evaluation device, three-dimensional data measurement system, and three-dimensional measurement method
US20220405878A1 (en) Image processing apparatus, image processing method, and image processing program
CN103424087B (zh) 一种大尺度钢板三维测量拼接方法
Dufour et al. 3D surface measurements with isogeometric stereocorrelation—application to complex shapes
WO2021014807A1 (fr) Appareil de traitement d'informations, procédé de traitement d'informations et programme
Yilmazturk et al. Geometric evaluation of mobile-phone camera images for 3D information
Zhang et al. Structure-from-motion based image unwrapping and stitching for small bore pipe inspections
US11423611B2 (en) Techniques for creating, organizing, integrating, and using georeferenced data structures for civil infrastructure asset management
JP7427615B2 (ja) Information processing device, information processing method, and program
JP4747293B2 (ja) Image processing device, image processing method, and program used therefor
JP6822086B2 (ja) Simulation device, simulation method, and simulation program
JP2006172099A (ja) Change area recognition device and change recognition system
Kolyvas et al. Application of photogrammetry techniques for the visual assessment of vessels’ cargo hold

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18756805

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019501810

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18756805

Country of ref document: EP

Kind code of ref document: A1