WO2018155590A1 - Identifying device, identifying method and program for identifying position of wall surface inside tunnel appearing in photographic image - Google Patents

Identifying device, identifying method and program for identifying position of wall surface inside tunnel appearing in photographic image

Info

Publication number
WO2018155590A1
Authority
WO
WIPO (PCT)
Prior art keywords
photographic images
image
wall surface
pixel
search image
Prior art date
Application number
PCT/JP2018/006576
Other languages
French (fr)
Japanese (ja)
Inventor
緑川 克美
和田 智之
徳人 斎藤
加瀬 究
隆士 道川
祐一 小町
幸太郎 岡村
武晴 村上
亨男 坂下
繁 木暮
Original Assignee
国立研究開発法人理化学研究所 (RIKEN)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 国立研究開発法人理化学研究所 (RIKEN)
Priority to JP2019501810A (JP7045721B2)
Publication of WO2018155590A1


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 — Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 — Interpretation of pictures
    • G01C11/06 — Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C15/00 — Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G01C7/00 — Tracing profiles
    • G01C7/06 — Tracing profiles of cavities, e.g. tunnels
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 — Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T19/00 — Manipulating 3D models or images for computer graphics
    • G06T7/00 — Image analysis

Definitions

  • the present invention relates to an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel reflected in a photographic image.
  • Patent Document 1 proposes a technique in which: multiple video cameras facing the wall are mounted on a camera installation stand so that, on a plane substantially perpendicular to the wall, their distances to the wall are equal and the fields of view of adjacent cameras overlap at their edges; the wall surface is photographed while a measurement vehicle moves along it; and, when the image overlap rate is low or the distance between the cameras and the wall changes so that the image scale varies, measurement data from an INS (Inertial Navigation System), an odometer, and a laser scanner are used to enable joining, creating a developed image with a uniform scale.
  • Non-Patent Document 1 discloses SfM (Structure from Motion) technology, which acquires three-dimensional information of a target from a plurality of two-dimensional images or two-dimensional videos taken of the three-dimensional target from different positions.
  • By applying the technique disclosed in Non-Patent Document 2 to the wall surface images obtained by the technique disclosed in Patent Document 1, information on the three-dimensional shape of the wall surface in the tunnel can be obtained. Furthermore, by applying the technique disclosed in Non-Patent Document 3, a two-dimensional developed view of the wall surface in the tunnel can be obtained.
  • However, GPS (Global Positioning System) cannot be used inside a tunnel, so the position where a deformation is found on the tunnel wall must be identified by other means.
  • Moreover, an inspection scheme is often adopted in which the entire tunnel wall surface is observed at a low frequency (for example, every 5 or 10 years) and follow-up observation of previously discovered deformations is performed at a high frequency (for example, every year). In such follow-up observation, it is necessary to identify the position the inspector is currently observing, grasp its positional relationship to the deformation to be observed, and guide the inspector to the target location.
  • The present invention solves the above problems and relates to an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel appearing in a photographic image.
  • The identification device according to the present invention acquires a plurality of photographic images of the wall surface in a tunnel; calculates the position of each feature point and the local feature amount for each of the plurality of photographic images; constructs a three-dimensional polygon model of the wall surface with reference to the calculated feature point positions and local feature amounts; and generates a wall surface map based on the constructed three-dimensional polygon model.
  • According to the present invention, it is possible to provide an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel appearing in a photographic image.
  • FIG. 19 is a drawing-substituting photograph that represents, in a 256-gradation gray scale, the three-dimensional polygon model in which the search image was located in Experiment 1.
  • FIG. 20 is a drawing-substituting photograph that represents the same three-dimensional polygon model in two-tone monochrome.
  • FIG. 21A is a drawing-substituting photograph that represents, in a 256-gradation gray scale, an example of photographic image A of a crack to be searched for in Experiment 2.
  • FIG. 21B is a drawing-substituting photograph that represents, in a 256-gradation gray scale, an example of photographic image B of a crack to be searched for in Experiment 2.
  • FIG. 21C is a drawing-substituting photograph that represents, in a 256-gradation gray scale, an example of photographic image C of a crack to be searched for in Experiment 2.
  • FIG. 22A is a drawing-substituting photograph that represents, in two-tone monochrome, an example of photographic image A of a crack to be searched for in Experiment 2.
  • FIG. 24B is a drawing-substituting photograph that represents, in two-tone monochrome, the photographic image B that matched search image B.
  • FIG. 24C is a drawing-substituting photograph that represents, in two-tone monochrome, the photographic image C that matched search image C.
  • FIG. 24D is a drawing-substituting photograph that represents, in two-tone monochrome, the photographic image D that matched search image D.
  • FIG. 24E is a drawing-substituting photograph that represents, in two-tone monochrome, the photographic image E that matched search image E.
  • FIG. 25A is a drawing-substituting photograph that represents, in a 256-gradation gray scale, the photographic image A that matched search image A.
  • FIG. 25B is a drawing-substituting photograph that represents, in a 256-gradation gray scale, the photographic image B that matched search image B.
  • FIG. 25C is a drawing-substituting photograph that represents, in a 256-gradation gray scale, the photographic image C that matched search image C.
  • FIG. 25D is a drawing-substituting photograph that represents, in a 256-gradation gray scale, the photographic image D that matched search image D.
  • FIG. 25E is a drawing-substituting photograph that represents, in a 256-gradation gray scale, the photographic image E that matched search image E.
  • FIG. 28 is a drawing-substituting photograph that represents, in a 256-gradation gray scale, the three-dimensional polygon model in which search images A-E were located in Experiment 2.
  • FIG. 29 is a drawing-substituting photograph that represents the same model in two-tone monochrome.
  • FIG. 1 is an explanatory diagram showing a schematic configuration of an identification apparatus according to an embodiment of the present invention. Hereinafter, an outline will be described with reference to this figure.
  • the identification apparatus 101 is realized by executing a predetermined program on a computer, and includes a first acquisition unit 111, a first calculation unit 112, a construction unit 113, a first mapping unit 114, a second acquisition unit 121, a second calculation unit 122, a second mapping unit 124, and an output unit 125.
  • the first acquisition unit 111 acquires a plurality of photographic images taken of the wall surface in the tunnel.
  • a plurality of photographic images are taken by a video camera or a still camera as disclosed in Patent Document 1, for example.
  • the first calculation unit 112 calculates the position of each feature point and the local feature amount for each of the plurality of photographic images.
  • For example, local feature quantities such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features), as used in the SfM technique of Non-Patent Document 1, can be adopted.
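As a concrete illustration, the following minimal sketch computes feature point positions and local feature amounts with OpenCV's SIFT implementation, one of the algorithms named above. The file name and the use of OpenCV are assumptions for illustration, not part of the patent.

```python
# Minimal sketch, assuming OpenCV >= 4.4: compute feature point positions
# and local feature amounts (descriptors) with SIFT for one wall photo.
import cv2

def compute_features(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    # keypoints[i].pt is the (x, y) position of the i-th feature point;
    # descriptors[i] is its 128-dimensional local feature amount.
    return keypoints, descriptors

keypoints, descriptors = compute_features("wall_photo_001.jpg")  # assumed path
print(len(keypoints), "feature points")
```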
  • the construction unit 113 constructs a 3D polygon model of the wall surface with reference to the feature point positions and local feature amounts calculated for each of the plurality of photographic images, and generates a wall surface map based on the constructed 3D polygon model.
  • For example, the SfM technology disclosed in Non-Patent Document 1 is used to construct the three-dimensional polygon model.
  • Since the distribution of the feature points included in the point cloud data is uneven (sparse in some places, dense in others), curved surface reconstruction, for example as disclosed in Non-Patent Document 2, is performed in order to specify a three-dimensional position for an arbitrary pixel of a photographic image.
  • As the wall surface map, a 3D map expressed by the 3D polygon model can be adopted, or a 2D map expressed by a developed view of the 3D polygon model can be adopted.
  • the technique disclosed in Non-Patent Document 3 can be applied to the generation of the two-dimensional map.
  • the first mapping unit 114 associates each pixel of the plurality of photographic images with a position in the wall surface map. This association is called “first mapping”.
  • the processing performed by the first acquisition unit 111, the first calculation unit 112, the construction unit 113, and the first mapping unit 114 described above may be referred to as "initialization". It is executed when the low-frequency inspection of the entire wall surface is performed.
  • the positions and local feature amounts of the feature points in the plurality of photographic images calculated at the time of initialization, together with the wall surface map, are stored in a recording medium, hard disk, database, or the like, because they are used in the processing described later to search for the search image and identify its position.
  • On the other hand, the processing performed at a relatively high frequency, after the target portion has been narrowed down for inspection, is executed by the second acquisition unit 121, the second calculation unit 122, the second mapping unit 124, and the output unit 125. These processes may be collectively referred to as "search".
  • the second acquisition unit 121 acquires a search image in which a wall surface in the tunnel is newly photographed.
  • the plurality of photographic images acquired by the first acquisition unit 111 are taken so as to cover the wall surface in the tunnel, whereas the search image acquired by the second acquisition unit 121 is a photograph of only a part of the wall surface in the tunnel.
  • the second calculation unit 122 calculates the position of the feature point and the local feature amount of the acquired search image.
  • the second calculation unit 122 calculates the position of the feature point and the local feature amount using the same algorithm as the first calculation unit 112.
  • the second mapping unit 124 compares the positions of the feature points and the local feature amounts calculated for the search image with those calculated for each of the plurality of photographic images, thereby associating each pixel of the search image with one of the pixels of the plurality of photographic images.
  • That is, the second mapping unit 124 selects, for each feature point in the search image, feature points in the plurality of photographic images whose local feature amounts are similar to its local feature amount, and associates each feature point in the search image with one of the feature points in the plurality of photographic images. Positions other than feature points are associated by using Delaunay triangulation. This association is referred to as the "second mapping".
  • the output unit 125 outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  • the output unit 125 obtains positions in a plurality of photographic images taken at the time of initialization by applying the second mapping to the positions of the respective pixels of the search image. Then, by applying the first mapping to the position, the position on the wall surface map of each pixel of the search image is obtained.
  • the initialization process and the identification process can be executed at different times and frequencies. Therefore, the computer that executes the initialization process may be the same as or different from the computer that executes the identification process. Therefore, the program for executing the initialization process and the program for executing the identification process may be prepared together as one program, or may be prepared as independent programs.
  • the program is recorded on a recording medium, loaded into a memory included in the computer, and executed by a processor included in the computer.
  • the program is expressed as a collection of codes for realizing each unit of the identification apparatus 101.
  • For example, the first mapping unit 114 converts the three-dimensional coordinates of each point of the three-dimensional polygon model, or the two-dimensional coordinates of each point in the developed view, into color information, and builds a colored three-dimensional model by assigning the obtained color information to each point as its color.
  • the first mapping unit 114 then renders the colored three-dimensional model from the shooting position and shooting direction estimated by SfM. If the rendering result has the same size as each photographic image, the color information of the pixel at the same position as each pixel of the photographic image can be read from the rendering result, and inverse conversion of that color back to coordinates yields the coordinate information.
  • Here, the color information of each pixel can be calculated with floating-point precision by using an OpenGL FBO (Frame Buffer Object) or the like, and the calculation can be carried out on a GPU, making it possible to compute high-precision coordinate values quickly and robustly.
  • the program can be used as a design drawing of an electronic circuit.
  • an electronic circuit for performing initialization processing or an electronic circuit for performing identification processing is realized as hardware based on the program.
  • the identification apparatus 101 is realized as a whole by executing an initialization program and an identification program by a computer.
  • FIG. 2 is a flowchart showing the control flow of the initialization process.
  • a description will be given with reference to FIG.
  • the computer that executes the initialization program first acquires a plurality of photographic images obtained by photographing the wall surface in the tunnel (step S201).
  • FIG. 3 is a drawing substitute photo showing a plurality of photographic images taken on the wall of the tunnel in 256 gray scales.
  • FIG. 4 is a drawing-substituting photograph in which a plurality of photographic images taken on the wall surface of the tunnel are represented in two-tone monochrome.
  • the illustrated image is a part of a plurality of photographic images acquired in step S201.
  • each frame of a moving image taken while rotating a vehicle-mounted video camera may be used, or an image taken while moving and rotating a still camera may be used.
  • FIG. 5 is a drawing-substituting photograph showing the point group obtained by SfM in a gray scale of 256 gradations.
  • FIG. 6 is a drawing-substituting photograph in which the point group obtained by SfM is expressed in two-tone monochrome.
  • the computer uses a curved surface reconstruction method to generate polygons passing through the three-dimensional point group, constructs a three-dimensional polygon model, and generates a wall surface map based on the three-dimensional polygon model (step S203). Since the point cloud is a three-dimensional representation of the feature points of the photographic images, the three-dimensional coordinates of an arbitrary point on the wall surface in the tunnel can be estimated by reconstructing the curved surface.
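The patent relies on the Smooth Signed Distance surface reconstruction of Non-Patent Document 2; as a hedged stand-in, the sketch below uses Open3D's Poisson surface reconstruction, a related implicit-surface method, to turn the SfM point cloud into a polygon model. File names and the depth parameter are illustrative assumptions.

```python
# Hedged sketch of step S203, with Open3D's Poisson reconstruction standing
# in for the Smooth Signed Distance method cited in the text.
import open3d as o3d

pcd = o3d.io.read_point_cloud("sfm_points.ply")  # point cloud from SfM (assumed file)
pcd.estimate_normals()                           # implicit-surface methods need normals
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)                                # depth controls mesh detail
o3d.io.write_triangle_mesh("wall_polygon_model.ply", mesh)
```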
  • FIG. 7 is a drawing-substituting photograph that represents the three-dimensional polygon model in 256 gray scales.
  • FIG. 8 is a drawing-substituting photograph that represents a three-dimensional polygon model in monochrome with two gradations.
  • Here, a three-dimensional polygon model generated for the wall surface in the tunnel photographed in the photographic images, using the Smooth Signed Distance Surface Reconstruction method disclosed in Non-Patent Document 2, is shown.
  • Since each polygon is constructed so as to pass through the three-dimensional point group, the three-dimensional point group and the polygons are associated with each other by an identity mapping.
  • the generated 3D polygon model is a 3D map that expresses the 3D coordinate values of each point on the wall in the tunnel as it is. This three-dimensional map can be used as a wall map.
  • a developed view of the wall surface in the tunnel can be used as a wall surface map.
  • the two-dimensional coordinate value of each point on the wall surface in the tunnel is expressed in the development view.
  • FIG. 9 is a drawing-substituting photograph showing vertices of a three-dimensional polygon model in a gray scale of 256 gradations.
  • FIG. 10 is a drawing-substituting photograph in which the vertices of the three-dimensional polygon model are represented in two-tone monochrome. On the left side of the figure, the cross section of the polygon P is drawn in an arc shape, and its end points correspond to boundary vertices.
  • Each internal vertex v_i is locally parameterized by a linear combination of its neighboring vertices v_{i,j}.
  • FIG. 11 is an explanatory diagram for explaining how the points of the three-dimensional polygon model are projected onto a two-dimensional development view.
  • For the imaging position C_i of an image M_i and a pixel m_j, a line of sight is defined. The position of the intersection of this line of sight with the polygon P gives the three-dimensional coordinate p_j.
  • Hereinafter, a point and its coordinates are identified with each other as appropriate.
  • FIG. 12 is a drawing-substituting photograph showing a development view of the wall surface in the tunnel in a gray scale of 256 gradations.
  • FIG. 13 is a drawing-substituting photograph in which the development of the wall surface in the tunnel is shown in two-tone monochrome.
  • Next, for each pixel m_j of each image M_i, the computer calculates the three-dimensional coordinate p_j in the three-dimensional polygon model, or the two-dimensional coordinate s_j in the two-dimensional developed view, thereby obtaining the first mapping (step S204).
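The patent obtains these intersections efficiently via GPU rendering (described later); purely for illustration, the following sketch computes the intersection p_j of one line of sight with one triangle of the polygon model using the standard Möller-Trumbore test. All values are assumptions.

```python
# Hedged sketch: intersect the line of sight from camera position C_i
# through the viewing direction of pixel m_j with one triangle of the
# polygon model (Moller-Trumbore). The patent computes this via rendering;
# this CPU version only illustrates the geometry.
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the 3D intersection point p_j, or None if the ray misses."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                 # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    if t <= eps:                     # intersection behind the camera
        return None
    return origin + t * direction    # p_j

C_i = np.array([0.0, 0.0, 0.0])      # camera position from SfM (assumed)
d_j = np.array([0.0, 0.0, 1.0])      # viewing ray of pixel m_j (assumed)
tri = [np.array([-1.0, -1.0, 5.0]), np.array([1.0, -1.0, 5.0]),
       np.array([0.0, 1.0, 5.0])]
print(ray_triangle_intersect(C_i, d_j, *tri))  # -> [0. 0. 5.]
```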
  • the computer stores the association based on the calculation result in a memory or on a hard disk as the first mapping, which associates each pixel with the wall surface map, and also stores the positions of the feature points and the local feature amounts of the plurality of photographic images (step S205), after which the process ends.
  • FIG. 14 is a flowchart showing a flow of control of the identification process.
  • Identification processing is realized by a computer executing a program for identification processing.
  • the computer acquires a search image (step S301).
  • the search image is desirably acquired from a camera directly connected to a computer.
  • In this way, each time the inspector takes a search image with the camera, the inspector can learn which point in the wall surface map is currently being observed. Also, when a crack or the like is found and photographed, it is possible to identify whether it is a newly generated deformation or a deformation discovered in the past.
  • the computer calculates the position of the feature point and the local feature amount in the acquired search image (step S302).
  • the same algorithm is used in the initialization process and the identification process for calculating the position of the feature point and the local feature amount.
  • SIFT, SURF, etc. can be adopted.
  • the computer compares the feature point positions and local feature amounts calculated for the search image with those calculated for the plurality of photographic images during initialization, and searches the plurality of photographic images for a photographic image that matches the search image (step S303). With this process, a second mapping representing the correspondence between the pixels of the search image and the matching image is obtained.
  • For example, this search is performed as follows. First, feature point pairs are found, each combining a feature point calculated for the search image with one of the feature points calculated for the plurality of photographic images.
  • That is, a feature point pair consists of one feature point in the search image and one feature point in one of the plurality of photographic images such that the local feature values calculated for the two feature points are similar to each other.
  • A photographic image in which many of the paired feature points appear is regarded as an image that matches the search image.
  • The feature point pairs may include incorrect pairs; such pairs can be removed by using the RANSAC method.
  • That is, a small number of feature point pairs are randomly selected and a coordinate transformation is estimated from them. This coordinate transformation is applied to the other feature points remaining in one image, and the degree to which the transformed points land near the corresponding feature points in the other image (the degree of success of the coordinate transformation) is obtained. The random selection and the evaluation of the coordinate transformation are repeated, and the coordinate transformation with the highest degree of success is selected.
  • the coordinate transformation obtained by this process corresponds to a second mapping in which each pixel of the search image is associated with a pixel of one of the plurality of photographic images.
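A hedged sketch of this matching-and-RANSAC step follows, using OpenCV's brute-force matcher with Lowe's ratio test and RANSAC-based homography estimation as one common realization; the ratio, the threshold, and the choice of a homography as the coordinate transformation are assumptions, not the patent's specification.

```python
# Hedged sketch: pair feature points by descriptor similarity, then let
# RANSAC discard incorrect pairs while estimating a coordinate
# transformation (here a homography, one common choice).
import cv2
import numpy as np

def match_with_ransac(kp_search, desc_search, kp_photo, desc_photo):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(desc_search, desc_photo, k=2)
    # Lowe's ratio test keeps pairs whose best match is clearly better
    # than the second best.
    pairs = [m[0] for m in raw
             if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(pairs) < 4:
        return None, 0
    src = np.float32([kp_search[m.queryIdx].pt for m in pairs])
    dst = np.float32([kp_photo[m.trainIdx].pt for m in pairs])
    # RANSAC repeatedly fits a transformation to random subsets of pairs
    # and keeps the one supported by the most inliers.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = int(mask.sum()) if mask is not None else 0
    return H, inliers

# The photographic image yielding the most inliers is taken as the match image.
```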
  • a Delaunay triangulation can be constructed using the feature points of the search image and applied to the feature points of the matched image to obtain a second mapping with higher accuracy.
  • a triangle mesh is generated using Delaunay triangulation for the feature points included in the search image, and the triangular region surrounded by the feature points is parameterized.
  • If the topology of the obtained mesh is applied as it is to the match image, a corresponding triangulation can be constructed in which corresponding triangles are matched.
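The following hedged sketch illustrates this idea with SciPy: a Delaunay triangulation is built on the search image's feature points, its topology is reused on the matched feature points, and an arbitrary pixel is carried over through barycentric coordinates. The data and the overall flow are illustrative assumptions.

```python
# Hedged sketch: map a pixel of the search image into the match image by
# reusing the Delaunay topology and interpolating with barycentric coords.
import numpy as np
from scipy.spatial import Delaunay

def second_mapping(pt, pts_search, pts_match):
    tri = Delaunay(pts_search)                 # triangulate search-image points
    p = np.asarray(pt, dtype=float)
    idx = int(tri.find_simplex(p[None, :])[0])
    if idx < 0:
        return None                            # pixel outside the triangulation
    verts = tri.simplices[idx]
    T = tri.transform[idx]                     # affine map to barycentric coords
    b = T[:2].dot(p - T[2])
    bary = np.append(b, 1.0 - b.sum())
    return bary.dot(pts_match[verts])          # same triangle in the match image

pts_search = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
pts_match = pts_search + 20.0                  # assumed matched positions
print(second_mapping([25.0, 25.0], pts_search, pts_match))  # -> [45. 45.]
```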
  • the computer calculates the coordinate value in the wall map by applying the second map and the first map to each pixel in the search image (step S304).
  • the coordinate value obtained here can be a three-dimensional coordinate or a two-dimensional coordinate.
  • the computer outputs the position in the wall map of the coordinate value calculated for each pixel in the search image (step S305), and ends this process.
  • In this calculation, an OpenGL FBO is rendered using a GPU, and by referring to the result, the per-pixel computation is realized simply, quickly, and robustly.
  • That is, the three-dimensional coordinates of each point, or the two-dimensional coordinates of each point as shown in the developed view, are converted into RGB (Red, Green, Blue) values, and the converted value is assigned to the point as its color.
  • The three-dimensional polygon model is then perspective-projected from the photographing position and photographing direction obtained as a result of SfM. The rendering result has the same composition as the original photographic image, and each pixel is drawn with the color obtained by converting its coordinates in the wall surface map.
  • the first mapping can be easily represented by this image.
  • Similarly, the search image and the match image are divided into triangles, mutually distinct colors are given to the vertices of the triangles, and the interior of each triangle is filled with colors interpolated from the vertex colors by barycentric coordinates. As the vertex color, a color obtained by converting the two-dimensional coordinate value of the vertex can be used.
  • For example, each RGB component of a color is expressed as a floating-point number between 0 and 1. Therefore, a three-dimensional coordinate value can be converted into a color by normalizing each of its three components to a value between 0 and 1 and assigning them to the R, G, and B components. For a two-dimensional coordinate value, it is easiest to use only two of the R, G, and B components.
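A minimal sketch of this conversion and its inverse, assuming a known bounding box of the wall model (the bounds are illustrative):

```python
# Minimal sketch: normalize coordinates into [0, 1] per channel and back.
import numpy as np

lo = np.array([0.0, 0.0, 0.0])     # assumed minimum x, y, z of the wall model
hi = np.array([50.0, 8.0, 200.0])  # assumed maximum x, y, z of the wall model

def coord_to_color(p):
    """3D coordinate -> RGB floats in [0, 1], one component per channel."""
    return (np.asarray(p) - lo) / (hi - lo)

def color_to_coord(rgb):
    """Inverse conversion: rendered RGB floats -> 3D coordinate."""
    return np.asarray(rgb) * (hi - lo) + lo

p = np.array([12.5, 4.0, 150.0])
assert np.allclose(color_to_coord(coord_to_color(p)), p)
```

With a floating-point framebuffer such as an OpenGL FBO, this round trip loses no precision; for a two-dimensional developed view, two of the three channels suffice.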
  • Here, a laser scanner conventionally used for tunnel wall inspection, as disclosed in Patent Document 1, can measure cracks on the order of 1 mm or more from an automobile traveling at 50 km/h. To measure finer deformations, however, a horizontal resolution of 0.2 mm or less and a depth (ranging) resolution of 0.1 mm or less are required.
  • A high-definition, high-precision laser scanner enables more detailed information to be acquired than shooting moving images or still images does. However, since detection with such a scanner takes time, it is practically difficult to inspect the entire wall surface in the tunnel using one.
  • According to this embodiment, it is possible to determine where in the wall surface map the range measured by the laser scanner is located. This is described below.
  • When the search image is photographed, the camera continuously captures the wall surface in the tunnel while maintaining the photographing position and the photographing direction within a predetermined error range.
  • Partway through this continuous shooting, the laser beam from the laser scanner is applied to the measurement range.
  • Thus, the photographs obtained in the first half of the continuous shooting show the wall surface as it is, while the photographs obtained in the second half show part of the wall surface brightened by the laser light.
  • the photograph obtained in the first half of the continuous shooting is used as the search image.
  • the photograph obtained in the second half of the continuous shooting represents a scan area by the laser scanner, and is hereinafter referred to as a scan image.
  • the scan image can be almost overlapped with the search image, and the correspondence between the pixels can be easily determined.
  • Therefore, the correspondence between each pixel of the search image and each pixel of the scan image can be obtained by using the feature point extraction and matching techniques of SfM. This correspondence is called the "third mapping".
  • Then, the position in the wall surface map can easily be obtained by applying the third mapping, the second mapping, and the first mapping in sequence to each point of the measurement range in the scan image, as sketched below.
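As a hedged illustration of this chain of correspondences (all function names are assumptions, not the patent's API), the composition can be written as:

```python
# Hedged sketch: compose the three mappings to find where a laser-scanned
# point lies in the wall surface map. Each argument is assumed to be a
# callable pixel-position transform obtained as described above.
def locate_scan_point(p_scan, third_mapping, second_mapping, first_mapping):
    p_search = third_mapping(p_scan)     # scan image -> search image
    p_photo = second_mapping(p_search)   # search image -> photographic image
    return first_mapping(p_photo)        # photographic image -> wall surface map
```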
  • In Experiment 1, a Panasonic DMC-GF6 was used as the camera, and 250 photographic images were taken so as to cover the entire wall surface in the tunnel. The resolution of each photographic image is 1148 x 862 pixels. A three-dimensional model was generated from these photographic images.
  • Then, a search image with a resolution of 4592 x 3598 pixels was taken.
  • the search image is a close-up of the wall surface; its resolution is higher than that of the photographic images taken at initialization, but its field of view is narrower.
  • FIG. 15 is a drawing-substituting photograph that represents an example of a search image in 256 gray scales in Experiment 1.
  • FIG. 16 is a drawing-substituting photograph that represents an example of a search image in Experiment 1 in two-tone monochrome. In the search image shown in these figures, a wide T-shaped crack appears, highlighted at the side.
  • FIG. 17 is a drawing-substituting photograph that represents, in Experiment 1, a photographic image that matches the search image in a gray scale of 256 gradations.
  • FIG. 18 is a drawing-substituting photograph that represents, in Experiment 1, a photographic image that matches the search image in two-tone monochrome.
  • This is a photographic image (match image) that matches the search image among the photographic images used at the time of initialization.
  • In the match image as well, the wide T-shaped crack appears, highlighted at the side. Further, the pattern of the wall around the T-shaped crack matches between the search image and the match image.
  • FIG. 19 is a drawing-substituting photograph showing the three-dimensional polygon model for which the search image is searched in Experiment 1 with a gray scale of 256 gradations.
  • FIG. 20 is a drawing-substituting photograph that represents the three-dimensional polygon model for which the search image is searched in Experiment 1 in two gradations.
  • In these figures, the photographic images referenced at initialization are pasted onto the model as textures, and the position having the same pattern as the search image is identified as the search result within the U-shaped cross section of the tunnel; the T-shaped crack is mapped there.
  • In Experiment 2, the SfM reconstruction used 572 images. Each image had a size of 1124 x 750 pixels and was photographed with a Nikon (registered trademark) D5500 camera. Taking all the photographs took about an hour.
  • FIG. 21A is a drawing-substituting photograph showing an example of a cracked photographic image A to be searched in Experiment 2 in a gray scale of 256 gradations.
  • FIG. 21B is a drawing-substituting photograph showing an example of a cracked photographic image B to be searched in Experiment 2 in a gray scale of 256 gradations.
  • FIG. 21C is a drawing-substituting photograph that represents an example of a cracked photographic image C to be searched in Experiment 2 in a gray scale of 256 gradations.
  • FIG. 21D is a drawing-substituting photograph showing an example of a cracked photographic image D to be searched in Experiment 2 in a gray scale of 256 gradations.
  • FIG. 21E is a drawing-substituting photograph showing an example of a cracked photographic image E to be searched in Experiment 2 in a gray scale of 256 gradations.
  • FIG. 22A is a drawing-substituting photograph showing an example of a cracked photographic image A to be searched in Experiment 2 in monochrome of two gradations.
  • FIG. 22B is a drawing-substituting photograph showing an example of a cracked photographic image B to be searched in Experiment 2 in monochrome of two gradations.
  • FIG. 22C is a drawing-substituting photograph showing an example of a cracked photographic image C to be searched in Experiment 2 in monochrome of two gradations.
  • FIG. 22D is a drawing-substituting photograph showing an example of a cracked photographic image D to be searched in Experiment 2 in monochrome of two gradations.
  • FIG. 22E is a drawing-substituting photograph that represents an example of a cracked photographic image E to be searched in Experiment 2 in monochrome with two gradations. The photograph actually taken is a color image.
  • FIG. 23A is a drawing-substituting photograph that represents a search image for the photograph image A in Experiment 2 in monochrome with two gradations.
  • FIG. 23B is a drawing-substituting photograph that represents the search image for the photograph image B in Experiment 2 in monochrome with two gradations.
  • FIG. 23C is a drawing-substituting photograph in which the search image for the photographic image C in Experiment 2 is represented in monochrome with two gradations.
  • FIG. 23D is a drawing-substituting photograph that represents a search image corresponding to the photograph image D in Experiment 2 in monochrome with two gradations.
  • FIG. 23E is a drawing-substituting photograph in which the search image for the photographic image E in Experiment 2 is represented in monochrome with two gradations. As shown in these figures, the search image is a monochrome image.
  • FIG. 25A is a drawing-substituting photograph in which the photographic image A that matches the search image A in Experiment 2 is represented by a gray scale of 256 gradations.
  • FIG. 25B is a drawing-substituting photograph in which the photographic image B matched with the search image B in Experiment 2 is represented by a gray scale of 256 gradations.
  • FIG. 25C is a drawing-substituting photograph in which the photographic image C matching the search image C in Experiment 2 is represented by a gray scale of 256 gradations.
  • FIG. 25D is a drawing-substituting photograph in which the photographic image D that matched the search image D in Experiment 2 is represented by a gray scale of 256 gradations.
  • FIG. 25E is a drawing-substituting photograph in which the photographic image E that matches the search image E in Experiment 2 is represented by a gray scale of 256 gradations.
  • FIG. 24A is a drawing-substituting photograph in which the photographic image A that matches the search image A in Experiment 2 is represented in two-tone monochrome.
  • FIG. 24B is a drawing-substituting photograph in which the photographic image B that matches the search image B in Experiment 2 is represented in two-tone monochrome.
  • FIG. 24C is a drawing-substituting photograph in which the photographic image C that matches the search image C in Experiment 2 is represented in two-tone monochrome.
  • FIG. 24D is a drawing-substituting photograph that represents the photographic image D that matches the search image D in Experiment 2 in monochrome with two gradations.
  • FIG. 24E is a drawing-substituting photograph in which the photographic image E that matches the search image E in Experiment 2 is represented in two-tone monochrome. The photograph actually taken is a color image.
  • FIG. 27 is a drawing-substituting photograph that expands the search result of the search image A-E in the development of the wall surface in the tunnel in Experiment 2 and expresses it in two-tone monochrome.
  • FIG. 26 is a drawing-substituting photograph that is obtained by enlarging the result of searching for the search image A-E in the development of the wall surface in the tunnel in Experiment 2 and expressing it in 256 gray scales.
  • These figures are developed views of the inner surface of the tunnel, showing the position of each photographic image found as a search result surrounded by an ellipse, with the crack enlarged beside it.
  • the search took 6.7 seconds to load the data, and the search itself took an average of 16.7 seconds. Therefore, it took 23.4 seconds on average to search for one search image, and it was found that the search can be performed in a practical time.
  • FIG. 29 is a drawing-substituting photograph in which the three-dimensional polygon model for which the search images A to E are searched in Experiment 2 is represented in two-tone monochrome.
  • FIG. 28 is a drawing-substituting photograph showing the three-dimensional polygon model in which the search images A to E are searched in Experiment 2 in a gray scale of 256 gradations. Since each point in these drawings is associated with each point in the development view, it is possible to confirm the state of the cracked portion in a three-dimensional manner. It can also be seen that the reconstruction of the 3D model is done in a reasonable time.
  • As described above, the identification device according to this embodiment includes: a first acquisition unit that acquires a plurality of photographic images of the wall surface in the tunnel; a first calculation unit that calculates the position of each feature point and the local feature amount for each of the plurality of photographic images; and a construction unit that constructs a three-dimensional polygon model of the wall surface with reference to the calculated feature point positions and local feature amounts and generates a wall surface map based on the constructed three-dimensional polygon model.
  • a first mapping unit that associates each pixel of the plurality of photographic images with a position in the wall surface map
  • a second acquisition unit for acquiring a search image in which a wall surface in the tunnel is newly photographed
  • a second calculator for calculating the position of the feature point of the acquired search image and the local feature amount;
  • a second mapping unit that associates each pixel of the search image with one of the pixels of the plurality of photographic images by comparing the calculated feature point positions and local feature amounts of the search image with those of the plurality of photographic images
  • An output unit that outputs a position in the wall map in which any pixel of the plurality of photographic images associated with each pixel of the search image is associated;
  • the second acquisition unit can acquire a scan image, which is photographed continuously following the shooting of the search image while the shooting position and the shooting direction are maintained within a predetermined error range, and in which a scan area on the wall surface scanned by a laser scanner is captured. The second mapping unit can associate each pixel of the scan image with a pixel of the search image by comparing the scan image with the search image. The output unit can then output the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with the pixel of the search image associated with each pixel in the scan area captured in the scan image.
  • the wall surface map can be configured to be a three-dimensional map that represents a three-dimensional coordinate value of the wall surface by the three-dimensional polygon model.
  • the wall surface map can be configured to be a two-dimensional map that represents a two-dimensional coordinate value of the wall surface by a developed view of the three-dimensional polygon model.
  • the first mapping unit can be configured to: convert coordinate values into color information; assign the converted color information as the color of the point of the three-dimensional polygon model associated with each coordinate value; render the color-assigned three-dimensional polygon model from the shooting position and shooting direction at which each of the plurality of photographic images was taken, generating a corresponding image having the same size as each of the plurality of photographic images; and inversely convert the color drawn at each pixel of the generated corresponding image into coordinate values, thereby associating each pixel of the plurality of photographic images with a position in the wall surface map.
  • the rendering can be configured to be executed by a GPU (Graphics Processing Unit).
  • In the identification method according to this embodiment, the identification device: acquires a plurality of photographic images of the wall surface in the tunnel; calculates the position of each feature point and the local feature amount for each of the plurality of photographic images; constructs a three-dimensional polygon model of the wall surface with reference to the calculated feature point positions and local feature amounts; and generates a wall surface map based on the constructed three-dimensional polygon model.
  • The program according to this embodiment comprises: a first program that causes a first computer to function as a first acquisition unit that acquires a plurality of photographic images of the wall surface in the tunnel, a first calculation unit that calculates the position of each feature point and the local feature amount for each of the plurality of photographic images, a construction unit that constructs a three-dimensional polygon model of the wall surface with reference to the calculated feature point positions and local feature amounts and generates a wall surface map based on the constructed three-dimensional polygon model, and a first mapping unit that associates each pixel of the plurality of photographic images with a position in the wall surface map; and a second program that causes a second computer, or the first computer, to function as a second acquisition unit that acquires a search image in which the wall surface in the tunnel is newly photographed, a second calculation unit that calculates the position of each feature point and the local feature amount of the acquired search image, a second mapping unit that associates each pixel of the search image with one of the pixels of the plurality of photographic images by comparing the calculated feature point positions and local feature amounts, and an output unit that outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  • the program can be recorded and distributed and sold on a non-transitory computer-readable information recording medium. It can also be distributed and sold via a temporary transmission medium such as a computer communication network.
  • As described above, according to the present invention, it is possible to provide an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel appearing in a photographic image.

Abstract

In an identifying device (101), a first acquiring unit (111) acquires a plurality of photographic images obtained by capturing images of a wall surface inside a tunnel. A first calculating unit (112) calculates a position of a feature point and a local feature quantity in the photographic images. A constructing unit (113) constructs a three-dimensional polygonal model of the wall surface to generate a wall surface map. A first mapping unit (114) associates each pixel in the photographic images with a position in the wall surface map. A second acquiring unit (121) acquires a search image obtained by newly capturing an image of the wall surface. A second calculating unit (122) calculates a position of a feature point and a local feature quantity in the search image. A second mapping unit (124) compares the positions of the feature points and the local feature quantities in the photographic images and the search image to associate each pixel in the search image with one of the pixels in the photographic images. An output unit (125) outputs the positions in the wall surface map to which the pixels in the photographic images associated with each pixel in the search image are associated.

Description

IDENTIFICATION DEVICE, IDENTIFICATION METHOD, AND PROGRAM FOR IDENTIFYING THE POSITION OF A WALL SURFACE IN A TUNNEL APPEARING IN A PHOTOGRAPHIC IMAGE
  The present invention relates to an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel appearing in a photographic image.
  Conventionally, survey devices for grasping the condition of tunnel wall surfaces have been proposed.
  For example, Patent Document 1 proposes the following technique. Multiple video cameras facing the wall are mounted on a camera installation stand so that, on a plane substantially perpendicular to the wall, their distances to the wall are equal and the fields of view of adjacent cameras overlap at their edges, and the wall surface is photographed while a measurement vehicle moves along it at a substantially constant distance from the wall. By facing the video cameras directly at the wall, equalizing the distances to the wall, and overlapping the ends of the imaging fields of view, wall images with a uniform scale are captured. When the image overlap rate is low, or when the distance between the imaging means and the wall changes so that the image scale varies, measurement data from an INS (Inertial Navigation System), an odometer, and a laser scanner are used to enable joining, creating a developed image with a uniform scale.
  Non-Patent Document 1 discloses SfM (Structure from Motion) technology, which acquires three-dimensional information of a target from a plurality of two-dimensional images or two-dimensional videos taken of the three-dimensional target from different positions.
  By applying the technique disclosed in Non-Patent Document 2 to the wall surface images obtained by the technique disclosed in Patent Document 1, information on the three-dimensional shape of the wall surface in the tunnel can be obtained. Furthermore, by applying the technique disclosed in Non-Patent Document 3, a two-dimensional developed view of the wall surface in the tunnel can be obtained.
  Here, in order to find deformations such as cracks, water leaks, and cavities in a tunnel, it is necessary to conduct periodic inspections and carry out appropriate repairs.
Patent Document 1: JP 2004-12152 A
  However, since GPS (Global Positioning System) cannot be used inside a tunnel, it is necessary to identify appropriately the position where a deformation is found on the tunnel wall. In particular, when inspection and repair are performed on different days, work efficiency requires determining at high speed whether the position of a deformation previously discovered by an inspector and the position currently being observed by a repairer coincide, and what their positional relationship is.
  In addition, an inspection scheme is often adopted in which the entire tunnel wall surface is observed at a low frequency (for example, every 5 or 10 years) and follow-up observation of previously discovered deformations is performed at a high frequency (for example, every year). In such follow-up observation, it is necessary to identify the position the inspector is currently observing, grasp its positional relationship to the deformation to be observed, and guide the inspector to the target location.
  Furthermore, when a deformation is found on the tunnel wall, there is a strong demand for technology that identifies at high speed the position where the deformation was found, from an image of the deformation and the tunnel information obtained at a previous inspection.
  The present invention solves the above problems and relates to an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel appearing in a photographic image.
  The identification device according to the present invention:
  acquires a plurality of photographic images of the wall surface in a tunnel;
  calculates the position of each feature point and the local feature amount for each of the plurality of photographic images;
  constructs a three-dimensional polygon model of the wall surface with reference to the calculated feature point positions and local feature amounts, and generates a wall surface map based on the constructed three-dimensional polygon model;
  associates each pixel of the plurality of photographic images with a position in the wall surface map;
  acquires a search image in which the wall surface in the tunnel is newly photographed;
  calculates the position of each feature point and the local feature amount of the acquired search image;
  associates each pixel of the search image with one of the pixels of the plurality of photographic images by comparing the feature point positions and local feature amounts calculated for the search image with those calculated for each of the plurality of photographic images; and
  outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  According to the present invention, it is possible to provide an identification device, an identification method, and a program for identifying the position of a wall surface in a tunnel appearing in a photographic image.
FIG. 1 is an explanatory diagram showing the schematic configuration of an identification device according to an embodiment of the present invention.
FIG. 2 is a flowchart showing the control flow of the initialization process.
FIG. 3 is a drawing-substituting photograph representing, in a 256-gradation gray scale, a plurality of photographic images of the tunnel wall surface.
FIG. 4 is a drawing-substituting photograph representing the same photographic images in two-tone monochrome.
FIG. 5 is a drawing-substituting photograph representing, in a 256-gradation gray scale, the point cloud obtained by SfM.
FIG. 6 is a drawing-substituting photograph representing the same point cloud in two-tone monochrome.
FIG. 7 is a drawing-substituting photograph representing the three-dimensional polygon model in a 256-gradation gray scale.
FIG. 8 is a drawing-substituting photograph representing the three-dimensional polygon model in two-tone monochrome.
FIG. 9 is a drawing-substituting photograph representing the vertices of the three-dimensional polygon model in a 256-gradation gray scale.
FIG. 10 is a drawing-substituting photograph representing the vertices of the three-dimensional polygon model in two-tone monochrome.
FIG. 11 is an explanatory diagram explaining how points of the three-dimensional polygon model are projected onto the two-dimensional developed view.
FIG. 12 is a drawing-substituting photograph representing the developed view of the tunnel wall surface in a 256-gradation gray scale.
FIG. 13 is a drawing-substituting photograph representing the developed view of the tunnel wall surface in two-tone monochrome.
FIG. 14 is a flowchart showing the control flow of the identification process.
FIG. 15 is a drawing-substituting photograph representing an example of a search image in Experiment 1 in a 256-gradation gray scale.
FIG. 16 is a drawing-substituting photograph representing the example search image in Experiment 1 in two-tone monochrome.
FIG. 17 is a drawing-substituting photograph representing, in a 256-gradation gray scale, the photographic image that matches the search image in Experiment 1.
FIG. 18 is a drawing-substituting photograph representing the same matching photographic image in two-tone monochrome.
FIG. 19 is a drawing-substituting photograph representing, in a 256-gradation gray scale, the three-dimensional polygon model in which the search image was located in Experiment 1.
FIG. 20 is a drawing-substituting photograph representing the same model in two-tone monochrome.
FIG. 21A is a drawing-substituting photograph representing, in a 256-gradation gray scale, an example of photographic image A of a crack to be searched for in Experiment 2.
FIG. 21B is a drawing-substituting photograph representing, in a 256-gradation gray scale, an example of photographic image B of a crack to be searched for in Experiment 2.
FIG. 21C is a drawing-substituting photograph representing, in a 256-gradation gray scale, an example of photographic image C of a crack to be searched for in Experiment 2.
FIG. 21D is a drawing-substituting photograph representing, in a 256-gradation gray scale, an example of photographic image D of a crack to be searched for in Experiment 2.
FIG. 21E is a drawing-substituting photograph representing, in a 256-gradation gray scale, an example of photographic image E of a crack to be searched for in Experiment 2.
FIGS. 22A-22E are drawing-substituting photographs representing the same photographic images A-E in two-tone monochrome.
FIGS. 23A-23E are drawing-substituting photographs representing, in two-tone monochrome, the search images corresponding to photographic images A-E in Experiment 2.
FIGS. 24A-24E are drawing-substituting photographs representing, in two-tone monochrome, the photographic images A-E that matched search images A-E in Experiment 2.
FIGS. 25A-25E are drawing-substituting photographs representing the same matched photographic images A-E in a 256-gradation gray scale.
FIG. 26 is a drawing-substituting photograph representing, enlarged and in a 256-gradation gray scale, the result of locating search images A-E in the developed view of the tunnel wall surface in Experiment 2.
FIG. 27 is a drawing-substituting photograph representing the same result in two-tone monochrome.
FIG. 28 is a drawing-substituting photograph representing, in a 256-gradation gray scale, the three-dimensional polygon model in which search images A-E were located in Experiment 2.
FIG. 29 is a drawing-substituting photograph representing the same model in two-tone monochrome.
Embodiments of the present invention are described below. Note that the embodiments are for explanation and do not limit the scope of the present invention. Accordingly, a person skilled in the art can adopt embodiments in which some or all of the elements of these embodiments are replaced with equivalents. The elements described in each example may also be omitted as appropriate depending on the application. Thus, any embodiment configured according to the principles of the present invention falls within the scope of the present invention.

(Overview Configuration)

FIG. 1 is an explanatory diagram showing the schematic configuration of an identification device according to an embodiment of the present invention. An overview is given below with reference to this figure.
As shown in the figure, the identification device 101 according to the present embodiment is realized by a computer executing a predetermined program, and comprises a first acquisition unit 111, a first calculation unit 112, a construction unit 113, a first mapping unit 114, a second acquisition unit 121, a second calculation unit 122, a second mapping unit 124, and an output unit 125.

Here, the first acquisition unit 111 acquires a plurality of photographic images in which the wall surface inside the tunnel has been photographed.

The plurality of photographic images are taken by, for example, a video camera or a still camera as disclosed in Patent Document 1.

Meanwhile, the first calculation unit 112 calculates the positions of feature points and the local feature amounts for each of the plurality of photographic images.

In an embodiment applying the SfM (Structure from Motion) technique disclosed in Non-Patent Document 1, local feature amounts such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded Up Robust Features) are used at the feature points.
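As an illustration, the following minimal sketch extracts feature point positions and local descriptors with OpenCV's SIFT implementation; the use of OpenCV and the file name are assumptions of this sketch, not part of the embodiment.

    # Minimal sketch: SIFT feature extraction for one photographic image.
    import cv2

    def extract_features(image_path):
        """Return feature point positions and local descriptors."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(gray, None)
        positions = [kp.pt for kp in keypoints]   # (x, y) pixel coordinates
        return positions, descriptors

    positions, descriptors = extract_features("wall_0001.jpg")  # hypothetical file

SURF could be substituted in the same way where its implementation is available.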
Further, the construction unit 113 constructs a three-dimensional polygon model of the wall surface with reference to the feature point positions and local feature amounts calculated for each of the plurality of photographic images, and generates a wall surface map based on the constructed three-dimensional polygon model.

For the construction of the three-dimensional polygon model, for example, the SfM technique disclosed in Non-Patent Document 1 is used.

With the SfM technique, point cloud data representing the three-dimensional positions of the feature points, and photographing data representing at which photographing position and in which photographing direction each photographic image was taken, are estimated from the plurality of photographic images and output.

Since the distribution of the feature points included in the point cloud data is uneven, surface reconstruction as disclosed, for example, in Non-Patent Document 2 is performed in order to identify the three-dimensional position corresponding to an arbitrary pixel of a photographic image.

Here, as the wall surface map, a three-dimensional map expressed by the three-dimensional polygon model may be adopted, or a two-dimensional map expressed by a development obtained by unfolding the three-dimensional polygon model may be adopted. For generating the two-dimensional map, for example, the technique disclosed in Non-Patent Document 3 can be applied.

Then, the first mapping unit 114 associates each pixel of each of the plurality of photographic images with a position in the wall surface map. This association is called the "first mapping".

The processing by the first acquisition unit 111, the first calculation unit 112, the construction unit 113, and the first mapping unit 114 described above is sometimes called "initialization", and is executed when the entire wall surface inside the tunnel is inspected, infrequently and over an extended time.

The feature point positions and local feature amounts in the plurality of photographic images calculated during initialization, as well as the wall surface map, are stored in recording media, hard disks, databases, or the like so that search images can be retrieved and positions identified in the processing described later.

After initialization, the processing for inspecting at relatively high frequency, with the target locations narrowed down, is executed by the second acquisition unit 121, the second calculation unit 122, the second mapping unit 124, and the output unit 125, as follows. These processes are sometimes collectively called "search".

That is, the second acquisition unit 121 acquires a search image in which the wall surface inside the tunnel has been newly photographed.

The plurality of photographic images acquired by the first acquisition unit 111 are taken so as to cover the wall surface inside the tunnel, whereas the search image acquired by the second acquisition unit 121 is a photograph of the position on the wall surface to which the inspector is currently paying attention.

Meanwhile, the second calculation unit 122 calculates the positions of the feature points and the local feature amounts of the acquired search image.

The second calculation unit 122 calculates the feature point positions and local feature amounts by the same algorithm as the first calculation unit 112.

Further, the second mapping unit 124 associates each pixel of the search image with one of the pixels of the plurality of photographic images by comparing the feature point positions and local feature amounts calculated for the search image with those calculated for each of the plurality of photographic images.

Specifically, the second mapping unit 124 selects, from the plurality of photographic images, feature points whose local feature amounts are similar to those of the feature points in the search image, thereby associating each feature point in the search image with one of the feature points in the plurality of photographic images. Positions other than feature points are associated by using Delaunay triangulation. This association is called the "second mapping".

Then, the output unit 125 outputs the position in the wall surface map that is associated with the pixel of the plurality of photographic images associated with each pixel of the search image.

That is, the output unit 125 applies the second mapping to the position of each pixel of the search image to obtain a position in the plurality of photographic images taken at initialization, and then applies the first mapping to that position to determine the position of each pixel of the search image in the wall surface map.
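As a minimal sketch of this composition (function names and data shapes are hypothetical, for illustration only):

    # The second mapping takes a search-image pixel to a pixel of one of the
    # photographic images from initialization; the first mapping takes that
    # pixel to a position in the wall surface map.
    def locate_in_wall_map(search_pixel, second_mapping, first_mapping):
        photo_id, photo_pixel = second_mapping(search_pixel)
        return first_mapping(photo_id, photo_pixel)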
The initialization process and the identification process can be executed at different times and with different frequencies. Accordingly, the computer that executes the initialization process may be the same as, or different from, the computer that executes the identification process. Likewise, the program for executing the initialization process and the program for executing the identification process may be prepared together as a single program or as independent programs.

Typically, the program is recorded on a recording medium, loaded into the memory of a computer, and executed by the processor of the computer. The program is expressed as a collection of code for realizing each unit of the identification device 101.

By making efficient use of a GPU (Graphics Processing Unit), the computation can also be accelerated. For example, by using the rendering functionality of OpenGL, the coordinates for each pixel of the plurality of photographic images can be computed easily.

That is, the first mapping unit 114 converts the three-dimensional coordinates of each point of the three-dimensional polygon model, or the two-dimensional coordinates of each point in the development, into color information, and assigns the obtained color information as the color of that point, thereby constructing a three-dimensional colored model.

Then, the first mapping unit 114 renders the three-dimensional colored model from the photographing position and photographing direction estimated by SfM. If the rendering result is given the same size as each photographic image, the coordinate information of each pixel can be obtained by reading, from the rendering result, the color information of the pixel located at the same position as that pixel in the photographic image and converting it back into coordinates.

The color information of each pixel can be computed with floating-point precision by using an OpenGL FBO (Frame Buffer Object) or the like, and since the computation can be performed on the GPU, highly accurate coordinate values can be computed quickly and robustly.

In addition, by applying a technology such as an FPGA (Field Programmable Gate Array), the program can also serve as the design of an electronic circuit. In that aspect, an electronic circuit for performing the initialization process, or one for performing the identification process, is realized as hardware based on the program.

In the following, for ease of understanding, an example is described in which a computer realizes the identification device 101 as a whole by executing an initialization program and an identification program.
(Initialization Process)

FIG. 2 is a flowchart showing the flow of control of the initialization process. A description is given below with reference to this figure.
When this process starts, the computer executing the initialization program first acquires a plurality of photographic images of the wall surface inside the tunnel (step S201).

FIG. 3 is a drawing-substitute photograph showing, in 256-level grayscale, a plurality of photographic images of the tunnel wall surface; FIG. 4 shows the same images in two-level monochrome. The illustrated images are part of the plurality of photographic images acquired in step S201. These photographic images may be individual frames of a moving image taken while rotating a vehicle-mounted video camera, or images taken while moving and rotating a still camera.

Next, by SfM, the computer estimates, from the plurality of photographic images and using the feature point groups contained in them, a three-dimensional point cloud and the photographing position and photographing direction of the camera for each photographic image (step S202). FIG. 5 is a drawing-substitute photograph showing the point cloud obtained by SfM in 256-level grayscale; FIG. 6 shows it in two-level monochrome.

As the estimation result of SfM, each point included in the three-dimensional point cloud is output together with its three-dimensional coordinate values and with the photographing position and photographing direction at which that point was captured in each photographic image.

Next, the computer generates polygons passing through the three-dimensional point cloud by a surface reconstruction method, constructs a three-dimensional polygon model, and generates a wall surface map based on it (step S203). Since the point cloud is a three-dimensional representation of the feature points in the photographic images, reconstructing the surface makes it possible to estimate the three-dimensional coordinates of an arbitrary point on the wall surface inside the tunnel.

For surface reconstruction, a widely used approach takes point cloud data with normals as input, defines a signed scalar field by an implicit surface, and extracts its isosurface. FIG. 7 is a drawing-substitute photograph showing the three-dimensional polygon model in 256-level grayscale; FIG. 8 shows it in two-level monochrome.

These figures show the three-dimensional polygon model generated, by the Smooth Signed Distance Surface Reconstruction method disclosed in Non-Patent Document 2, for the tunnel wall surface captured in the photographic images.

Since each polygon is constructed so as to pass through the three-dimensional point cloud, the point cloud and the polygons are associated with each other by an identity mapping.

The generated three-dimensional polygon model is, as it stands, a three-dimensional map expressing the three-dimensional coordinate values of each point on the wall surface inside the tunnel. This three-dimensional map can be used as the wall surface map.

Furthermore, if necessary, a development obtained by unfolding the wall surface inside the tunnel can be used as the wall surface map. The development expresses the two-dimensional coordinate values of each point on the wall surface.

Generating the development reduces to a parameterization problem of embedding the three-dimensional polygons in two dimensions. For example, the Mean Value Coordinates method disclosed in Non-Patent Document 3 assigns a two-dimensional coordinate value to each vertex of the three-dimensional polygons. In this method, the vertices of the polygon data are classified into boundary vertices and internal vertices for parameterization.

First, among the boundary vertices, those forming corners are placed at the corner positions of a planar polygon. The other boundary vertices are placed on straight lines according to the lengths of the boundary edges. FIG. 9 is a drawing-substitute photograph showing the vertices of the three-dimensional polygon model in 256-level grayscale; FIG. 10 shows them in two-level monochrome. On the left side of the figure, the cross section of the polygon mesh P is drawn as an arc, and its end points correspond to boundary vertices.

Meanwhile, each internal vertex v_i is locally parameterized as a linear combination of its neighboring vertices v_{i,j}.

The parameterized coordinate values are obtained from a system of simultaneous equations in which the coordinate values of the internal vertices are the unknowns. Solving this system defines the mapping from the polygons to the development.
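The following is a minimal sketch of such a parameterization, assuming uniform weights for brevity; the Mean Value Coordinates method of Non-Patent Document 3 instead derives the weights from the three-dimensional geometry. NumPy and SciPy are assumed.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def parameterize(n_vertices, neighbours, boundary_uv):
        """neighbours: dict vertex -> adjacent vertices;
        boundary_uv: dict vertex -> fixed (u, v) position in the plane."""
        A = sp.lil_matrix((n_vertices, n_vertices))
        b = np.zeros((n_vertices, 2))
        for i in range(n_vertices):
            A[i, i] = 1.0
            if i in boundary_uv:              # boundary vertex: fixed position
                b[i] = boundary_uv[i]
            else:                             # internal vertex: mean of neighbours
                w = 1.0 / len(neighbours[i])
                for j in neighbours[i]:
                    A[i, j] = -w
        A = A.tocsr()
        u = spla.spsolve(A, b[:, 0])          # solve the simultaneous equations
        v = spla.spsolve(A, b[:, 1])
        return np.stack([u, v], axis=1)       # (u, v) for every vertex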
Using this mapping, the two-dimensional coordinate s_j and the three-dimensional coordinate p_j of a pixel m_j on an input photographic image M_i can be computed as follows. FIG. 11 is an explanatory diagram explaining how points of the three-dimensional polygon model are projected onto the two-dimensional development.

First, a line of sight is defined by the photographing position C_i of the photographic image M_i and the pixel m_j. The position of the intersection of this line of sight with the polygon mesh P is the three-dimensional coordinate p_j. In the following, for ease of understanding, a point is represented by its coordinates where appropriate.

Next, the triangle t_k containing the intersection p_j is obtained, and the barycentric coordinates of p_j within t_k are determined by expressing p_j as a weighted average of the vertices of t_k.

Finally, by combining the obtained barycentric coordinates with the coordinate values of the vertices of the triangle t_k projected onto the development, the coordinate value s_j on the development corresponding to the intersection p_j is acquired. FIG. 12 is a drawing-substitute photograph showing a development of the wall surface inside the tunnel in 256-level grayscale; FIG. 13 shows it in two-level monochrome.
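A sketch of this projection step follows, with illustrative names: given the intersection p_j and the containing triangle t_k, p_j is expressed in barycentric coordinates, which are then reused with the triangle's vertices as projected onto the development.

    import numpy as np

    def barycentric(p, a, b, c):
        """Barycentric coordinates (x, y, z) of p in the 3D triangle (a, b, c)."""
        v0, v1, v2 = b - a, c - a, p - a
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        denom = d00 * d11 - d01 * d01
        y = (d11 * d20 - d01 * d21) / denom
        z = (d00 * d21 - d01 * d20) / denom
        return 1.0 - y - z, y, z

    def to_development(p_j, tri_3d, tri_2d):
        """tri_3d: the triangle t_k in 3D; tri_2d: its vertices on the development."""
        x, y, z = barycentric(p_j, *tri_3d)
        return x * tri_2d[0] + y * tri_2d[1] + z * tri_2d[2]   # s_j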
In this way, the computer computes, for each pixel m_j in a photographic image M_i, the three-dimensional coordinate p_j in the three-dimensional polygon model and the two-dimensional coordinate s_j in the two-dimensional development, thereby obtaining the first mapping (step S204).

The computer then stores the association based on these computation results in a memory, hard disk, or the like as the first mapping from each pixel to the wall surface map, together with the feature point positions and local feature amounts of the plurality of photographic images (step S205), and ends this process.
(Identification Process)

As described above, the initialization process and the identification process are typically executed at different times. The identification process identifies to which position in the wall surface map constructed by the initialization process a search image, obtained by photographing the wall surface inside the tunnel, corresponds. FIG. 14 is a flowchart showing the flow of control of the identification process. A description is given below with reference to this figure.
The identification process is realized by a computer executing a program for the identification process. When the identification process starts, the computer first acquires a search image (step S301).

From the inspector's point of view, it is desirable that, when a search image is taken, the position of the photographed region within the wall surface map be determined immediately. To this end, the search image is desirably acquired from a camera directly connected to the computer.

Then, each time a search image is taken with the camera, the inspector can learn which point in the wall surface map is currently being observed. Also, when a crack or the like is found, photographing it makes it possible to determine whether it is a newly occurring deformation or one discovered in the past.

Next, the computer calculates the feature point positions and local feature amounts in the acquired search image (step S302). The same algorithm is used for this calculation in the initialization process and in the identification process; as described above, SIFT, SURF, or the like can be adopted.

The computer then compares the feature point positions and local feature amounts calculated for the search image with those calculated for the plurality of photographic images in the initialization process, and searches the plurality of photographic images for a photographic image that matches the search image (step S303). This processing yields a second mapping representing the pixel-to-pixel correspondence between the search image and the matching image.

The search is performed as follows. First, feature point pairs are sought, each combining a feature point calculated for the search image with one of the feature points calculated for the plurality of photographic images.

That is, one feature point pair pairs one feature point in the search image with one feature point in one of the plurality of photographic images, such that the local feature amounts calculated for the two feature points are similar to each other.

A photographic image in which many feature points belonging to feature point pairs appear is then regarded as an image matching the search image.

Feature point pairs may include incorrect pairs, but these can be removed by using the RANSAC method.

In coordinate transformation using the RANSAC method, four feature points are first selected at random from one image, the feature points of the other image paired with these four are obtained, and a coordinate transformation that maps one image onto the other is determined by comparing the coordinates of the paired feature points.

Next, this coordinate transformation is applied to the remaining feature points of the one image, and the degree of success with which their coordinates are transformed into the neighborhoods of the corresponding feature points in the other image is evaluated.

This random selection and evaluation of the degree of success are repeated, and the coordinate transformation with the highest degree of success is selected.

Feature point pairs whose coordinates are not transformed into the neighborhoods of their counterparts by the selected transformation are removed as incorrect.

The coordinate transformation determined by this processing corresponds to the second mapping that associates each pixel of the search image with a pixel of one of the plurality of photographic images.
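As a hedged sketch of this step with OpenCV: the four-point coordinate transformation described above corresponds to homography estimation, and cv2.findHomography performs the random sampling and inlier evaluation internally when the RANSAC flag is given. The ratio-test threshold and reprojection tolerance are illustrative.

    import cv2
    import numpy as np

    def match_with_ransac(desc_search, pts_search, desc_photo, pts_photo):
        matcher = cv2.BFMatcher()
        knn = matcher.knnMatch(desc_search, desc_photo, k=2)
        good = [p[0] for p in knn
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) < 4:                      # four pairs needed for a homography
            return None, []
        src = np.float32([pts_search[m.queryIdx] for m in good]).reshape(-1, 1, 2)
        dst = np.float32([pts_photo[m.trainIdx] for m in good]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None, []
        inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
        return H, inliers                      # H approximates the second mapping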
A second mapping with even higher accuracy can be obtained by constructing a Delaunay triangulation from the feature points of the search image and applying its connectivity to the feature points of the matched image.

That is, a triangle mesh is generated by Delaunay triangulation over the feature points of the search image that belong to feature point pairs, and the triangular regions bounded by the feature points are parameterized.

Applying the topology of the obtained mesh directly to the matched image constructs corresponding triangles in the matching region.

Using the three vertices t_0, t_1, t_2 of a triangle T in the search image, the barycentric coordinates (x, y, z) of a point p inside T are uniquely defined by

  p = x·t_0 + y·t_1 + z·t_2.

When a feature point pair associates the triangle T (vertices t_0, t_1, t_2) in the search image with the triangle T' (vertices t'_0, t'_1, t'_2) in the matched image, the point p inside T is mapped to the point p' inside T' by the coordinate transformation

  p' = x·t'_0 + y·t'_1 + z·t'_2.

By defining the second mapping on the basis of the triangles bounded by the feature points in this way, a more accurate second mapping can be obtained.
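A sketch of this triangle-based refinement, assuming SciPy's Delaunay triangulation (names illustrative):

    import numpy as np
    from scipy.spatial import Delaunay

    def map_point(p, pts_search, pts_match):
        """pts_search[i] and pts_match[i] form a matched feature point pair (2D)."""
        tri = Delaunay(pts_search)
        s = tri.find_simplex(p)
        if s < 0:
            return None                        # p lies outside every triangle
        T = tri.transform[s]                   # affine map to barycentric coordinates
        bc = T[:2] @ (np.asarray(p) - T[2])
        xyz = np.append(bc, 1.0 - bc.sum())    # (x, y, z) with x + y + z = 1
        verts = tri.simplices[s]               # vertex indices of the triangle T
        return xyz @ np.asarray(pts_match)[verts]   # p' = x·t'_0 + y·t'_1 + z·t'_2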
Once the second mapping is obtained, the computer applies the second mapping and then the first mapping to each pixel in the search image, and computes its coordinate value in the wall surface map (step S304). As described above, the coordinate values obtained here may be three-dimensional or two-dimensional.

Finally, the computer outputs the coordinate values computed for each pixel of the search image as positions in the wall surface map (step S305), and ends this process.

Instead of outputting the computed position for every pixel, the device may be configured so that, when the inspector designates a desired pixel in the search image, or a pixel in which a deformation estimated by image recognition is drawn is given, the position in the wall surface map for that pixel alone is computed and output. In this case, most of the processing of applying the first and second mappings to all pixels of the search image can be omitted, so the processing can be accelerated.
(Use of the GPU)

In the above computation, for example, computing the mapping destination p' of a point p requires finding the triangle T that contains p. Which triangle contains the point p is not self-evident.
A naive method that checks containment against every triangle takes computation time proportional to the number of triangles and is inefficient.

There is also a method of starting from an arbitrary vertex and walking through adjacent triangles in the direction of p, but it requires holding the adjacency relation in a graph structure.

Therefore, in the present embodiment, an OpenGL FBO is rendered using the GPU, and by referring to the result, simple, fast, per-pixel computation of the mappings is realized robustly.

For example, each point on the surface of every polygon in the three-dimensional polygon model is given, as its color, an RGB (Red Green Blue) encoding of either the three-dimensional coordinates of that point or its two-dimensional coordinates in the development.

The three-dimensional polygon model is then perspective-projected from the photographing position and photographing direction obtained as the SfM result. The rendering result is an image with the same composition as the original photographic image, in which the color of each pixel is the wall-surface-map coordinates converted into a color. The first mapping can easily be represented by this image.

Also, when the search image and the matched image are triangulated, mutually distinct colors are given to the triangle vertices, and the interior of each triangle is given a color interpolated from the vertex colors according to the barycentric coordinates. For example, the two-dimensional coordinate value of a vertex converted into a color can be used as the vertex color.

When a triangle is then transformed from one image to the other by the coordinate transformation, the vertex colors or their linear combinations are drawn inside the triangle, and no color is drawn outside it.

In this way, whether a point lies inside or outside a triangle can easily be determined by the presence or absence of color.

In OpenGL, the RGB components of a color are each represented by a floating-point number between 0 and 1. A three-dimensional coordinate value can therefore be converted into a color by normalizing each of its three components into the range 0 to 1 and assigning them to the R, G, and B components. For a two-dimensional coordinate value, the simplest approach is to use only two of the R, G, and B components.
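A minimal sketch of this encoding and its inverse, assuming the coordinates are normalized against a known bounding box (lo, hi) of the model:

    import numpy as np

    def encode_as_color(coords, lo, hi):
        """coords: (n, 3) coordinates -> (n, 3) RGB values in [0, 1]."""
        return (coords - lo) / (hi - lo)

    def decode_from_color(rgb, lo, hi):
        """Invert encode_as_color(): recover coordinates from rendered colors."""
        return rgb * (hi - lo) + lo

Because the FBO stores these values in floating point, the round trip preserves the coordinate precision.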
(Use of a Laser Scanner)

A laser scanner conventionally used for tunnel wall inspection, for example as disclosed in Patent Document 1, can measure cracks on the order of 1 mm or larger from an automobile traveling at 50 km/h. However, detecting finer cracks and steps on the wall surface inside a tunnel, and the presence or absence of moisture, requires a laser scanner of higher definition and higher precision. For example, detecting cracks of about 0.2 mm to 0.3 mm and steps of about 0.1 mm from a distance of about 5 m requires a horizontal resolution of 0.2 mm or less and a depth (ranging) resolution of 0.1 mm or less.
A high-definition, high-precision laser scanner can acquire more detailed information than photographing moving images or still images. However, because such measurement takes time, it is practically difficult to inspect the entire wall surface inside a tunnel with a high-definition, high-precision laser scanner.

Also, regardless of resolution, it is difficult in laser scanner measurement to obtain the photographing position, photographing direction, and the position of the measured target, just as in inspections that photograph moving images or still images.

Therefore, when a deformation that must be monitored continuously, together with its surroundings, is taken as a measurement range and measured with a laser scanner, applying the above embodiment makes it possible to determine where in the wall surface map the range measured by the laser scanner lies. This is described below.

In this method, when the search image is taken, the camera continuously photographs the wall surface inside the tunnel while keeping the photographing position and photographing direction approximately fixed, within a predetermined error range.

Partway through this continuous shooting, the laser light of the laser scanner is directed at the measurement range.

The photographs obtained in the first half of the continuous shooting then show the wall surface as it is, while those obtained in the second half show part of the wall surface brightened by the laser light.

The photographs obtained in the first half of the continuous shooting are therefore used as the search image described above. The photographs obtained in the second half represent the area scanned by the laser scanner and are hereinafter called scan images. A scan image can be superimposed almost exactly on the search image, so the correspondence between their pixels can easily be determined.
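As a purely illustrative sketch (not part of the embodiment): since the two frames are nearly superimposable, the laser-lit scan region can be isolated by a simple brightness difference, with the threshold value an assumption.

    import cv2

    def scan_region_mask(search_img_path, scan_img_path, thresh=40):
        a = cv2.imread(search_img_path, cv2.IMREAD_GRAYSCALE)
        b = cv2.imread(scan_img_path, cv2.IMREAD_GRAYSCALE)
        diff = cv2.absdiff(b, a)               # laser-lit pixels become bright
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return mask                            # nonzero where the laser scanned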
Even when the deviation in photographing position or direction is somewhat large, the correspondence between each pixel of the search image and the scan image can be determined by using the feature point extraction and matching techniques of SfM. This correspondence is called the third mapping.

Then, by applying the third mapping, the second mapping, and the first mapping in turn to each point of the measurement range in the scan image, its position in the wall surface map can easily be obtained.

By taking the search image first and the scan image afterwards, the influence of afterimages in the camera can be avoided and the continuous shooting time can be shortened.
(Experiment 1)

The results of an experiment in which the present embodiment was applied to part of a simulated tunnel at the Construction Technology Research Institute in Fuji City, Shizuoka Prefecture, are described below.
A Panasonic DMC-GF6 was used as the camera, and 250 photographic images were taken so as to cover the entire wall surface inside the tunnel. The resolution of each photographic image is 1148 x 862 pixels. A three-dimensional model was generated from these photographic images.

Using the same camera, a search image with a resolution of 4592 x 3598 pixels was also taken. The search image is a close-up of the wall surface; its resolution is higher than that of the photographic images taken during initialization, but its field of view is narrower.

FIG. 15 is a drawing-substitute photograph showing an example of the search image in Experiment 1 in 256-level grayscale; FIG. 16 shows it in two-level monochrome. In the search images shown in these figures, a horizontally wide T-shaped crack is drawn with emphasis.

FIG. 17 is a drawing-substitute photograph showing, in 256-level grayscale, the photographic image that matches the search image in Experiment 1; FIG. 18 shows it in two-level monochrome. This is the photographic image (matched image), among the photographic images used at initialization, that matches the search image. A horizontally wide T-shaped crack exists in the matched image and is drawn with emphasis. The pattern of the wall surface around the T-shaped crack coincides between the search image and the matched image.

FIG. 19 is a drawing-substitute photograph showing, in 256-level grayscale, the three-dimensional polygon model in which the search image was located in Experiment 1; FIG. 20 shows it in two-level monochrome. The photographic images referenced at initialization are pasted onto this three-dimensional polygon model as textures; within the U-shape of the tunnel rim, a position having the same pattern as the search image is identified as the search result, and the T-shaped crack is mapped there.

As these figures show, the crack information in the search image was identified at almost the same position on the corresponding three-dimensional polygon model. Although the positional accuracy depends on the accuracy of the SfM data, the relative positional relationships are preserved, so the position of the search image on the tunnel wall surface can be identified appropriately.
(Experiment 2)

The results of an experiment on a simulated tunnel (total length 80 m) at the above research institute, adopting OpenMVG for SfM, libigl for parameterization, and SSD (Smooth Signed Distance surface reconstruction) for surface reconstruction, are described below.
For the reconstruction by SfM, 572 images were used. Each image has a size of 1124 x 750 pixels and was taken with a Nikon (registered trademark) D5500 camera. Taking all of the photographs required about one hour.

For the image search, cracks were first photographed with a Panasonic (registered trademark) DMC-GF6 camera, and five photographs of 1148 x 862 pixels were acquired. FIGS. 21A to 21E are drawing-substitute photographs showing, in 256-level grayscale, examples of the crack photographic images A to E to be searched for in Experiment 2; FIGS. 22A to 22E show the same images in two-level monochrome. The photographs actually taken are color images.

Next, the cracks were extracted manually from each photograph to produce the search images. FIGS. 23A to 23E are drawing-substitute photographs showing, in two-level monochrome, the search images for the photographic images A to E in Experiment 2. As these figures show, the search images are monochrome images.

Then, photographic images matching the search images were retrieved from the 572 photographs by the method described above. FIGS. 25A to 25E are drawing-substitute photographs showing, in 256-level grayscale, the photographic images A to E that matched the search images A to E in Experiment 2; FIGS. 24A to 24E show the same images in two-level monochrome. The photographs actually taken are color images.

In this experiment, the above search method was run on a MacBook (registered trademark) Pro (2016, Core i7 2.9 GHz, 16 GB RAM). FIG. 27 is a drawing-substitute photograph showing, enlarged and in two-level monochrome, the result of locating the search images A to E on a development of the wall surface inside the tunnel in Experiment 2; FIG. 26 shows the same result in 256-level grayscale. These figures are developments of the inner surface of the tunnel; the positions of the photographic images found as search results are circled with ellipses, with the cracks shown enlarged beside them. Loading the data required 6.7 seconds, and the search itself took 16.7 seconds on average. Searching for one search image therefore took 23.4 seconds on average, showing that the search can be performed in a practical amount of time.

Reconstructing the three-dimensional model required about three hours. FIG. 29 is a drawing-substitute photograph showing, in two-level monochrome, the three-dimensional polygon model in which the search images A to E were located in Experiment 2; FIG. 28 shows the same model in 256-level grayscale. Since each point in these figures is associated with a point on the development, the state of the cracked locations can be confirmed three-dimensionally. The reconstruction of the three-dimensional model is also accomplished in a reasonable time.
(Summary)

As described above, the identification device according to the present embodiment comprises:

a first acquisition unit that acquires a plurality of photographic images in which a wall surface inside a tunnel has been photographed;

a first calculation unit that calculates the positions of feature points and the local feature amounts of each of the plurality of photographic images;

a construction unit that constructs a three-dimensional polygon model of the wall surface with reference to the feature point positions and local feature amounts calculated for each of the plurality of photographic images, and generates a wall surface map based on the constructed three-dimensional polygon model;

a first mapping unit that associates each pixel of each of the plurality of photographic images with a position in the wall surface map;

a second acquisition unit that acquires a search image in which the wall surface inside the tunnel has been newly photographed;

a second calculation unit that calculates the positions of feature points and the local feature amounts of the acquired search image;

a second mapping unit that associates each pixel of the search image with one of the pixels of the plurality of photographic images by comparing the feature point positions and local feature amounts calculated for the search image with those calculated for each of the plurality of photographic images; and

an output unit that outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  The identification device according to the present embodiment can also be configured so that:
  the second acquisition unit acquires, following the capture of the search image and with the shooting position and shooting direction maintained within a predetermined error range, a scan image captured by continuous shooting, in which a scan region of the wall surface is photographed while being scanned by a laser scanner;
  the second mapping unit associates each pixel of the scan image with a pixel of the search image by comparing the scan image with the search image; and
  the output unit outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with the pixel of the search image that is associated with each pixel within the scan region captured in the scan image.
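  Once the three correspondence tables of this variant exist, resolving a laser-scanned spot to a wall position is simply a chain of lookups: scan pixel → search-image pixel → photographic-image pixel → wall-map position. A minimal sketch follows, with all table names hypothetical:

```python
def locate_scan_pixel(scan_px, scan_to_search, search_to_photo, photo_to_wall):
    """Follow the chain of associations built by the second mapping unit
    and the first mapping unit to output a wall-surface-map position.
    Each argument after scan_px is a dict from pixel to pixel/position."""
    search_px = scan_to_search[scan_px]    # scan image -> search image
    photo_px = search_to_photo[search_px]  # search image -> stored photo
    return photo_to_wall[photo_px]         # stored photo -> wall map
```

  For example, photo_to_wall might map (photo_id, (u, v)) pairs to (x, y, z) coordinates in the three-dimensional wall surface map.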
  In the identification device according to the present embodiment, the wall surface map can be configured to be a three-dimensional map that expresses three-dimensional coordinate values of the wall surface by means of the three-dimensional polygon model.
  Alternatively, in the identification device according to the present embodiment, the wall surface map can be configured to be a two-dimensional map that expresses two-dimensional coordinate values of the wall surface by means of a development (an unrolled view) of the three-dimensional polygon model.
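  As one concrete illustration of such a development: if the tunnel wall is approximated as a cylinder around the z-axis, each 3D wall point can be flattened to (distance along the tunnel, unrolled arc length around the cross-section). This is only a sketch under that cylinder assumption; the development of the actual polygon model need not assume a perfect cylinder.

```python
import numpy as np

def develop_point(point, radius):
    """Flatten a 3D wall point to 2D development coordinates (s, t),
    assuming the tunnel axis is the z-axis and the wall is (roughly)
    a cylinder of the given radius."""
    x, y, z = point
    s = z                     # position along the tunnel axis
    theta = np.arctan2(y, x)  # angle around the cross-section, in (-pi, pi]
    t = radius * theta        # unrolled circumferential arc length
    return s, t
```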
  The identification device according to the present embodiment can further be configured so that the first mapping unit associates each pixel of each of the plurality of photographic images with a position in the wall surface map by:
  converting the coordinate values into color information;
  assigning the converted color information as the colors of the points of the three-dimensional polygon model associated with those coordinate values;
  rendering the color-assigned three-dimensional polygon model from the shooting position and shooting direction at which each of the plurality of photographic images was captured, thereby generating a corresponding image of the same size as each of the plurality of photographic images; and
  converting the color drawn at each pixel of the generated corresponding image back into coordinate values.
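  The appeal of this color detour is that the rasterizer, while interpolating vertex "colors" across each polygon, is in effect computing a per-pixel wall coordinate for free. Below is a minimal sketch of the two conversion ends of that round trip, assuming coordinates are normalized within a known bounding box and packed one axis per 8-bit RGB channel; the embodiment's actual precision and packing scheme are not specified in this summary.

```python
import numpy as np

def coords_to_color(xyz, box_min, box_max):
    """Encode a 3D wall coordinate as an 8-bit RGB triple by normalizing
    each axis to [0, 255] within a known bounding box."""
    lo, hi = np.asarray(box_min, float), np.asarray(box_max, float)
    t = (np.asarray(xyz, float) - lo) / (hi - lo)  # normalize to [0, 1]
    return np.round(t * 255).astype(np.uint8)      # one axis per channel

def color_to_coords(rgb, box_min, box_max):
    """Decode the color rendered at a pixel of the corresponding image
    back into the (quantized) 3D wall coordinate."""
    lo, hi = np.asarray(box_min, float), np.asarray(box_max, float)
    return lo + (np.asarray(rgb, float) / 255.0) * (hi - lo)
```

  With 8 bits per channel, the round trip quantizes each axis to 1/255 of the bounding box, so a practical implementation would more likely render into a higher-precision target (for example, a floating-point texture) and must disable lighting so that the rendered color is exactly the assigned one.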
  In the identification device according to the present embodiment, the rendering can be configured to be executed by a GPU (Graphics Processing Unit).
  In the identification method according to the present embodiment, an identification device:
  acquires a plurality of photographic images in which a wall surface inside a tunnel is captured;
  computes the positions of feature points and the local feature descriptors of each of the plurality of photographic images;
  builds a three-dimensional polygon model of the wall surface by referring to the feature point positions and local feature descriptors computed for each of the plurality of photographic images, and generates a wall surface map based on the constructed three-dimensional polygon model;
  associates each pixel of each of the plurality of photographic images with a position in the wall surface map;
  acquires a search image in which the wall surface inside the tunnel is newly photographed;
  computes the positions of feature points and the local feature descriptors of the acquired search image;
  associates each pixel of the search image with a pixel of one of the plurality of photographic images by comparing the feature point positions and local feature descriptors computed for the search image with those computed for each of the plurality of photographic images; and
  outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  The program according to the present embodiment comprises:
  a first program that causes a first computer to function as:
  a first acquisition unit that acquires a plurality of photographic images in which a wall surface inside a tunnel is captured,
  a first calculation unit that computes the positions of feature points and the local feature descriptors of each of the plurality of photographic images,
  a construction unit that builds a three-dimensional polygon model of the wall surface by referring to the feature point positions and local feature descriptors computed for each of the plurality of photographic images and generates a wall surface map based on the constructed three-dimensional polygon model, and
  a first mapping unit that associates each pixel of each of the plurality of photographic images with a position in the wall surface map; and
  a second program that causes a second computer or the first computer to function as:
  a second acquisition unit that acquires a search image in which the wall surface inside the tunnel is newly photographed,
  a second calculation unit that computes the positions of feature points and the local feature descriptors of the acquired search image,
  a second mapping unit that associates each pixel of the search image with a pixel of one of the plurality of photographic images by comparing the feature point positions and local feature descriptors computed for the search image with those computed for each of the plurality of photographic images, and
  an output unit that outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  The program can be recorded on a non-transitory computer-readable information recording medium for distribution and sale. It can also be distributed and sold via a temporary transmission medium such as a computer communication network.
  Various embodiments and modifications of the present invention are possible without departing from its broad spirit and scope. The embodiments described above serve to illustrate the present invention and do not limit its scope; the scope of the present invention is indicated not by the embodiments but by the claims, and various modifications made within the claims, and within the meaning of inventions equivalent thereto, are regarded as falling within the scope of the present invention.
  This application claims priority based on Japanese Patent Application No. 2017-033771, filed in Japan on February 24, 2017 (Friday); the contents of that basic application are incorporated into the present application to the extent permitted by the laws of the designated states.
  According to the present invention, an identification device, an identification method, and a program for identifying the position of a wall surface inside a tunnel appearing in a photographic image can be provided.
  101 identification device
  111 first acquisition unit
  112 first calculation unit
  113 construction unit
  114 first mapping unit
  121 second acquisition unit
  122 second calculation unit
  124 second mapping unit
  125 output unit

Claims (8)

  1.   An identification device comprising:
      a first acquisition unit that acquires a plurality of photographic images in which a wall surface inside a tunnel is captured;
      a first calculation unit that computes the positions of feature points and the local feature descriptors of each of the plurality of photographic images;
      a construction unit that builds a three-dimensional polygon model of the wall surface by referring to the feature point positions and local feature descriptors computed for each of the plurality of photographic images, and generates a wall surface map based on the constructed three-dimensional polygon model;
      a first mapping unit that associates each pixel of each of the plurality of photographic images with a position in the wall surface map;
      a second acquisition unit that acquires a search image in which the wall surface inside the tunnel is newly photographed;
      a second calculation unit that computes the positions of feature points and the local feature descriptors of the acquired search image;
      a second mapping unit that associates each pixel of the search image with a pixel of one of the plurality of photographic images by comparing the feature point positions and local feature descriptors computed for the search image with those computed for each of the plurality of photographic images; and
      an output unit that outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  2.   The identification device according to claim 1, wherein:
      the second acquisition unit acquires, following the capture of the search image and with the shooting position and shooting direction maintained within a predetermined error range, a scan image captured by continuous shooting, in which a scan region of the wall surface is photographed while being scanned by a laser scanner;
      the second mapping unit associates each pixel of the scan image with a pixel of the search image by comparing the scan image with the search image; and
      the output unit outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with the pixel of the search image that is associated with each pixel within the scan region captured in the scan image.
  3.   The identification device according to claim 1, wherein the wall surface map is a three-dimensional map that expresses three-dimensional coordinate values of the wall surface by means of the three-dimensional polygon model.
  4.   The identification device according to claim 1, wherein the wall surface map is a two-dimensional map that expresses two-dimensional coordinate values of the wall surface by means of a development of the three-dimensional polygon model.
  5.   The identification device according to claim 3 or 4, wherein the first mapping unit associates each pixel of each of the plurality of photographic images with a position in the wall surface map by:
      converting the coordinate values into color information;
      assigning the converted color information as the colors of the points of the three-dimensional polygon model associated with those coordinate values;
      rendering the color-assigned three-dimensional polygon model from the shooting position and shooting direction at which each of the plurality of photographic images was captured, thereby generating a corresponding image of the same size as each of the plurality of photographic images; and
      converting the color drawn at each pixel of the generated corresponding image back into coordinate values.
  6.   The identification device according to claim 5, wherein the rendering is executed by a GPU (Graphics Processing Unit).
  7.   An identification method in which an identification device:
      acquires a plurality of photographic images in which a wall surface inside a tunnel is captured;
      computes the positions of feature points and the local feature descriptors of each of the plurality of photographic images;
      builds a three-dimensional polygon model of the wall surface by referring to the feature point positions and local feature descriptors computed for each of the plurality of photographic images, and generates a wall surface map based on the constructed three-dimensional polygon model;
      associates each pixel of each of the plurality of photographic images with a position in the wall surface map;
      acquires a search image in which the wall surface inside the tunnel is newly photographed;
      computes the positions of feature points and the local feature descriptors of the acquired search image;
      associates each pixel of the search image with a pixel of one of the plurality of photographic images by comparing the feature point positions and local feature descriptors computed for the search image with those computed for each of the plurality of photographic images; and
      outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
  8.   A program comprising:
      a first program that causes a first computer to function as:
      a first acquisition unit that acquires a plurality of photographic images in which a wall surface inside a tunnel is captured,
      a first calculation unit that computes the positions of feature points and the local feature descriptors of each of the plurality of photographic images,
      a construction unit that builds a three-dimensional polygon model of the wall surface by referring to the feature point positions and local feature descriptors computed for each of the plurality of photographic images and generates a wall surface map based on the constructed three-dimensional polygon model, and
      a first mapping unit that associates each pixel of each of the plurality of photographic images with a position in the wall surface map; and
      a second program that causes a second computer or the first computer to function as:
      a second acquisition unit that acquires a search image in which the wall surface inside the tunnel is newly photographed,
      a second calculation unit that computes the positions of feature points and the local feature descriptors of the acquired search image,
      a second mapping unit that associates each pixel of the search image with a pixel of one of the plurality of photographic images by comparing the feature point positions and local feature descriptors computed for the search image with those computed for each of the plurality of photographic images, and
      an output unit that outputs the position in the wall surface map associated with the pixel of the plurality of photographic images that is associated with each pixel of the search image.
PCT/JP2018/006576 2017-02-24 2018-02-22 Identifying device, identifying method and program for identifying position of wall surface inside tunnel appearing in photographic image WO2018155590A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019501810A JP7045721B2 (en) 2017-02-24 2018-02-22 Identification device, identification method, and program to identify the position of the wall surface in the tunnel shown in the photographic image.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017033771 2017-02-24
JP2017-033771 2017-02-24

Publications (1)

Publication Number Publication Date
WO2018155590A1 true WO2018155590A1 (en) 2018-08-30

Family

ID=63253896

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/006576 WO2018155590A1 (en) 2017-02-24 2018-02-22 Identifying device, identifying method and program for identifying position of wall surface inside tunnel appearing in photographic image

Country Status (2)

Country Link
JP (1) JP7045721B2 (en)
WO (1) WO2018155590A1 (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005070840A (en) * 2003-08-25 2005-03-17 East Japan Railway Co Three dimensional model preparing device, three dimensional model preparing method and three dimensional model preparing program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1176393A2 (en) * 2000-07-17 2002-01-30 Inco Limited Self-contained mapping and positioning system utilizing point cloud data
CN102564393A (en) * 2011-12-28 2012-07-11 北京工业大学 Method for monitoring and measuring full section of tunnel through three-dimensional laser
JP2017503100A (en) * 2014-01-14 2017-01-26 サンドヴィック マイニング アンド コンストラクション オーワイ Mining mine vehicle and method of starting mining mine task
JP2017020972A (en) * 2015-07-14 2017-01-26 東急建設株式会社 Three-dimensional shape measurement device, three-dimensional shape measurement method, and program
JP2017129508A (en) * 2016-01-22 2017-07-27 三菱電機株式会社 Self-location estimation system, self-location estimation method, mobile terminal, server and self-location estimation program

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020158726A1 (en) * 2019-01-31 2020-08-06 富士フイルム株式会社 Image processing device, image processing method, and program
JP2020153873A (en) * 2019-03-20 2020-09-24 株式会社リコー Diagnosis processing device, diagnosis system, diagnosis processing method, and program
JP7205332B2 (en) 2019-03-20 2023-01-17 株式会社リコー Diagnostic processing device, diagnostic system, diagnostic processing method, and program
JP6584735B1 (en) * 2019-03-25 2019-10-02 三菱電機株式会社 Image generation apparatus, image generation method, and image generation program
WO2020194470A1 (en) * 2019-03-25 2020-10-01 三菱電機株式会社 Image generation device, image generation method, and image generation program
JP2023503426A (en) * 2019-11-19 2023-01-30 サクミ コオペラティヴァ メッカニチ イモラ ソシエタ コオペラティヴァ Device for optical inspection of sanitary ware
JP7450032B2 (en) 2019-11-19 2024-03-14 サクミ コオペラティヴァ メッカニチ イモラ ソシエタ コオペラティヴァ Equipment for optical inspection of sanitary ware
JP7197218B1 (en) 2021-06-15 2022-12-27 ジビル調査設計株式会社 Structure inspection device
JP2023002856A (en) * 2021-06-15 2023-01-11 ジビル調査設計株式会社 Structure inspection device
CN114692272A (en) * 2022-03-25 2022-07-01 中南大学 Method for automatically generating three-dimensional parameterized tunnel model based on two-dimensional design drawing
CN114943706A (en) * 2022-05-27 2022-08-26 宁波艾腾湃智能科技有限公司 Anti-counterfeiting authentication of plane works or products in absolute two-dimensional space state
CN114943706B (en) * 2022-05-27 2023-04-07 宁波艾腾湃智能科技有限公司 Anti-counterfeiting authentication of planar works or products in absolute two-dimensional space state

Also Published As

Publication number Publication date
JPWO2018155590A1 (en) 2019-12-12
JP7045721B2 (en) 2022-04-01

Similar Documents

Publication Publication Date Title
WO2018155590A1 (en) Identifying device, identifying method and program for identifying position of wall surface inside tunnel appearing in photographic image
Lee et al. Skeleton-based 3D reconstruction of as-built pipelines from laser-scan data
Lattanzi et al. 3D scene reconstruction for robotic bridge inspection
Huang et al. Semantics-aided 3D change detection on construction sites using UAV-based photogrammetric point clouds
KR102113068B1 (en) Method for Automatic Construction of Numerical Digital Map and High Definition Map
US20160133008A1 (en) Crack data collection method and crack data collection program
CN104574393A (en) Three-dimensional pavement crack image generation system and method
Guarnieri et al. Digital photogrammetry and laser scanning in cultural heritage survey
JP2016090547A (en) Crack information collection device and server apparatus to collect crack information
JP4568845B2 (en) Change area recognition device
JP2016217941A (en) Three-dimensional evaluation device, three-dimensional data measurement system and three-dimensional measurement method
US20220405878A1 (en) Image processing apparatus, image processing method, and image processing program
CN103424087B (en) A kind of large-scale steel plate three-dimensional measurement joining method
Dufour et al. 3D surface measurements with isogeometric stereocorrelation—application to complex shapes
WO2021014807A1 (en) Information processing apparatus, information processing method, and program
Yilmazturk et al. Geometric evaluation of mobile-phone camera images for 3D information
Zhang et al. Structure-from-motion based image unwrapping and stitching for small bore pipe inspections
US11423611B2 (en) Techniques for creating, organizing, integrating, and using georeferenced data structures for civil infrastructure asset management
JP7427615B2 (en) Information processing device, information processing method and program
JP6822086B2 (en) Simulation equipment, simulation method and simulation program
JP2006202152A (en) Image processor, image processing method and program used therefor
JP2006172099A (en) Changed region recognition device and change recognition system
Kolyvas et al. Application of photogrammetry techniques for the visual assessment of vessels’ cargo hold
Nikolov et al. Performance Characterization of Absolute Scale Computation for 3D Structure from Motion Reconstruction
JP7410387B2 (en) Accessory installation position inspection method and installation position inspection device

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18756805; Country of ref document: EP; Kind code of ref document: A1)

ENP Entry into the national phase (Ref document number: 2019501810; Country of ref document: JP; Kind code of ref document: A)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 18756805; Country of ref document: EP; Kind code of ref document: A1)