WO2022209709A1 - Image processing device, image processing method, and image processing program
- Publication number
- WO2022209709A1 (PCT/JP2022/010616)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- patch
- image processing
- damage
- camera
- Prior art date
Links
- 238000012545 processing Methods 0.000 title claims abstract description 110
- 238000003672 processing method Methods 0.000 title claims abstract description 7
- 238000012937 correction Methods 0.000 claims abstract description 54
- 238000005259 measurement Methods 0.000 claims abstract description 38
- 239000000284 extract Substances 0.000 claims description 3
- 238000003384 imaging method Methods 0.000 abstract description 5
- 238000000034 method Methods 0.000 description 91
- 238000010586 diagram Methods 0.000 description 23
- 230000006870 function Effects 0.000 description 19
- 238000010187 selection method Methods 0.000 description 15
- 238000004364 calculation method Methods 0.000 description 7
- 238000001514 detection method Methods 0.000 description 6
- 238000005516 engineering process Methods 0.000 description 5
- 238000004891 communication Methods 0.000 description 3
- 239000004567 concrete Substances 0.000 description 3
- 230000006866 deterioration Effects 0.000 description 3
- 230000005484 gravity Effects 0.000 description 3
- 238000013507 mapping Methods 0.000 description 3
- 238000000691 measurement method Methods 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 238000013528 artificial neural network Methods 0.000 description 2
- 238000013527 convolutional neural network Methods 0.000 description 2
- 238000007689 inspection Methods 0.000 description 2
- 239000004571 lime Substances 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 238000010801 machine learning Methods 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 230000007797 corrosion Effects 0.000 description 1
- 238000005260 corrosion Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000032798 delamination Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- iron(III) oxide Inorganic materials 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000000306 recurrent effect Effects 0.000 description 1
- 239000011150 reinforced concrete Substances 0.000 description 1
- 230000003014 reinforcing effect Effects 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- water Substances 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/02—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/579—Depth or shape recovery from multiple images from motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/9515—Objects of complex shape, e.g. examined with use of a surface follower device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30132—Masonry; Concrete
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30184—Infrastructure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/56—Particle system, point based geometry or rendering
Definitions
- The present invention relates to an image processing device, an image processing method, and an image processing program, and more particularly to an image processing device, image processing method, and image processing program that process an image of a structure, detect damage on the surface of the structure, and measure the size of the damage.
- Patent Literature 1 describes a technique for detecting appearance deterioration such as cracks from an image of a structure (a standing concrete pole) and measuring its size (length and width). To obtain an accurate size, it describes correcting the measurement result according to the position of the appearance deterioration.
- Patent Literature 1, however, has the disadvantage that the position, angle, and so on of the appearance deterioration must be measured in detail to obtain the information for correcting the measurement result, which requires a great deal of labor.
- the present invention has been made in view of such circumstances, and an object thereof is to provide an image processing device, an image processing method, and an image processing program that can easily measure the accurate size of damage.
- An image processing device comprising a processor, wherein the processor performs: a process of acquiring a plurality of images of a subject photographed by a camera from a plurality of viewpoints with overlapping photographing ranges; a process of detecting damage on the surface of the subject from the acquired plurality of images; a process of analyzing the acquired plurality of images to generate three-dimensional point cloud data of feature points; a process of generating a three-dimensional patch model of the subject based on the generated point cloud data; a process of selecting, from among the acquired plurality of images, an image corresponding to each patch of the three-dimensional patch model; a process of measuring the size of the damage in the area corresponding to the patch in the selected image; a process of generating, based on the camera vector a connecting the position of the camera when the selected image was captured and the patch and the normal vector b of the patch, correction information necessary for correcting the measurement result of the damage size; and a process of correcting the measurement result of the damage size based on the generated correction information.
- The image processing device of (2) or (3), wherein the processor corrects the damage size measurement result by dividing the damage size measurement result by the cosine value cos θ.
- the processor selects an image having the smallest angle ⁇ between the camera vector a and the normal vector b in the process of selecting the image corresponding to the patch.
- The image processing device of (12), wherein the processor further performs a process of extracting patches with continuous cracks and measuring the length of a continuous crack by summing the corrected crack lengths obtained in each extracted patch.
- (14) An image processing method comprising: a step of acquiring a plurality of images obtained by photographing a subject with a camera from a plurality of viewpoints with overlapping photographing ranges; a step of detecting damage on the surface of the subject from the acquired plurality of images; a step of analyzing the acquired plurality of images to generate three-dimensional point cloud data of feature points; a step of generating a three-dimensional patch model of the subject based on the generated point cloud data; a step of selecting, from among the acquired plurality of images, an image corresponding to each patch of the three-dimensional patch model; a step of measuring the size of the damage in the region corresponding to the patch in the selected image; a step of generating, based on the camera vector a connecting the position of the camera when the selected image was captured and the patch and the normal vector b of the patch, correction information necessary for correcting the damage size measurement result; and a step of correcting the damage size measurement result based on the generated correction information.
- An image processing program that causes a computer to realize: a function of acquiring a plurality of images of a subject captured from a plurality of viewpoints with overlapping shooting ranges; a function of detecting damage on the surface of the subject from the acquired plurality of images; a function of analyzing the acquired plurality of images to generate three-dimensional point cloud data of feature points; a function of generating a three-dimensional patch model of the subject based on the generated point cloud data; a function of selecting, from among the acquired plurality of images, an image corresponding to each patch of the three-dimensional patch model; a function of measuring the size of the damage in the region corresponding to the patch in the selected image; a function of generating correction information necessary for correcting the damage size measurement result; and a function of correcting the damage size measurement result based on the generated correction information.
- the exact size of damage can be easily measured.
- FIG. 1 is a diagram showing the schematic configuration of a system for measuring crack width
- FIG. 2 is a conceptual diagram of shooting
- FIG. 3 is a block diagram showing an example of the hardware configuration of the image processing device
- FIG. 4 is a block diagram of the main functions of the image processing device
- FIG. 5 is a diagram showing the outline of SfM
- FIG. 6 is a flowchart outlining the procedure of SfM processing
- FIG. 7 is a diagram showing an example of a TIN model
- FIG. 8 is a diagram showing an example of a three-dimensional shape model
- FIG. 9 is a conceptual diagram of crack width correction
- FIG. 10 is a conceptual diagram of correction value calculation
- FIG. 11 is a flowchart showing the procedure of image processing by the image processing device
- FIG. 12 is a block diagram of the main functions of the image processing device of the second embodiment
- FIG. 13 is a conceptual diagram of image selection
- FIG. 14 is a flowchart showing the procedure of image selection processing in the image selection unit
- FIG. 15 is a conceptual diagram of a crack length measurement method
- FIG. 16 is a flowchart showing the procedure for measuring the size of damage without correction
- a pier is an example of a structure made of concrete, in particular of reinforced concrete. Cracks are an example of damage.
- the crack width is an example of the damage size.
- FIG. 1 is a diagram showing a schematic configuration of a system for measuring crack width.
- the system 1 of the present embodiment includes a camera 10 and an image processing device 100 that processes an image captured by the camera 10.
- the camera 10 is for photographing the object to be measured.
- A general digital camera, including one mounted on a mobile terminal or the like, can be used as the camera 10.
- cracks generated on the surface of the pier Ob are to be measured.
- photography is performed with the bridge pier Ob as the subject.
- FIG. 2 is a conceptual diagram of shooting.
- The rectangular area indicated by the symbol R indicates the shooting range of a single shot.
- shooting is performed with overlapping shooting ranges R from a plurality of viewpoints.
- Imaging is performed with the overlap OL set to 80% or more and the side lap SL set to 60% or more.
- Overlap OL refers to overlap in the course direction when shooting along a straight course. Therefore, “overlap OL of 80% or more” means that the overlap rate of images in the course direction is 80% or more.
- a side lap SL refers to an overlap between courses. Therefore, "60% or more of the side lap SL" means that the overlapping rate of images between courses is 60% or more.
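As a rough numerical illustration of the two rates above (only the 80%/60% figures come from the text; the shooting footprint below is a hypothetical value), the overlap and side lap determine how far the camera may advance between shots and between courses:

```python
# Illustrative only: the 80% overlap and 60% side lap come from the text,
# but the 2.0 m x 1.5 m shooting footprint is a hypothetical value.

def shooting_steps(footprint_w, footprint_h, overlap=0.8, sidelap=0.6):
    """Camera advance per shot along a course, and spacing between courses."""
    step_along_course = footprint_w * (1.0 - overlap)      # overlap OL
    step_between_courses = footprint_h * (1.0 - sidelap)   # side lap SL
    return step_along_course, step_between_courses

along, between = shooting_steps(2.0, 1.5)
print(round(along, 3), round(between, 3))  # 0.4 0.6 (metres)
```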
- the image processing device 100 processes the image captured by the camera 10, detects cracks that have occurred on the surface of the structure, and measures the width of the cracks.
- the image processing apparatus 100 is configured by a computer including an input device, a display device, and the like.
- FIG. 3 is a block diagram showing an example of the hardware configuration of the image processing device.
- The image processing apparatus 100 includes a CPU (Central Processing Unit) 101, a RAM (Random Access Memory) 102, a ROM (Read Only Memory) 103, an auxiliary storage device 104, a communication interface (IF) 105, an input device 106, a display device 107, and the like.
- Programs executed by the CPU 101 and various data are stored in the ROM 103 and/or the auxiliary storage device 104 .
- the auxiliary storage device 104 is composed of, for example, an HDD (Hard Disk Drive), an SSD (Solid State Drive), or the like.
- the input device 106 is composed of, for example, a keyboard, mouse, touch panel, and the like.
- the display device 107 is configured by, for example, a liquid crystal display (Liquid Crystal Display), an organic EL display (Organic Light Emitting Diode Display), or the like.
- An image captured by the camera 10 is captured by the image processing apparatus 100 via the communication interface 105 and stored in the auxiliary storage device 104 .
- FIG. 4 is a block diagram of the main functions of the image processing device.
- The image processing apparatus 100 mainly has the functions of an image acquisition unit 111, a point cloud data generation unit 112, a 3D patch model generation unit 113, a 3D shape model generation unit 114, a crack detection unit 115, a crack measurement unit 116, a correction value calculation unit 117, a measurement result correction unit 118, and the like. These functions are realized by the processor executing a predetermined program (image processing program).
- the image acquisition unit 111 performs processing for acquiring an image group IG to be processed.
- the group of images IG to be processed is composed of a plurality of images obtained by photographing the bridge pier from a plurality of viewpoints with the camera 10 with overlapping photographing ranges R.
- the image acquisition unit 111 acquires an image group IG to be processed from the auxiliary storage device 104 .
- the point cloud data generation unit 112 analyzes the image group IG acquired by the image acquisition unit 111 and performs processing for generating three-dimensional point cloud data of feature points. In this embodiment, this processing is performed using SfM (Structure from Motion) and MVS (Multi-View Stereo) techniques.
- FIG. 5 is a diagram showing an overview of SfM.
- SfM is a technology that performs "estimation of the shooting position and orientation" and "three-dimensional restoration of feature points" from multiple images captured by a camera.
- the SfM technology itself is a known technology.
- the outline of the processing is as follows.
- FIG. 6 is a flowchart showing an outline of the SfM processing procedure.
- a plurality of images (image group) to be processed are acquired (step S1).
- feature points are detected from each acquired image (step S2).
- matching feature points are detected as corresponding points by comparing the feature points of the two image pairs (step S3). That is, feature point matching is performed.
- The camera parameters (e.g., the fundamental matrix, the essential matrix, intrinsic parameters) are estimated from the corresponding points (step S4).
- the shooting position and orientation are estimated based on the estimated camera parameters (step S5).
- the three-dimensional positions of the feature points of the object are obtained (step S6). That is, three-dimensional restoration of feature points is performed. After this, bundle adjustment is performed as necessary.
- In the bundle adjustment, the coordinates of the three-dimensional point cloud, the camera intrinsic parameters (focal length, principal point), and the camera extrinsic parameters (position, rotation) are adjusted so as to minimize the reprojection error, onto the camera, of the point cloud, which is the set of the feature points in three-dimensional coordinates.
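The quantity minimized by the bundle adjustment described above can be sketched as follows. This is a minimal numpy illustration, not the patent's implementation; the pinhole model, function names, and numeric values are assumptions:

```python
import numpy as np

# Minimal sketch of the quantity minimized by bundle adjustment: the
# reprojection error of the 3D point cloud onto the camera.

def project(points3d, R, t, f, cx, cy):
    """Pinhole projection of Nx3 points into pixel coordinates.
    f is the focal length, (cx, cy) the principal point (intrinsics);
    R, t are the camera rotation and translation (extrinsics)."""
    pc = points3d @ R.T + t
    return np.stack([f * pc[:, 0] / pc[:, 2] + cx,
                     f * pc[:, 1] / pc[:, 2] + cy], axis=1)

def reprojection_error(points3d, observed2d, R, t, f, cx, cy):
    """RMS distance between observed feature points and their reprojections.
    Bundle adjustment jointly tweaks points3d, R, t, f, cx, cy to minimize it."""
    diff = project(points3d, R, t, f, cx, cy) - observed2d
    return float(np.sqrt((diff ** 2).mean()))

pts = np.array([[0.0, 0.0, 5.0], [1.0, -1.0, 6.0]])  # hypothetical points
R, t = np.eye(3), np.zeros(3)
obs = project(pts, R, t, 1000.0, 640.0, 360.0)
print(reprojection_error(pts, obs, R, t, 1000.0, 640.0, 360.0))  # 0.0 for a perfect fit
```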
- The 3D points restored by SfM are limited to specific feature points and are therefore sparse.
- a general three-dimensional model is mostly composed of textures with low feature amounts (for example, walls).
- MVS attempts to three-dimensionally restore these low-feature regions, which account for the majority of the scene.
- MVS uses the "shooting position and orientation" estimated by SfM to generate a dense point cloud.
- the MVS technology itself is a known technology. Therefore, detailed description thereof is omitted.
- the restored shape and imaging position obtained by SfM are a point group represented by dimensionless coordinate values. Therefore, the shape cannot be quantitatively grasped from the obtained restored shape as it is. Therefore, it is necessary to give physical dimensions (actual dimensions).
- a known technique is adopted for this processing. For example, techniques such as extracting reference points (eg, ground control points) from an image and assigning physical dimensions can be employed.
- GCP Ground Control Point
- a Ground Control Point (GCP) is a landmark containing geospatial information (latitude, longitude, altitude) that is visible in a captured image. Therefore, in this case, it is necessary to set a reference point at the stage of photographing.
- the physical dimensions can be assigned using the distance measurement information.
- LIDAR Light Detection and Ranging or Laser Imaging Detection and Ranging
- SfM Structure from Motion
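The scaling idea above (assigning physical dimensions to the dimensionless reconstruction) can be sketched as follows; the point cloud and the 2.0 m reference distance are hypothetical values:

```python
import numpy as np

# Sketch: if the real-world distance between two reference points (e.g.
# ground control points) is known, the dimensionless SfM reconstruction
# is scaled by the ratio of the real to the reconstructed distance.

def scale_to_metric(points, ref_a, ref_b, real_distance):
    """Scale an Nx3 point cloud so points ref_a, ref_b are real_distance apart."""
    s = real_distance / np.linalg.norm(points[ref_b] - points[ref_a])
    return points * s

cloud = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.25, 0.4, 0.1]])
metric = scale_to_metric(cloud, 0, 1, real_distance=2.0)
print(np.linalg.norm(metric[1] - metric[0]))  # 2.0 after scaling
```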
- the 3D patch model generation unit 113 performs processing for generating a 3D patch model of the subject based on the 3D point cloud data of the subject generated by the point cloud data generation unit 112 .
- a patch (mesh) is generated from the generated 3D point group to generate a 3D patch model.
- This processing is performed using a known technique such as, for example, three-dimensional Delaunay triangulation. Therefore, detailed description thereof will be omitted.
- a three-dimensional Delaunay triangulation is used to generate a TIN (Triangular Irregular Network) model.
- a TIN model is an example of a three-dimensional patch model.
- FIG. 7 is a diagram showing an example of a TIN model.
- the surface is represented by a set of triangles. That is, a patch P is generated by a triangular mesh.
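Since each patch P of the TIN model is a triangle, its unit normal (used later as the normal vector b) can be obtained from a cross product of two edges. A numpy-only sketch with hypothetical vertex coordinates:

```python
import numpy as np

# Sketch: the normal vector b of a triangular patch from two of its edges.

def patch_normal(v0, v1, v2):
    """Unit normal vector b of a triangular patch with vertices v0, v1, v2."""
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n)

v0 = np.array([0.0, 0.0, 0.0])
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
print(patch_normal(v0, v1, v2))  # [0. 0. 1.] for a patch lying in the x-y plane
```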
- the 3D shape model generation unit 114 performs processing for generating a textured 3D shape model by performing texture mapping on the 3D patch model generated by the 3D patch model generation unit 113 . This processing is performed by interpolating the space within each patch of the three-dimensional patch model with the captured image.
- the point cloud data generation unit 112 performs SfM and MVS processing. By this SfM and MVS processing, an image obtained by photographing an area corresponding to each patch and a corresponding position within the image can be obtained. Therefore, if the vertex of the generated surface can be observed, it is possible to associate the texture to be applied to that surface.
- the three-dimensional shape model generation unit 114 selects an image corresponding to each patch, and extracts an image of an area corresponding to the patch from the selected image as a texture. Specifically, the vertices of the patch are projected onto the selected image, and the image of the area surrounded by the projected vertices is extracted as the texture. A three-dimensional shape model is generated by applying the extracted texture to the patch. That is, the extracted texture is used to interpolate the space within the patch to generate a three-dimensional shape model.
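The vertex-projection step can be sketched with a simple pinhole model. This is an illustrative assumption, not the patent's implementation; the camera parameters and vertices below are hypothetical:

```python
import numpy as np

# Sketch: the three vertices of a patch are projected onto the selected
# image; the triangle they enclose is the region extracted as the texture.

def project_patch(vertices, R, t, f, cx, cy):
    """Project 3x3 patch vertices (rows are xyz) into pixel coordinates."""
    pc = vertices @ R.T + t
    return np.stack([f * pc[:, 0] / pc[:, 2] + cx,
                     f * pc[:, 1] / pc[:, 2] + cy], axis=1)

verts = np.array([[0.0, 0.0, 4.0], [1.0, 0.0, 4.0], [0.0, 1.0, 4.0]])
poly = project_patch(verts, np.eye(3), np.zeros(3), f=800.0, cx=640.0, cy=360.0)
# poly is the image-space triangle; its interior pixels become the texture.
print(poly[0])  # the vertex on the optical axis lands on the principal point
```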
- FIG. 8 is a diagram showing an example of a three-dimensional shape model.
- color information is added to each patch by adding a texture to each patch. If the subject has a crack, a crack C is displayed at the corresponding position.
- the generated three-dimensional shape model is stored in the auxiliary storage device 104 or the like as necessary. Moreover, it is displayed on the display device 107 as necessary.
- the crack detection unit 115 performs processing to detect cracks from each acquired image.
- a known technique can be adopted as a technique for detecting cracks from an image.
- a technique of detecting cracks from an image using an image recognition model generated by machine learning, deep learning, or the like can be employed.
- the type of machine learning algorithm is not particularly limited. For example, algorithms using neural networks such as RNN (Recurrent Neural Network), CNN (Convolutional Neural Network) and MLP (Multilayer Perceptron) can be used.
- RNN Recurrent Neural Network
- CNN Convolutional Neural Network
- MLP Multilayer Perceptron
- the crack measurement unit 116 performs processing for measuring the width of the crack detected by the crack detection unit 115.
- a known image measurement technique is employed for the measurement. If the texture applied to each patch contains a crack, the width of the crack is measured by this process.
- the correction value calculation unit 117 performs processing for calculating correction values. This correction value is for correcting the measured value of the width of the crack in the patch containing the crack.
- FIG. 9 is a conceptual diagram of crack width correction.
- When the image is captured from a position squarely facing the surface of the subject, the width of the crack measured from the image matches the actual width CW of the crack.
- When the image is captured from a tilted position, however, the width CWx of the crack measured from the image is smaller than the actual crack width CW (CWx < CW).
- the correction value calculator 117 calculates cos ⁇ as the correction value ⁇ .
- the correction value ⁇ is an example of correction information.
- the point cloud data generation unit 112 generates point cloud data by SfM.
- With SfM, the position and orientation of the camera when the image was taken can be estimated. Therefore, the correction value can be calculated using this information.
- FIG. 10 is a conceptual diagram of calculation of correction values.
- the figure shows an example of calculating the correction value ⁇ for the hatched patch Pi.
- Image Ii is an image corresponding to patch Pi.
- the position of the camera (shooting position) when this image Ii was shot is assumed to be a position Pc.
- the camera position Pc can be estimated by SfM.
- the tilt angle ⁇ when the image Ii is captured is obtained by the angle formed by the camera vector a and the normal vector b.
- Therefore, the correction value α can be calculated as α = cos θ = a · b / (|a||b|).
- the correction value calculator 117 calculates the correction value ⁇ for each patch. Note that the correction value ⁇ may be calculated only for patches that include cracks in the applied texture.
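A minimal numpy sketch of this per-patch calculation, assuming the camera vector a runs from the estimated camera position Pc to the patch centroid (all coordinates are hypothetical; abs() makes the result independent of which way the normal points):

```python
import numpy as np

# Sketch: correction value alpha = cos(theta), where theta is the angle
# between the camera vector a and the patch normal vector b.

def correction_value(camera_pos, patch_vertices, normal_b):
    centroid = patch_vertices.mean(axis=0)
    a = centroid - camera_pos                                  # camera vector a
    return abs(a @ normal_b) / (np.linalg.norm(a) * np.linalg.norm(normal_b))

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])                                  # normal vector b
cam = np.array([1 / 3, 1 / 3, 2.0])                            # directly above centroid
print(correction_value(cam, verts, b))  # 1.0: squarely facing, no correction needed
```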
- The measurement result correction unit 118 calculates the actual crack width by dividing the measured crack width by the correction value α. The calculated actual crack width information is stored in the auxiliary storage device 104 or the like as necessary. It is also displayed on the display device 107 as necessary. When stored, it is stored in association with the three-dimensional shape model, or alternatively in association with the captured image.
- FIG. 11 is a flow chart showing the procedure of image processing by the image processing apparatus.
- a process of acquiring a group of images of the subject is performed (step S11).
- a group of images of bridge piers is acquired.
- photographing is performed by overlapping photographing ranges from a plurality of viewpoints.
- processing is performed to detect damage from each acquired image (step S12).
- cracks are detected. Individual cracks detected from each image are assigned identification information and recorded.
- Next, a process of measuring the size of the damage detected in each image is performed (step S13).
- the crack width is measured.
- the measured width information is recorded in association with the identification information of the crack to be measured.
- processing is performed to generate three-dimensional point cloud data of the subject from the acquired image group (step S14).
- this process is performed by SfM and MVS.
- With SfM and MVS, in addition to the three-dimensional point cloud data of the subject, information on the position and orientation of the camera when each image was captured can be obtained.
- a process of generating a 3D patch model of the subject is performed based on the generated 3D point cloud data of the subject (step S15).
- a TIN model is generated as a 3D patch model using a 3D Delaunay triangulation (see FIG. 7).
- Next, texture mapping is performed on the generated 3D patch model to generate a 3D shape model of the subject (step S16).
- the image of the area corresponding to the patch is extracted as the texture from the image corresponding to each patch of the three-dimensional patch model.
- the extracted texture is then applied to the patch to generate a 3D shape model (see FIG. 8).
- Next, a process of calculating the correction value α is performed (step S17). This processing is performed at least on the patches containing damage.
- the correction value ⁇ is calculated from the tilt angle ⁇ when the image corresponding to the patch is captured, as described above. This angle ⁇ is obtained by the angle formed by the camera vector a and the normal vector b.
- Next, a process of correcting the measured value of the damage with the correction value is performed on the patches containing damage (step S18). That is, the actual damage size is calculated by dividing the damage measurement value by the correction value α.
- a correction is made for all patches that contain damage. This makes it possible to obtain an accurate crack width.
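The correction itself amounts to a single division of the measured value by α = cos θ. A small illustration with hypothetical numbers (a 0.20 mm width measured at a 30-degree tilt):

```python
import math

# Correction of the measured crack width: actual width CW = CWx / alpha,
# with alpha = cos(theta). The 0.20 mm and 30 degrees are hypothetical.

def corrected_width(measured_width_mm, tilt_deg):
    alpha = math.cos(math.radians(tilt_deg))   # correction value alpha
    return measured_width_mm / alpha           # CW = CWx / alpha

cw = corrected_width(0.20, 30.0)
print(round(cw, 3))  # 0.231: the tilt made the crack appear ~13% narrower
```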
- With the image processing apparatus 100 of the present embodiment, it is possible to accurately measure the size of the damage by eliminating the influence of tilting.
- In addition, since it is not necessary to precisely measure the tilt angle or the like at the time of photographing, work such as photographing at the site can be performed smoothly.
- Although the present embodiment generates a 3D shape model, it is not always necessary to generate one if the purpose is only to measure the size of damage. At a minimum, it is sufficient if the image corresponding to each patch can be identified.
- the 3D shape model is generated by interpolating each patch of the 3D patch model with textures.
- the texture applied to each patch is generated by extracting from the image corresponding to each patch.
- More than one such image may exist. That is, since shooting is performed with overlapping shooting ranges from a plurality of viewpoints, there may be a plurality of images corresponding to one patch. When there are multiple corresponding images, the question is which image to select.
- the image processing apparatus of this embodiment selects images from the following viewpoints. That is, the image most suitable for damage measurement is selected as the image corresponding to the patch.
- FIG. 12 is a block diagram of the main functions of the image processing apparatus of this embodiment.
- The image processing apparatus of this embodiment differs from the image processing apparatus 100 of the first embodiment in that its three-dimensional shape model generation unit 114 has the function of an image selection unit 114A.
- the functions of the image selection unit 114A will be described below.
- the image selection unit 114A performs processing for selecting one of the multiple images. The selection is made from the viewpoint of measurement, and the image most suitable for damage measurement is selected as the image corresponding to the patch. In this embodiment, an image having the smallest angle ⁇ between the camera vector a and the normal vector b is selected. That is, an image with a smaller tilt angle is selected. As a result, the influence of tilting can be reduced, the damage can be measured with higher accuracy, and the measured value can be corrected with higher accuracy.
- FIG. 13 is a conceptual diagram of image selection.
- the figure shows an example in which there are two images corresponding to the hatched patch Pi.
- One image is the first image I1i, and the other image is the second image I2i.
- Let the position of the camera when the first image I1i was captured be the first position Pc1, and the camera vector of the first image I1i be the first camera vector a1.
- Likewise, let the position of the camera when the second image I2i was captured be the second position Pc2, and the camera vector of the second image I2i be the second camera vector a2.
- The first camera vector a1 is defined as the vector connecting the first position Pc1 and the centroid (or center) of the patch Pi.
- The second camera vector a2 is defined as the vector connecting the second position Pc2 and the centroid (or center) of the patch Pi.
- The image with the smaller angle between its camera vector and the normal vector is selected. For example, let θ1 be the angle formed by the first camera vector a1 and the normal vector b, and θ2 the angle formed by the second camera vector a2 and the normal vector b. If θ1 < θ2, the first image I1i is selected as the image corresponding to the patch Pi; otherwise, the second image I2i is selected.
- The image selection unit 114A obtains the cosine value (cosθ) of the angle formed by the camera vector and the normal vector and selects the image with the largest value as the image corresponding to the patch.
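The cosine-based comparison described above can be sketched in a few lines of Python. This is a minimal illustration only; the function and variable names are assumptions, not part of the patent.

```python
import math

def cos_angle(a, b):
    # cos(theta) = a . b / (|a| |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def select_by_angle(candidates, normal):
    # candidates: list of (image_id, camera_vector) pairs for one patch.
    # The image whose camera vector makes the smallest angle with the patch
    # normal has the largest cos(theta), so the maximum is selected.
    return max(candidates, key=lambda c: cos_angle(c[1], normal))[0]
```

Comparing cosθ instead of θ avoids an explicit arccos call, since cos is monotonically decreasing on [0, π].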
- FIG. 14 is a flowchart showing the procedure of image selection processing in the image selection unit.
- a process of acquiring an image corresponding to the patch is performed (step S21).
- a process of determining whether or not there are a plurality of acquired images is performed (step S22). If multiple images corresponding to the patch do not exist, the selection process ends. On the other hand, if there are multiple images corresponding to the patch, a process of selecting an image is performed.
- A process of calculating the angle between the camera vector and the normal vector of each image is performed (step S23). This processing is performed by obtaining the cosine value (cosθ) of the angle between the camera vector and the normal vector.
- a process of selecting an image having the smallest angle between the camera vector and the normal vector is performed (step S24). The selected image is taken as the image corresponding to the patch.
- the image selected in this process is the image with the smallest tilt angle.
- the influence of tilting can be reduced, the size of damage can be measured with higher accuracy, and the measured value can be corrected with higher accuracy.
- the selected image is used for texture extraction. Also, if the image extracted as the texture contains a crack, it is subject to measurement and further subject to correction of the measured value.
- Shooting resolution is synonymous with resolution. Therefore, the image with the higher resolution for the same subject is selected as the image corresponding to the patch. For example, in the case of cracks, the image that captures the crack at a higher resolution is selected as the image corresponding to the patch. When the same subject is shot with the same camera, an image shot at a position closer to the subject has a higher resolution; that is, an image with a shorter shooting distance has a higher resolution. Therefore, in this case, the image with the shortest shooting distance, i.e., the image whose camera position is closest to the patch, is selected as the image corresponding to the patch.
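Under the same-camera assumption, this resolution criterion reduces to choosing the image whose camera position is nearest the patch. A minimal sketch (identifier names are illustrative, not from the patent):

```python
import math

def select_closest(candidates, patch_center):
    # candidates: list of (image_id, camera_position) pairs.
    # With the same camera, a shorter shooting distance means a higher
    # shooting resolution, so the nearest camera position is selected.
    return min(candidates, key=lambda c: math.dist(c[1], patch_center))[0]
```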
- camera lenses have various aberrations that increase as they move away from the optical axis (center). Therefore, the farther the captured image is from the center, the lower the quality of the image. Specifically, distortion and the like increase.
- the image whose position of the region corresponding to the patch is closest to the center of the image is selected as the image corresponding to the patch.
- the distance between the center of gravity or the center of the area corresponding to the patch and the center of the image is calculated for each image, and the image with the smallest calculated distance is selected as the image corresponding to the patch.
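The center-distance criterion above can likewise be sketched as follows (a hedged illustration; the names and the pixel-coordinate convention are assumptions):

```python
import math

def select_by_image_center(candidates, image_size):
    # candidates: list of (image_id, region_centroid) pairs, where
    # region_centroid is the centroid, in pixels, of the area that the
    # patch occupies in that image. Lens aberrations grow away from the
    # optical axis, so the region closest to the image centre is preferred.
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    def dist(c):
        return math.hypot(c[1][0] - cx, c[1][1] - cy)
    return min(candidates, key=dist)[0]
```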
- A method of setting priorities among the selection methods can be adopted. Here, the method based on the angle between the camera vector and the normal vector is the first selection method, the method based on the shooting resolution is the second selection method, and the method based on the position of the area corresponding to the patch is the third selection method.
- First, images are selected by the first selection method, that is, the method based on the angle between the camera vector and the normal vector. If an image cannot be selected by the first selection method, selection is performed by the second selection method.
- a case where an image cannot be selected by the first selection method is, for example, a case where a plurality of images having the same angle exist. In this case, an image is selected by the second selection method from images having the same angle.
- the results of each selection method are ranked, and each ranking is given a predetermined score. Then, the total score is calculated, and the image with the highest total score is selected as the image corresponding to the patch. In this case, a difference may be provided in the score given to the result of each selection method. That is, weighting may be performed.
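The rank-and-score combination described above can be sketched as follows. This is a minimal illustration under assumed conventions (lower ranking key means better; each method's rank position is converted to points and weighted); the function names are not from the patent.

```python
def select_by_score(candidates, rankers, weights):
    # candidates: list of image ids.
    # rankers: one key function per selection method (lower key = better).
    # weights: score weight given to each selection method's result.
    totals = {c: 0.0 for c in candidates}
    n = len(candidates)
    for ranker, weight in zip(rankers, weights):
        for rank, c in enumerate(sorted(candidates, key=ranker)):
            totals[c] += weight * (n - rank)  # best rank earns most points
    return max(candidates, key=lambda c: totals[c])
```

With unequal weights, one criterion (e.g. the angle-based method) can dominate unless the others agree strongly, which matches the weighting idea in the text.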
- Images to be processed need only be captured from a plurality of viewpoints with overlapping shooting ranges, and the shooting method is not particularly limited. It is also possible to mount a camera on an unmanned aerial vehicle such as a drone to take pictures.
- the images to be processed include those extracted from moving images. That is, it is possible to shoot a moving image of a subject and use an image of each frame of the shot moving image as an image to be processed.
- The present invention can also be applied to measuring other types of damage. The invention works effectively when the measurement is affected by tilting; therefore, it can also be applied to measuring the size (area, etc.) of damage such as delamination, exposure of reinforcing bars, water leakage (including rust stains), free lime, and corrosion.
- The present invention can also be applied to measuring the length of cracks when the measured length is affected by tilting.
- the length of cracks can be measured by the following method. Patches with continuous cracks are extracted, and the lengths of cracks obtained from each extracted patch are summed up to measure the length of continuous cracks. That is, the length of cracks detected across a plurality of patches is measured. In addition, when the length of the crack is corrected, the length after the correction is added up to measure the length of the continuous crack.
- Fig. 15 is a conceptual diagram of a crack length measurement method.
- patches P1 to P13 indicated by dashed lines are patches containing continuous cracks C.
- the total length of the cracks measured in each of the patches P1 to P13 is taken as the length of the continuous crack C.
- CL1 to CL13 are the lengths of the cracks measured in patches P1 to P13, respectively.
- the area of damage detected across multiple regions can be measured by summing the areas of damage measured by each patch.
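The summation of per-patch measurements described above is a simple accumulation; a minimal sketch (names are illustrative, and the optional per-patch cosθ corrections are an assumption based on the correction described elsewhere in this document):

```python
def continuous_crack_length(segment_lengths, corrections=None):
    # segment_lengths: crack length (or damage area) measured in each patch
    # that the continuous crack or damage spans.
    # corrections: optional per-patch cos(theta) values; each measured
    # length is divided by cos(theta) before summation.
    if corrections is None:
        return sum(segment_lengths)
    return sum(l / c for l, c in zip(segment_lengths, corrections))
```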
- The various processors include a CPU and/or GPU (Graphics Processing Unit), which are general-purpose processors that execute programs and function as various processing units; programmable logic devices (PLDs) such as FPGAs (Field Programmable Gate Arrays), whose circuit configuration can be changed after manufacture; and dedicated electric circuits such as ASICs (Application Specific Integrated Circuits), which are processors having a circuit configuration specially designed to execute specific processing.
- a program is synonymous with software.
- a single processing unit may be composed of one of these various processors, or may be composed of two or more processors of the same type or different types.
- one processing unit may be composed of a plurality of FPGAs or a combination of a CPU and an FPGA.
- a plurality of processing units may be configured by one processor.
- First, as typified by computers used for clients and servers, there is a form in which a single processor is configured by a combination of one or more CPUs and software, and this processor functions as a plurality of processing units.
- Second, as typified by a System on Chip (SoC), there is a form of using a processor that implements the functions of an entire system, including a plurality of processing units, on a single chip.
- the method of selecting the image corresponding to the patch described in the second embodiment, including the modification, is effective even when the damage size is not corrected. That is, since an image suitable for damage measurement can be selected, damage can be measured with higher accuracy than usual. "Normal" means a case where an image corresponding to a patch is selected without selecting an image from the viewpoint described in the second embodiment. Also, better images can be applied to each patch when generating a three-dimensional shape model. Thereby, a higher-quality three-dimensional shape model can be generated.
- FIG. 16 is a flowchart showing the procedure of processing when measuring the size of damage without correction.
- a process of acquiring a group of images of the subject is performed (step S31).
- a process of detecting damage from each acquired image is performed (step S32). For example, processing for detecting cracks is performed.
- a process of measuring the size of damage detected in each image is performed (step S33). For example, the crack width is measured.
- A process of generating three-dimensional point cloud data of the subject from the acquired image group is performed (step S34). For example, SfM (Structure from Motion) and MVS (Multi-View Stereo) are used to generate the three-dimensional point cloud data of the subject.
- a process of generating a three-dimensional patch model of the subject is performed based on the generated three-dimensional point cloud data of the subject (step S35).
- For example, 3D Delaunay triangulation is used to generate the TIN (Triangulated Irregular Network) model.
- texture mapping is performed on the generated three-dimensional patch model to generate a three-dimensional shape model of the subject (step S36).
- images are selected by the method described in the second embodiment (including the method of the modified example).
- Appendix 1 An image processing apparatus comprising a processor, wherein the processor performs: a process of acquiring a plurality of images of a subject captured by a camera from a plurality of viewpoints with overlapping shooting ranges; a process of detecting damage on the surface of the subject from the plurality of acquired images; a process of analyzing the plurality of acquired images to generate three-dimensional point cloud data of feature points; a process of generating a three-dimensional patch model of the subject based on the generated point cloud data; an image selection process of selecting an image corresponding to each patch of the three-dimensional patch model from among the plurality of acquired images; and a process of measuring, in the selected image, the size of the damage in the area corresponding to the patch.
- Appendix 2 In the image selection process, the processor selects the image having the smallest angle θ between the camera vector a and the normal vector b.
- the image processing device according to appendix 1.
- Appendix 3 The processor selects an image with the highest shooting resolution in the image selection process.
- the image processing device according to appendix 1.
- Appendix 4 In the process of selecting an image corresponding to the patch, the processor selects an image in which the position of the area corresponding to the patch is closest to the center of the image.
- the image processing device according to appendix 1.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
Description
Here, a case of measuring the width of a crack occurring on the surface of a bridge pier will be described as an example. A bridge pier is an example of a structure made of concrete, particularly reinforced concrete. A crack is an example of damage, and the width of a crack is an example of the size of damage.
FIG. 1 is a diagram showing the schematic configuration of a system for measuring the width of a crack.
Accordingly, the correction value calculation unit 117 calculates cosθ as the correction value σ. The correction value σ is an example of correction information.
FIG. 11 is a flowchart showing the procedure of image processing by the image processing apparatus.
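The correction step can be sketched in a few lines of Python. This is a minimal illustration; the function names are assumptions, not part of the patent, while the formulas (σ = cosθ = a·b/(|a||b|), corrected size = measured size / cosθ) follow the claims.

```python
import math

def correction_value(a, b):
    # sigma = cos(theta) = a . b / (|a| |b|), where a is the camera vector
    # and b is the normal vector of the patch.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def corrected_size(measured, sigma):
    # The measured size is divided by cos(theta) to undo the apparent
    # shrinkage caused by the tilted viewing direction.
    return measured / sigma
```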
As described above, the three-dimensional shape model is generated by interpolating each patch of the three-dimensional patch model with a texture. The texture applied to each patch is generated by extracting it from the image corresponding to that patch. More than one such image may exist: since shooting is performed from a plurality of viewpoints with overlapping shooting ranges, a plurality of images may correspond to one patch. When a plurality of corresponding images exists, which image to select becomes an issue. The image processing apparatus of this embodiment selects images from the following viewpoint: the image most suitable for damage measurement is selected as the image corresponding to the patch.
In the above embodiment, selecting the image corresponding to a patch based on the angle between the camera vector and the normal vector has been described as an example, but the method of selecting the image corresponding to a patch is not limited to this. Other examples of methods for selecting the image corresponding to a patch are described below.
When a plurality of images corresponding to a patch exists, the image with the highest shooting resolution is selected as the image corresponding to the patch.
When a plurality of images corresponding to a patch exists, the image in which the position of the area corresponding to the patch is closest to the center of the image is selected as the image corresponding to the patch.
It is also possible to select images by combining a plurality of the above methods.
[About the three-dimensional patch model]
In the above embodiment, a three-dimensional patch model in which each patch is a triangle (triangular mesh) has been described as an example, but the shape of each patch constituting the three-dimensional patch model is not limited to this. For example, a model in which each patch is a quadrangle (quadrilateral mesh) can also be generated.
Images to be processed need only be captured from a plurality of viewpoints with overlapping shooting ranges, and the shooting method is not particularly limited. A camera can also be mounted on an unmanned aerial vehicle such as a drone for shooting.
In the above embodiment, measuring the width of a crack appearing on the surface of a bridge pier has been described as an example, but the application of the present invention is not limited to this. The invention can be applied to measuring, from images, the size of damage appearing on the surface of a structure, particularly a concrete structure. Structures include bridges as well as tunnels, dams, and buildings; roads are also included.
As described above, the present invention can also be applied to measuring the length of cracks.
The functions of the image processing apparatus are realized by various processors. The various processors include a CPU and/or GPU (Graphic Processing Unit), which are general-purpose processors that execute programs and function as various processing units; programmable logic devices (PLDs) such as FPGAs (Field Programmable Gate Arrays), whose circuit configuration can be changed after manufacture; and dedicated electric circuits such as ASICs (Application Specific Integrated Circuits), which are processors having a circuit configuration specially designed to execute specific processing. A program is synonymous with software.
The method of selecting the image corresponding to a patch described in the second embodiment, including its modification, is also effective when the size of damage is not corrected. That is, since an image suitable for damage measurement can be selected, damage can be measured with higher accuracy than usual. "Usual" refers to selecting the image corresponding to a patch without the viewpoint described in the second embodiment. In addition, better images can be applied to each patch when generating the three-dimensional shape model, whereby a higher-quality three-dimensional shape model can be generated.
Regarding the above embodiments, the following appendices are further disclosed.
An image processing apparatus comprising a processor,
wherein the processor performs:
a process of acquiring a plurality of images of a subject captured by a camera from a plurality of viewpoints with overlapping shooting ranges;
a process of detecting damage on the surface of the subject from the plurality of acquired images;
a process of analyzing the plurality of acquired images to generate three-dimensional point cloud data of feature points;
a process of generating a three-dimensional patch model of the subject based on the generated point cloud data;
an image selection process of selecting an image corresponding to each patch of the three-dimensional patch model from among the plurality of acquired images; and
a process of measuring, in the selected image, the size of the damage in the area corresponding to the patch.
In the image selection process, the processor selects the image having the smallest angle θ between the camera vector a and the normal vector b;
the image processing apparatus according to appendix 1.
In the image selection process, the processor selects the image with the highest shooting resolution;
the image processing apparatus according to appendix 1.
In the process of selecting the image corresponding to the patch, the processor selects the image in which the position of the area corresponding to the patch is closest to the center of the image;
the image processing apparatus according to appendix 1.
10 camera
100 image processing apparatus
101 CPU
102 RAM
103 ROM
104 auxiliary storage device
105 communication interface
106 input device
107 display device
111 image acquisition unit
112 point cloud data generation unit
113 three-dimensional patch model generation unit
114 three-dimensional shape model generation unit
114A image selection unit
115 crack detection unit
116 crack measurement unit
117 correction value calculation unit
118 measurement result correction unit
a camera vector
a1 first camera vector
a2 second camera vector
b normal vector
θ angle between the camera vector and the normal vector
C crack
CL1 to CL13 crack lengths
I1i first image
I2i second image
IG image group
Ii image
OL overlap
SL sidelap
Ob subject (bridge pier)
P patch
P1 to P13 patches containing a crack
Pc1 first position
Pc2 second position
R shooting range
S surface containing the crack
S1 to S6 SfM processing steps
S11 to S18 image processing steps
S21 to S24 image selection processing steps
S31 to S36 processing steps when measuring the size of damage without correction
Claims (15)
1. An image processing apparatus comprising a processor, wherein the processor: acquires a plurality of images of a subject captured by a camera from a plurality of viewpoints with overlapping shooting ranges; detects damage on the surface of the subject from the plurality of acquired images; analyzes the plurality of acquired images to generate three-dimensional point cloud data of feature points; generates a three-dimensional patch model of the subject based on the generated point cloud data; selects an image corresponding to each patch of the three-dimensional patch model from among the plurality of acquired images; measures, in the selected image, the size of the damage in the area corresponding to the patch; generates correction information necessary for correcting the measurement result of the size of the damage, based on a camera vector a connecting the patch and the position of the camera when the selected image was captured and on a normal vector b of the patch; and corrects the measurement result of the size of the damage based on the generated correction information.
2. The image processing apparatus according to claim 1, wherein the processor calculates the cosine value cosθ of the angle θ formed by the camera vector a and the normal vector b to generate the correction information.
3. The image processing apparatus according to claim 2, wherein the processor calculates the cosine value cosθ by cosθ = a·b/(|a||b|).
4. The image processing apparatus according to claim 2 or 3, wherein the processor corrects the measurement result of the size of the damage by dividing the measurement result by the cosine value cosθ.
5. The image processing apparatus according to any one of claims 1 to 4, wherein the processor selects the image with the smallest angle θ between the camera vector a and the normal vector b.
6. The image processing apparatus according to any one of claims 1 to 4, wherein the processor selects the image with the highest shooting resolution.
7. The image processing apparatus according to any one of claims 1 to 4, wherein the processor selects the image in which the position of the area corresponding to the patch is closest to the center of the image.
8. The image processing apparatus according to any one of claims 1 to 7, wherein the processor generates the patches as a triangular mesh or a quadrilateral mesh to generate the three-dimensional patch model.
9. The image processing apparatus according to any one of claims 1 to 8, wherein the processor calculates the physical dimensions of the patch in the three-dimensional patch model.
10. The image processing apparatus according to claim 9, wherein the processor calculates the physical dimensions of the patch based on distance-measurement information associated with the image.
11. The image processing apparatus according to claim 9, wherein the processor calculates the physical dimensions of the patch based on information on reference points included in the image.
12. The image processing apparatus according to any one of claims 1 to 11, wherein the damage is a crack, and the width and/or length of the crack is measured.
13. The image processing apparatus according to claim 12, wherein the processor extracts the patches in which the crack is continuous and sums the corrected lengths of the crack obtained for each of the extracted patches to further measure the length of the continuous crack.
14. An image processing method comprising: acquiring a plurality of images of a subject captured by a camera from a plurality of viewpoints with overlapping shooting ranges; detecting damage on the surface of the subject from the plurality of acquired images; analyzing the plurality of acquired images to generate three-dimensional point cloud data of feature points; generating a three-dimensional patch model of the subject based on the generated point cloud data; selecting an image corresponding to each patch of the three-dimensional patch model from among the plurality of acquired images; measuring, in the selected image, the size of the damage in the area corresponding to the patch; generating correction information necessary for correcting the measurement result of the size of the damage, based on a camera vector a connecting the patch and the position of the camera when the selected image was captured and on a normal vector b of the patch; and correcting the measurement result of the size of the damage based on the generated correction information.
15. An image processing program causing a computer to: acquire a plurality of images of a subject captured by a camera from a plurality of viewpoints with overlapping shooting ranges; detect damage on the surface of the subject from the plurality of acquired images; analyze the plurality of acquired images to generate three-dimensional point cloud data of feature points; generate a three-dimensional patch model of the subject based on the generated point cloud data; select an image corresponding to each patch of the three-dimensional patch model from among the plurality of acquired images; measure, in the selected image, the size of the damage in the area corresponding to the patch; generate correction information necessary for correcting the measurement result of the size of the damage, based on a camera vector a connecting the patch and the position of the camera when the selected image was captured and on a normal vector b of the patch; and correct the measurement result of the size of the damage based on the generated correction information.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE112022000936.5T DE112022000936T5 (de) | 2021-04-02 | 2022-03-10 | Bildverarbeitungsvorrichtung, bildverarbeitungsverfahren und bildverarbeitungsprogramm |
JP2023510797A JPWO2022209709A1 (ja) | 2021-04-02 | 2022-03-10 | |
CN202280022317.8A CN117098971A (zh) | 2021-04-02 | 2022-03-10 | 图像处理装置、图像处理方法及图像处理程序 |
US18/466,620 US20230419468A1 (en) | 2021-04-02 | 2023-09-13 | Image processing apparatus, image processing method, and image processing program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-063672 | 2021-04-02 | ||
JP2021063672 | 2021-04-02 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/466,620 Continuation US20230419468A1 (en) | 2021-04-02 | 2023-09-13 | Image processing apparatus, image processing method, and image processing program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022209709A1 true WO2022209709A1 (ja) | 2022-10-06 |
Family
ID=83456066
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/010616 WO2022209709A1 (ja) | 2021-04-02 | 2022-03-10 | 画像処理装置、画像処理方法及び画像処理プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230419468A1 (ja) |
JP (1) | JPWO2022209709A1 (ja) |
CN (1) | CN117098971A (ja) |
DE (1) | DE112022000936T5 (ja) |
WO (1) | WO2022209709A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011124965A (ja) * | 2009-12-11 | 2011-06-23 | Advas Co Ltd | 被写体寸法測定用カメラ装置 |
JP2015114954A (ja) * | 2013-12-13 | 2015-06-22 | 株式会社ジオ技術研究所 | 撮影画像解析方法 |
JP2020038227A (ja) * | 2016-06-14 | 2020-03-12 | 富士フイルム株式会社 | 画像処理方法、画像処理装置、及び画像処理プログラム |
JP2020160944A (ja) * | 2019-03-27 | 2020-10-01 | 富士通株式会社 | 点検作業支援装置、点検作業支援方法及び点検作業支援プログラム |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3304336B2 (ja) | 2000-09-13 | 2002-07-22 | 東北ポール株式会社 | 立設コンクリートポールにおける外観劣化の記録方法、及びこれを用いた外観劣化の経年変化記録システム |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022209709A1 (ja) | 2022-10-06 |
US20230419468A1 (en) | 2023-12-28 |
CN117098971A (zh) | 2023-11-21 |
DE112022000936T5 (de) | 2023-12-14 |
Legal Events

Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22779931; Country of ref document: EP; Kind code of ref document: A1
WWE | Wipo information: entry into national phase | Ref document number: 2023510797; Country of ref document: JP
WWE | Wipo information: entry into national phase | Ref document number: 202280022317.8; Country of ref document: CN
WWE | Wipo information: entry into national phase | Ref document number: 112022000936; Country of ref document: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 22779931; Country of ref document: EP; Kind code of ref document: A1