CN110634131B - A Crack Image Recognition and Modeling Method - Google Patents

A Crack Image Recognition and Modeling Method

Info

Publication number
CN110634131B
Authority
CN
China
Prior art keywords
crack
image
training
mask
network
Prior art date
Legal status
Active
Application number
CN201910807036.3A
Other languages
Chinese (zh)
Other versions
CN110634131A (en)
Inventor
章杨松
顾天纵
张宁
何元
李孟寒
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201910807036.3A
Publication of CN110634131A
Application granted
Publication of CN110634131B
Legal status: Active
Anticipated expiration

Classifications

    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/70 Denoising; Smoothing
    • G06T7/001 Industrial image inspection using an image reference approach
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30132 Masonry; Concrete

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract


The invention discloses a crack image recognition and modeling method comprising the following steps: step 1, coarsely recognize crack pictures with a deep-learning Mask-RCNN network to obtain a coarsely extracted crack map; step 2, process the coarsely extracted crack map with OpenCV to obtain the pixel coordinates of the crack's inflection points and abrupt-change points on the two-dimensional picture; step 3, match image features with the SIFT algorithm to obtain the coordinates of same-name points; step 4, construct a three-dimensional model of the crack. The invention combines the advantages of deep-learning image recognition and OpenCV image processing into a "coarse recognition to precise recognition" method that effectively improves recognition accuracy, and finally generates a three-dimensional crack model from coordinates extracted from pictures of the same crack position at different angles. This solves the problem that cracks are difficult to recognize and model because of their small size; the method is highly feasible and robust.


Description

Crack image identification and modeling method
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a crack image recognition and modeling method.
Background
Image recognition and modeling is a series of processing steps in which a computer learns from and processes existing crack images to extract image targets and reconstruct them in three dimensions. Image recognition based on neural networks is an important field of artificial intelligence. The technology offers users great convenience: identifying a picture of a diseased plant quickly reveals the disease; identifying a remote-sensing image quickly reveals the terrain type; and accurately identifying a face picture, matched against faces in an existing database, yields identity information. However, because cracks are narrow and small cracks are hard to label, existing neural-network recognition methods perform poorly on cracks and have low recognition accuracy. At present, neural networks are better suited to determining what a picture target is than to extracting accurate coordinates, and the bounding regions they identify can hardly meet the requirements of subsequent three-dimensional modeling.
Disclosure of Invention
The invention aims to provide a crack image identification and modeling method to solve the problems.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a crack image identification and modeling method comprises the following steps:
step 1, carrying out rough identification on a crack picture based on a deep learning Mask-RCNN network and obtaining a rough extracted crack picture, wherein the rough identification comprises the following steps:
step 1.1: preparing a training crack picture for training a network and a crack picture to be detected;
step 1.2: labeling the crack picture for training by using labelme:
first, import a training crack picture into labelme, divide the crack and its surroundings into a number of small rectangular blocks, and label the blocks as three classes (horizontal cracks, vertical cracks, and oblique cracks), denoted by the labels hc, vc, and oc respectively; after saving, each training crack picture yields a corresponding json annotation file;
step 1.3: and converting the json label file, wherein the process is as follows:
using the labelme json_to_dataset script function bundled with labelme, convert the json annotation file from step 1.2 into three files: a labeled mask image, a schematic of the mask image overlaid on the original image, and the set of label names;
step 1.4: put the training crack pictures into Mask-RCNN for training: put the mask images, the set of label names, and the training crack pictures generated in step 1.3 into the Mask-RCNN network for training; after training, input the crack pictures to be detected into the trained network for recognition; the network divides the potential crack regions in each picture into multiple segments and assigns each segment a confidence, i.e., the probability the network assigns to a crack existing in that segment; set a probability threshold, mark the segments whose confidence exceeds it, and extract the marked segments to obtain the coarsely extracted crack map;
step 2, process the coarsely extracted crack map with OpenCV to obtain the pixel coordinates of the crack's inflection points and abrupt-change points on the two-dimensional picture, as follows: apply Gaussian filtering and denoising to the coarsely extracted crack map with OpenCV to obtain a clean, denoised map; further binarize it and extract edges to obtain a map containing only the crack edges; extract N inflection points and abrupt-change points from the crack, where the inflection points are turning points of the crack and the abrupt-change points are points where the crack width changes abruptly; write the corresponding pixel coordinates, formatted, to a txt file, doing so for every picture of the same crack taken at a different angle, where N ≥ 3;
step 3, match image features with the SIFT algorithm to obtain the coordinates of same-name points, as follows:
use the SIFT algorithm to match pictures of the same crack taken at different angles by matching the inflection and abrupt-change points on each picture; when the points in some regions reach a preset density, solve for the coordinates of the crack's same-name points, i.e., the coordinates of the same crack position in pictures taken at different angles; after all same-name point coordinates are obtained, write them, formatted, to a txt file;
step 4, construct a three-dimensional model of the crack:
acquire the three-dimensional space coordinates of the crack's same-name points with photoscan software, so as to construct the three-dimensional model of the crack.
Further, the specific process of putting the mask images, the set of label names, and the training crack pictures into the Mask-RCNN network for training is as follows: read the three files, overlay each mask image on its training crack picture, attach the label, and feed the result into the Mask-RCNN network; set the learning rate to 0.001, the learning momentum to 0.9, and the foreground threshold to 0.3; initialize with weights pretrained on the COCO dataset; first train the head layers, which classify and localize the target, for 80 epochs, and then train the entire network for 80 epochs.
Further, N in the step 2 is 8.
Compared with the prior art, the invention has the advantages that:
the invention provides a reliable crack image identification and modeling method, which combines the advantages of deep learning image identification and OpenCV image processing technology to form an image identification method from rough identification to accurate identification, effectively improves the identification accuracy, finally extracts and generates a crack three-dimensional modeling by depending on picture coordinates of the same crack position at different angles, solves the problem that the crack is difficult to identify and model due to small size, has strong feasibility and high robustness, and can effectively save the labor cost.
Drawings
FIG. 1 is a labelme-labeled crack picture of the present invention.
FIG. 2 is a crack picture of the present invention after coarse extraction by the Mask-RCNN network.
FIG. 3 is a coarsely extracted crack map of the present invention.
FIG. 4 is a crack map of the present invention after OpenCV edge extraction.
Detailed Description
The following describes the implementation of the present invention in detail with reference to specific embodiments.
A crack image identification and modeling method comprises the following steps:
Step 1: coarsely recognize crack pictures based on the deep-learning Mask-RCNN network and obtain coarsely extracted crack maps.
Step 1.1: a training crack picture for training a network and a crack picture to be detected are prepared as follows.
A plurality of crack pictures with the same pixel size are prepared, wherein pictures with various angles are prepared for each crack, the crack pictures are divided into two parts, one part is used as a crack picture for training, and the other part is used as a crack picture to be detected, so that the identification accuracy can be effectively improved.
Step 1.2: label the training crack pictures with labelme, as follows:
First, import a training crack picture into labelme and divide the crack and its surroundings into a number of small rectangular blocks for labeling; this avoids manual errors and improves recognition accuracy. Label the small rectangular blocks as three classes, horizontal cracks, vertical cracks, and oblique cracks, denoted by the labels hc, vc, and oc respectively. After saving, each picture has a corresponding json annotation file. The labeling process is shown in FIG. 1.
Step 1.3: the json markup file is converted, and the process is as follows:
and converting the json label file obtained in the previous step into three files by using a labelme json to dataset script function carried in labelme, wherein one file is a labeled mask diagram, one file is a schematic diagram of the mask diagram and the original diagram, and the other file is a set of label names.
Step 1.4: put the training crack pictures into Mask-RCNN for training: put the mask images, the set of label names, and the training crack pictures generated in step 1.3 into the Mask-RCNN network for training. The specific process is as follows: read the three files, overlay each mask image on its training crack picture, attach the label, and feed the result into the Mask-RCNN network. Set the learning rate to 0.001, the learning momentum to 0.9, and the foreground threshold to 0.3; initialize with the official COCO-pretrained weights; first train the head layers, which classify and localize the target, for 80 epochs, and then train the entire network for 80 epochs. After training, input the crack pictures to be detected into the trained network for recognition. The network divides the potential crack regions in a picture into multiple segments and assigns each segment a confidence, i.e., the probability the network assigns to a crack existing in that segment. Set a probability threshold and mark the segments whose confidence exceeds it; the result is shown in FIG. 2, where the groups of numbers are the segment confidences. Extracting the marked segments from FIG. 2 yields the crack map shown in FIG. 3, the coarsely extracted crack map.
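The training configuration described above can be written out as a sketch, assuming the widely used matterport/Mask_RCNN implementation (the patent does not name an implementation; the class names, the `load_weights` arguments, and the prepared `train_dataset`/`val_dataset` objects are assumptions of that library, in which the `epochs` argument is cumulative, so 80 + 80 epochs appears as 80 and 160):

```python
# Sketch only: assumes the matterport/Mask_RCNN library and prepared
# train_dataset / val_dataset objects; not code from the patent itself.
from mrcnn.config import Config
from mrcnn import model as modellib

class CrackConfig(Config):
    NAME = "crack"
    NUM_CLASSES = 1 + 3                 # background + hc / vc / oc
    LEARNING_RATE = 0.001               # learning rate from the method
    LEARNING_MOMENTUM = 0.9             # learning momentum from the method
    DETECTION_MIN_CONFIDENCE = 0.3      # foreground (confidence) threshold

config = CrackConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs")

# Initialize from COCO-pretrained weights, skipping the layers whose
# shape depends on the number of classes:
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])

# First train only the head layers (classification and localization),
# then fine-tune all layers, 80 epochs each:
model.train(train_dataset, val_dataset,
            learning_rate=config.LEARNING_RATE, epochs=80, layers="heads")
model.train(train_dataset, val_dataset,
            learning_rate=config.LEARNING_RATE, epochs=160, layers="all")
```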
Step 2: process the coarsely extracted crack map with OpenCV to obtain the crack's pixel coordinates on the two-dimensional picture, as follows: apply Gaussian filtering and denoising to the coarsely extracted crack map with OpenCV (a python third-party library) to obtain a clean, denoised image; further binarize it and extract edges to obtain an image containing only the crack edges, shown in FIG. 4; extract 3 to 15 inflection points and abrupt-change points from the crack, where the inflection points are turning points of the crack and the abrupt-change points are points where the crack width changes abruptly; write the corresponding pixel coordinates, formatted, to a txt file for later use. Format and output the inflection-point and abrupt-change-point pixel coordinates of every picture of the same crack at each angle to txt in the same way.
Step 3: match image features with the SIFT algorithm to obtain the coordinates of same-name points, as follows:
Use the SIFT algorithm to match pictures of the same crack taken at different angles, matching the inflection and abrupt-change points on each picture. When the points in some regions reach a preset density, solve for the coordinates of the crack's same-name points using the principle of spatial perspective transformation: taking the crack picture from one angle as the starting point, transform its viewing angle into that of a crack picture from another angle, then match the transformed picture against that other picture to find the point corresponding to a given inflection or abrupt-change point; that point's coordinates are the coordinates of one same-name point. The same-name point coordinates are the coordinates of the same crack position in pictures taken at different angles: for example, with 10 pictures of one crack from different angles, a given crack position has one coordinate point in each picture, and those 10 coordinate points are that position's same-name point coordinates. After all same-name point coordinates are obtained, format and output them to a txt file.
Step 4: construct a three-dimensional model of the crack.
At this point, the coordinates of the crack's same-name points on pictures from different angles have been obtained. Photoscan software is used to obtain the three-dimensional space coordinates of the same-name points, from which the three-dimensional model of the crack can be constructed.
The foregoing illustrates and describes the principles, principal features, and advantages of the present invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above, which merely illustrate its principle; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (3)

1. A crack image recognition and modeling method, characterized by comprising the following steps:
Step 1: coarsely recognize crack pictures with a deep-learning Mask-RCNN network to obtain a coarsely extracted crack map, comprising:
Step 1.1: prepare training crack pictures for training the network and crack pictures to be detected;
Step 1.2: label the training crack pictures with labelme: first import a training crack picture into labelme, divide the crack and its surroundings into a number of small rectangular blocks, and label the blocks as three classes (horizontal cracks, vertical cracks, and oblique cracks), denoted by the labels hc, vc, and oc respectively; after saving, each training crack picture yields a corresponding json annotation file;
Step 1.3: convert the json annotation file, as follows: using the labelme json_to_dataset script function bundled with labelme, convert the json annotation file obtained in step 1.2 into three files: a labeled mask image, a schematic of the mask image overlaid on the original image, and the set of label names;
Step 1.4: put the training crack pictures into Mask-RCNN for training: put the mask images, the set of label names, and the training crack pictures generated in step 1.3 into the Mask-RCNN network for training; after training, input the crack pictures to be detected into the trained network for recognition; the network divides the potential crack regions in each picture into multiple segments and assigns each segment a confidence, i.e., the probability the network assigns to a crack existing in that segment; set a probability threshold, mark the segments whose confidence exceeds it, and extract the marked segments to obtain the coarsely extracted crack map;
Step 2: process the coarsely extracted crack map with OpenCV to obtain the pixel coordinates of the crack's inflection points and abrupt-change points on the two-dimensional picture, specifically: apply Gaussian filtering and denoising to the coarsely extracted crack map with OpenCV to obtain a clean, denoised map; further binarize it and extract edges to obtain a map containing only the crack edges; extract N inflection points and abrupt-change points from the crack, where the inflection points are turning points of the crack and the abrupt-change points are points where the crack width changes abruptly; write the corresponding pixel coordinates, formatted, to a txt file, doing so for every picture of the same crack taken at a different angle, where N ≥ 3;
Step 3: match image features with the SIFT algorithm to obtain the coordinates of same-name points, as follows: use the SIFT algorithm to match pictures of the same crack taken at different angles by matching the inflection and abrupt-change points on each picture; when the points in some regions reach a predetermined density, solve for the coordinates of the crack's same-name points, i.e., the coordinates of the same crack position in pictures taken at different angles; after all same-name point coordinates are obtained, format and output them to a txt file;
Step 4: construct a three-dimensional model of the crack: use photoscan software to obtain the three-dimensional space coordinates of the crack's same-name points, thereby constructing the three-dimensional model of the crack.
2. The method according to claim 1, characterized in that the specific process of putting the mask images, the set of label names, and the training crack pictures into the Mask-RCNN network for training is: read the three files, overlay each mask image on its training crack picture, attach the label, and feed the result into the Mask-RCNN network, where the learning rate is set to 0.001, the learning momentum to 0.9, and the foreground threshold to 0.3; initialize with weights pretrained on the COCO dataset; train the head layers, which classify and localize the target, for 80 epochs, and then train the entire network for 80 epochs.
3. The method according to claim 1, characterized in that N in step 2 is 8.
CN201910807036.3A 2019-08-29 2019-08-29 A Crack Image Recognition and Modeling Method Active CN110634131B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910807036.3A CN110634131B (en) 2019-08-29 2019-08-29 A Crack Image Recognition and Modeling Method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910807036.3A CN110634131B (en) 2019-08-29 2019-08-29 A Crack Image Recognition and Modeling Method

Publications (2)

Publication Number Publication Date
CN110634131A CN110634131A (en) 2019-12-31
CN110634131B true CN110634131B (en) 2022-03-22

Family

ID=68969393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910807036.3A Active CN110634131B (en) 2019-08-29 2019-08-29 A Crack Image Recognition and Modeling Method

Country Status (1)

Country Link
CN (1) CN110634131B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111735434A (en) * 2020-03-25 2020-10-02 南京理工大学 Method for measuring fracture development and change based on three-dimensional space angle
CN111597377B (en) * 2020-04-08 2021-05-11 广东省国土资源测绘院 Deep learning technology-based field investigation method and system
CN111337496A (en) * 2020-04-13 2020-06-26 黑龙江北草堂中药材有限责任公司 Chinese herbal medicine picking device and picking method
CN113838005B (en) * 2021-09-01 2023-11-21 山东大学 Intelligent identification and three-dimensional reconstruction method and system for rock mass cracks based on dimension conversion
CN114612429B (en) * 2022-03-10 2024-06-11 北京工业大学 Die forging crack identification positioning and improvement method based on binocular vision

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN106651893A (en) * 2016-12-23 2017-05-10 贵州电网有限责任公司电力科学研究院 Edge detection-based wall body crack identification method
CN106910186B (en) * 2017-01-13 2019-12-27 陕西师范大学 Bridge crack detection and positioning method based on CNN deep learning
CN107704857B (en) * 2017-09-25 2020-07-24 北京邮电大学 An end-to-end lightweight license plate recognition method and device
CN109767423B (en) * 2018-12-11 2019-12-10 西南交通大学 A Crack Detection Method for Asphalt Pavement Image

Also Published As

Publication number Publication date
CN110634131A (en) 2019-12-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant