CN110634131B - Crack image identification and modeling method - Google Patents
- Publication number
- CN110634131B (application CN201910807036.3A)
- Authority
- CN
- China
- Prior art keywords
- crack
- picture
- training
- points
- mask
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T5/20 — Image enhancement or restoration using local operators
- G06T5/70 — Denoising; smoothing
- G06T7/001 — Image analysis; industrial image inspection using an image reference approach
- G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30132 — Masonry; concrete
Abstract
The invention discloses a crack image identification and modeling method comprising the following steps. Step 1: carry out rough identification of a crack picture based on a deep-learning Mask-RCNN network and obtain a roughly extracted crack picture. Step 2: process the roughly extracted crack picture with OpenCV to obtain the pixel coordinates of the crack's inflection points and mutation points on the two-dimensional picture. Step 3: match image features with the SIFT algorithm to obtain the coordinates of same-name (homonymous) points. Step 4: construct a three-dimensional model of the crack. The method combines the strengths of deep-learning image recognition and OpenCV image processing into a coarse-to-fine identification pipeline, effectively improving identification accuracy. It then builds a three-dimensional crack model from the picture coordinates of the same crack position seen from different angles, solving the difficulty of identifying and modeling cracks that are small in size; the method is highly feasible and robust.
Description
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a crack image recognition and modeling method.
Background
Image recognition and modeling is a series of processing steps in which a computer learns from and processes existing crack images to extract image targets and reconstruct them in three dimensions. Neural-network-based image recognition is an important field of artificial intelligence and offers great convenience: identifying a picture of a diseased plant quickly reveals the plant disease; identifying a remote-sensing picture of a site quickly reveals the site type; and accurately identifying a face picture and matching it against faces in an existing database yields identity information. However, because cracks are narrow and small cracks are difficult to mark, existing neural-network identification methods identify cracks poorly and with low accuracy. At present, neural networks are better suited to determining the nature of a picture target than to extracting accurate coordinates, and the bounding regions they identify can hardly meet the requirements of subsequent three-dimensional modeling.
Disclosure of Invention
The invention aims to provide a crack image identification and modeling method to solve the problems.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a crack image identification and modeling method comprises the following steps:
step 1, carrying out rough identification on a crack picture based on a deep learning Mask-RCNN network and obtaining a rough extracted crack picture, wherein the rough identification comprises the following steps:
step 1.1: preparing a training crack picture for training a network and a crack picture to be detected;
step 1.2: labeling the crack picture for training by using labelme:
first, the crack picture for training is imported into labelme, and the crack and its surroundings are divided into a number of small rectangular blocks for labeling; the small rectangular blocks are labeled as three types, horizontal cracks, vertical cracks and inclined cracks, represented by the labels hc, vc and oc respectively; after saving, each crack picture for training yields a corresponding json label file;
step 1.3: convert the json label file, as follows:
the json label file from step 1.2 is converted, using the labelme_json_to_dataset script bundled with labelme, into three files: a labeled mask drawing, a schematic diagram overlaying the mask drawing on the original picture, and the set of label names;
step 1.4: put the crack picture for training into Mask-RCNN for training: feed the mask drawing, the set of label names and the crack picture for training generated in step 1.3 into the Mask-RCNN network for training; after network training finishes, input the crack picture to be detected into the trained network for recognition; the network divides any potential crack region in the picture into multiple segments and assigns each segment a confidence, i.e. the probability the network assigns to a crack existing in that segment's region; set a probability threshold, mark the segments whose confidence exceeds it, and extract those marked segments to obtain the roughly extracted crack picture;
step 2, obtain the pixel coordinates of the crack's inflection points and mutation points on the two-dimensional picture after OpenCV image processing of the roughly extracted crack picture, as follows: using OpenCV (the Open Source Computer Vision library), apply Gaussian filtering to denoise the roughly extracted crack picture; then binarize it and extract edges to obtain a picture containing only the crack edges; extract N inflection points and mutation points from the crack, where the inflection points are turning points of the crack and the mutation points are points where the crack width changes abruptly between coarse and fine; format the corresponding inflection-point and mutation-point pixel coordinates and output them to a txt file, doing so for every picture of the same crack taken from a different angle, with N ≥ 3;
step 3, carrying out image feature matching by using an SIFT algorithm to obtain coordinates of the same-name points, wherein the process is as follows:
match pictures of the same crack taken from different angles using the SIFT algorithm, matching the inflection points and mutation points on each picture; when the inflection points and mutation points in some regions reach a preset density, solve for the coordinates of the crack's same-name (homonymous) points, i.e. the coordinates of the same crack position in the pictures taken from different angles; after the coordinates of all same-name points are obtained, format the coordinate information and output it to a txt file;
step 4, constructing a three-dimensional model of the crack;
the three-dimensional space coordinates of the crack's same-name points are obtained with PhotoScan software, from which the three-dimensional model of the crack is constructed.
Further, the specific process of putting the mask drawing, the set of label names and the crack picture for training into the Mask-RCNN network for training is as follows: read the three files, overlay the mask drawing on the crack picture for training, add the label, and feed the result into the Mask-RCNN network; set the learning rate to 0.001, the learning momentum to 0.9 and the foreground threshold to 0.3, and initialize with the official COCO pretrained weights; train the head layers, which classify and locate the target object, for 80 epochs, and then train the whole network for 80 epochs.
Further, N in the step 2 is 8.
Compared with the prior art, the invention has the advantages that:
the invention provides a reliable crack image identification and modeling method, which combines the advantages of deep learning image identification and OpenCV image processing technology to form an image identification method from rough identification to accurate identification, effectively improves the identification accuracy, finally extracts and generates a crack three-dimensional modeling by depending on picture coordinates of the same crack position at different angles, solves the problem that the crack is difficult to identify and model due to small size, has strong feasibility and high robustness, and can effectively save the labor cost.
Drawings
FIG. 1 is a crack picture of the present invention labeled with labelme.
FIG. 2 is a crack picture of the present invention after rough identification by the Mask-RCNN network.
FIG. 3 is a roughly extracted crack picture of the present invention.
FIG. 4 is a crack picture of the present invention after OpenCV edge extraction.
Detailed Description
The following describes the implementation of the present invention in detail with reference to specific embodiments.
A crack image identification and modeling method comprises the following steps:
Step 1: carry out rough identification of crack pictures based on the deep-learning Mask-RCNN network and obtain roughly extracted crack pictures.
Step 1.1: prepare crack pictures for training the network and crack pictures to be detected, as follows.
A number of crack pictures with the same pixel size are prepared, including pictures taken from several angles of each crack. The crack pictures are divided into two parts: one part serves as the crack pictures for training and the other as the crack pictures to be detected. This preparation effectively improves identification accuracy.
Step 1.2: labeling the crack picture for training by using labelme, wherein the process is as follows:
First, the crack picture for training is imported into labelme, and the crack and its surroundings are divided into a number of small rectangular blocks for labeling; this avoids manual errors and increases identification accuracy. The small rectangular blocks are labeled as three types, horizontal cracks, vertical cracks and inclined cracks, represented by the labels hc, vc and oc respectively. After saving, each picture has a corresponding json label file. The labeling process is shown in FIG. 1.
Step 1.3: the json markup file is converted, and the process is as follows:
The json label file obtained in the previous step is converted, using the labelme_json_to_dataset script bundled with labelme, into three files: one is a labeled mask drawing, one is a schematic diagram overlaying the mask drawing on the original picture, and one is the set of label names.
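The conversion in step 1.3 can be scripted. A minimal sketch, assuming labelme's bundled command-line tool `labelme_json_to_dataset` is installed and on PATH; the directory names here are hypothetical:

```python
# Batch-build conversion commands for labelme json label files.
# Assumption: the `labelme_json_to_dataset` CLI from labelme is available;
# this sketch only constructs the commands, one per json file.
from pathlib import Path

def conversion_commands(json_dir, out_root):
    """Build one labelme_json_to_dataset command per json label file."""
    cmds = []
    for jf in sorted(Path(json_dir).glob("*.json")):
        out_dir = Path(out_root) / jf.stem   # one output folder per picture
        cmds.append(["labelme_json_to_dataset", str(jf), "-o", str(out_dir)])
    return cmds
```

Each output folder then typically contains the labeled mask drawing (label.png), the mask-plus-original schematic (label_viz.png) and the label names (label_names.txt); the commands can be run with `subprocess.run`.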
Step 1.4: put the crack picture for training into Mask-RCNN for training. Feed the mask drawing, the set of label names and the crack picture for training generated in step 1.3 into the Mask-RCNN network, as follows: read the three files, overlay the mask drawing on the crack picture for training, add the label, and feed the result into the Mask-RCNN network. Set the learning rate to 0.001, the learning momentum to 0.9 and the foreground threshold to 0.3, and initialize with the official COCO pretrained weights. Train the head layers, which classify and locate the target object, for 80 epochs, then train the whole network for 80 epochs. After network training finishes, input the crack picture to be detected into the trained network for recognition: the network divides any potential crack region in the picture into multiple segments and assigns each a confidence, i.e. the probability the network assigns to a crack existing in that segment's region. Set a probability threshold and mark the segments whose confidence exceeds it; the result is shown in FIG. 2, where the groups of numbers are the confidences of the segments. Extracting the marked segments of FIG. 2 yields the roughly extracted crack picture shown in FIG. 3.
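The hyperparameters of step 1.4 can be collected in one configuration object. A minimal standalone sketch in the style of the Matterport Mask R-CNN `Config` class; the attribute names follow that library's conventions, but no Mask-RCNN dependency is assumed here, and "80 times" is read as 80 epochs:

```python
# Training hyperparameters from the description, Matterport-Config style.
class CrackConfig:
    NAME = "crack"
    NUM_CLASSES = 1 + 3             # background + hc, vc, oc
    LEARNING_RATE = 0.001
    LEARNING_MOMENTUM = 0.9
    DETECTION_MIN_CONFIDENCE = 0.3  # the foreground threshold in the text
    HEAD_EPOCHS = 80                # classification/locating head layers first
    ALL_EPOCHS = 80                 # then every layer

def training_schedule(cfg):
    """Two-stage schedule: train the heads, then the whole network."""
    return [("heads", cfg.HEAD_EPOCHS), ("all", cfg.ALL_EPOCHS)]
```

With the Matterport library one would initialize from the COCO weights and call `model.train` once per stage of this schedule.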
Step 2: obtain the pixel coordinates of the crack on the two-dimensional picture after OpenCV image processing of the roughly extracted crack picture, as follows. Using OpenCV (a python third-party library), apply Gaussian filtering to denoise the roughly extracted crack picture; then binarize it and extract edges to obtain a picture containing only the crack edges, as shown in FIG. 4. Extract 3-15 inflection points and mutation points from the crack, where the inflection points are turning points of the crack and the mutation points are points where the crack width changes abruptly between coarse and fine. Format the corresponding inflection-point and mutation-point pixel coordinates and output them to a txt file for later use, doing so for every picture of the same crack taken from a different angle.
Step 3, carrying out image feature matching by using an SIFT algorithm to obtain coordinates of the same-name points, wherein the process is as follows:
Match pictures of the same crack taken from different angles using the SIFT algorithm, matching the inflection points and mutation points on each picture. When the inflection points and mutation points in some regions reach a preset density, solve for the coordinates of the crack's same-name (homonymous) points using the principle of spatial perspective transformation: taking the crack picture from one angle as the starting point, transform its viewing angle to that of a crack picture from another angle, then match the transformed picture against that picture to find the point corresponding to a given inflection point or mutation point; that point's coordinates are the coordinates of one of the same-name points. The coordinates of a crack's same-name points are the coordinates of the same crack position in the pictures from different angles; for example, with 10 pictures of one crack from different angles, one crack position has a corresponding coordinate point in each picture, and those 10 coordinate points are the same-name point coordinates of that crack position. After the coordinates of all crack same-name points are obtained, format the coordinate information and output it to a txt file.
And 4, constructing a three-dimensional model of the crack.
At this point the coordinates of the crack's same-name points on pictures from different angles have been obtained. The three-dimensional space coordinates of the same-name points are then obtained with PhotoScan software, from which the three-dimensional model of the crack can be constructed.
The foregoing illustrates and describes the principles, general features and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are set forth in the specification only to illustrate its principle; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.
Claims (3)
1. A crack image identification and modeling method is characterized by comprising the following steps:
step 1, carrying out rough identification on a crack picture based on a deep learning Mask-RCNN network and obtaining a rough extracted crack picture, wherein the rough identification comprises the following steps:
step 1.1: preparing a training crack picture for training a network and a crack picture to be detected;
step 1.2: labeling the crack picture for training by using labelme:
first, the crack picture for training is imported into labelme, and the crack and its surroundings are divided into a number of small rectangular blocks for labeling; the small rectangular blocks are labeled as three types, horizontal cracks, vertical cracks and inclined cracks, represented by the labels hc, vc and oc respectively; after saving, each crack picture for training yields a corresponding json label file;
step 1.3: convert the json label file, as follows:
the json label file from step 1.2 is converted, using the labelme_json_to_dataset script bundled with labelme, into three files: a labeled mask drawing, a schematic diagram overlaying the mask drawing on the original picture, and the set of label names;
step 1.4: put the crack picture for training into Mask-RCNN for training: feed the mask drawing, the set of label names and the crack picture for training generated in step 1.3 into the Mask-RCNN network for training; after network training finishes, input the crack picture to be detected into the trained network for recognition; the network divides any potential crack region in the picture into multiple segments and assigns each segment a confidence, i.e. the probability the network assigns to a crack existing in that segment's region; set a probability threshold, mark the segments whose confidence exceeds it, and extract those marked segments to obtain the roughly extracted crack picture;
step 2, obtain the pixel coordinates of the crack's inflection points and mutation points on the two-dimensional picture after OpenCV image processing of the roughly extracted crack picture, as follows: using OpenCV (the Open Source Computer Vision library), apply Gaussian filtering to denoise the roughly extracted crack picture; then binarize it and extract edges to obtain a picture containing only the crack edges; extract N inflection points and mutation points from the crack, where the inflection points are turning points of the crack and the mutation points are points where the crack width changes abruptly between coarse and fine; format the corresponding inflection-point and mutation-point pixel coordinates and output them to a txt file, doing so for every picture of the same crack taken from a different angle, with N ≥ 3;
step 3, carrying out image feature matching by using an SIFT algorithm to obtain coordinates of the same-name points, wherein the process is as follows:
match pictures of the same crack taken from different angles using the SIFT algorithm, matching the inflection points and mutation points on each picture; when the inflection points and mutation points in some regions reach a preset density, solve for the coordinates of the crack's same-name (homonymous) points, i.e. the coordinates of the same crack position in the pictures taken from different angles; after the coordinates of all same-name points are obtained, format the coordinate information and output it to a txt file;
step 4, constructing a three-dimensional model of the crack;
the three-dimensional space coordinates of the crack's same-name points are obtained with PhotoScan software, from which the three-dimensional model of the crack is constructed.
2. The method according to claim 1, wherein the specific process of putting the mask drawing, the set of label names and the crack picture for training into the Mask-RCNN network for training comprises: reading the three files, overlaying the mask drawing on the crack picture for training, adding the label, and feeding the result into the Mask-RCNN network; setting the learning rate to 0.001, the learning momentum to 0.9 and the foreground threshold to 0.3; initializing with the official COCO pretrained weights; and training the head layers, which classify and locate the target object, for 80 epochs, then training the whole network for 80 epochs.
3. The method of claim 1, wherein N in step 2 is 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910807036.3A CN110634131B (en) | 2019-08-29 | 2019-08-29 | Crack image identification and modeling method |
Publications (2)
Publication Number | Publication Date
---|---
CN110634131A | 2019-12-31
CN110634131B | 2022-03-22
Family
ID=68969393
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910807036.3A Active CN110634131B (en) | 2019-08-29 | 2019-08-29 | Crack image identification and modeling method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110634131B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111735434A (en) * | 2020-03-25 | 2020-10-02 | 南京理工大学 | Method for measuring crack development change based on three-dimensional space angle |
CN111597377B (en) * | 2020-04-08 | 2021-05-11 | 广东省国土资源测绘院 | Deep learning technology-based field investigation method and system |
CN111337496A (en) * | 2020-04-13 | 2020-06-26 | 黑龙江北草堂中药材有限责任公司 | Chinese herbal medicine picking device and picking method |
CN113838005B (en) * | 2021-09-01 | 2023-11-21 | 山东大学 | Intelligent identification and three-dimensional reconstruction method and system for rock mass fracture based on dimension conversion |
CN114612429B (en) * | 2022-03-10 | 2024-06-11 | 北京工业大学 | Die forging crack identification positioning and improvement method based on binocular vision |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106651893A (en) * | 2016-12-23 | 2017-05-10 | 贵州电网有限责任公司电力科学研究院 | Edge detection-based wall body crack identification method |
CN106910186B (en) * | 2017-01-13 | 2019-12-27 | 陕西师范大学 | Bridge crack detection and positioning method based on CNN deep learning |
CN107704857B (en) * | 2017-09-25 | 2020-07-24 | 北京邮电大学 | End-to-end lightweight license plate recognition method and device |
CN109767423B (en) * | 2018-12-11 | 2019-12-10 | 西南交通大学 | Crack detection method for asphalt pavement image |
- 2019-08-29: CN application CN201910807036.3A filed, issued as patent CN110634131B (status: Active)
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |