CN110634131A - Crack image identification and modeling method - Google Patents


Info

Publication number
CN110634131A
CN110634131A (application CN201910807036.3A)
Authority
CN
China
Prior art keywords
crack
picture
training
points
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910807036.3A
Other languages
Chinese (zh)
Other versions
CN110634131B (en)
Inventor
章杨松
顾天纵
张宁
何元
李孟寒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Tech University filed Critical Nanjing Tech University
Priority to CN201910807036.3A
Publication of CN110634131A
Application granted
Publication of CN110634131B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30132 Masonry; Concrete

Abstract

The invention discloses a crack image identification and modeling method comprising the following steps: step 1, roughly identifying a crack picture with a deep-learning Mask R-CNN network to obtain a roughly extracted crack picture; step 2, processing the roughly extracted crack picture with OpenCV to obtain the pixel coordinates of the crack's inflection points and width-mutation points on the two-dimensional picture; step 3, matching image features with the SIFT algorithm to obtain the coordinates of homologous (same-name) points; and step 4, constructing a three-dimensional model of the crack. The method combines the strengths of deep-learning image recognition and OpenCV image processing into a coarse-to-fine identification pipeline, which effectively improves recognition accuracy. Finally, a three-dimensional crack model is generated from the picture coordinates of the same crack position seen from different angles. This solves the difficulty of identifying and modeling cracks that are small in size, and the method is highly feasible and robust.

Description

Crack image identification and modeling method
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a crack image recognition and modeling method.
Background
Image recognition and modeling refers to a series of processes in which a computer learns from and processes existing crack images to extract image targets and reconstruct them in three dimensions. Image recognition based on neural networks is an important field of artificial intelligence and offers users great convenience: identifying a picture of a diseased plant quickly reveals the disease; identifying a remote-sensing image quickly determines the land type; and accurately identifying a face image and matching it against an existing database yields identity information. However, because cracks are narrow and small cracks are difficult to label, existing neural-network identification methods identify cracks poorly and with low accuracy. At present, neural networks are better suited to determining the nature of a picture target than to extracting precise coordinates, and the bounding regions they identify are difficult to use for subsequent three-dimensional modeling.
Disclosure of Invention
The invention aims to provide a crack image identification and modeling method that solves the above problems.
To achieve this aim, the invention adopts the following technical scheme:
a crack image identification and modeling method comprises the following steps:
step 1, roughly identifying the crack pictures with a deep-learning Mask R-CNN network and obtaining roughly extracted crack pictures, where the rough identification comprises the following steps:
step 1.1: preparing training crack pictures for training the network and crack pictures to be detected;
step 1.2: labeling the training crack pictures with labelme:
first, the training crack pictures are imported into labelme, and each crack together with its surroundings is divided into a number of small rectangular blocks, which are labeled as one of three types: horizontal cracks, vertical cracks, and oblique cracks, denoted by the labels hc, vc, and oc respectively; after saving, each training crack picture yields a corresponding json label file;
step 1.3: converting the json label file, as follows:
the json label file from step 1.2 is converted into three files using the labelme_json_to_dataset script bundled with labelme: a labeled mask image, an overlay of the mask image on the original image, and the set of label names;
step 1.4: putting the training crack pictures into Mask R-CNN for training: the mask images, the set of label names, and the training crack pictures generated in step 1.3 are fed into the Mask R-CNN network for training; after training, the crack pictures to be detected are input into the trained network for recognition; the network divides each potential crack region into multiple segments and assigns each segment a confidence, i.e. the probability that the network believes the segment contains a crack; a probability threshold is set, the segments whose confidence exceeds it are marked, and the marked segments are extracted to give the roughly extracted crack picture;
step 2, obtaining the pixel coordinates of the crack's inflection points and mutation points on the two-dimensional picture by processing the roughly extracted crack picture with OpenCV (the Open Source Computer Vision Library), as follows: the roughly extracted crack picture is denoised by Gaussian filtering in OpenCV to obtain a clean image, which is then binarized and edge-extracted to obtain an image containing only the crack edges; N inflection points and mutation points (N ≥ 3) are extracted from the crack, where inflection points are points at which the crack changes direction and mutation points are points at which the crack width changes abruptly; the corresponding pixel coordinates are written, formatted, to a txt file, and this is done for every picture of the same crack taken from a different angle;
step 3, matching image features with the SIFT algorithm to obtain the coordinates of homologous points, as follows:
pictures of the same crack taken from different angles are matched with the SIFT algorithm, the inflection points and mutation points on each picture are matched, and once the points in some region reach a preset density, the coordinates of the crack's homologous points are solved; homologous (same-name) points are the image points of the same physical crack position in pictures from different angles; after all homologous-point coordinates are obtained, they are written, formatted, to a txt file;
step 4, constructing the three-dimensional model of the crack:
the three-dimensional space coordinates of the crack's homologous points are obtained with PhotoScan software, from which the three-dimensional crack model is constructed.
Further, the specific process of putting the mask images, the set of label names, and the training crack pictures into the Mask R-CNN network for training is as follows: the three files are read, each mask image is overlaid on its training crack picture and labeled, and the result is fed into the Mask R-CNN network; the learning rate is set to 0.001, the learning momentum to 0.9, and the foreground threshold to 0.3; the network is initialized with weights pre-trained on the COCO dataset, the head layers responsible for classifying and localizing the target object are trained for 80 epochs, and then the entire network is trained for a further 80 epochs.
Further, N in step 2 is 8.
Compared with the prior art, the invention has the following advantages:
the invention provides a reliable crack image identification and modeling method. It combines the strengths of deep-learning image recognition and OpenCV image processing into a coarse-to-fine identification pipeline, effectively improving recognition accuracy, and finally generates a three-dimensional crack model from the picture coordinates of the same crack position seen from different angles. It solves the difficulty of identifying and modeling cracks that are small in size, is highly feasible and robust, and can effectively save labor costs.
Drawings
FIG. 1 is a crack picture labeled with labelme according to the present invention.
FIG. 2 is a crack picture after rough extraction by the Mask R-CNN network according to the present invention.
FIG. 3 is a roughly extracted crack picture according to the present invention.
FIG. 4 is a crack picture after OpenCV edge extraction according to the present invention.
Detailed Description
The following describes the implementation of the present invention in detail with reference to specific embodiments.
A crack image identification and modeling method comprises the following steps:
Step 1: carry out rough identification on the crack pictures with the deep-learning Mask R-CNN network and obtain roughly extracted crack pictures.
Step 1.1: a training crack picture for training a network and a crack picture to be detected are prepared as follows.
A plurality of crack pictures with the same pixel size are prepared, wherein pictures with various angles are prepared for each crack, the crack pictures are divided into two parts, one part is used as a crack picture for training, and the other part is used as a crack picture to be detected, so that the identification accuracy can be effectively improved.
Step 1.2: labeling the crack picture for training by using labelme, wherein the process is as follows:
firstly, a crack picture for training is led into labelme, and the crack and the periphery of the crack are divided into a plurality of small rectangular blocks for marking, so that manual errors are avoided. Meanwhile, the identification accuracy is increased, and the rectangular small blocks are marked as three types: horizontal cracks, vertical cracks and inclined cracks are respectively represented by three labels of hc, vc and oc. And after storage, each picture can have a corresponding json label file. The labeling process is as in fig. 1.
Step 1.3: the json markup file is converted, and the process is as follows:
and converting the json label file obtained in the previous step into three files by using a labelme json to dataset script function carried in labelme, wherein one file is a labeled mask diagram, one file is a schematic diagram of the mask diagram and the original diagram, and the other file is a set of label names.
Step 1.4: putting the crack picture for training into Mask-RCNN for training: putting the Mask picture, the set of label names and the crack picture for training generated in the step 1.3 into a Mask-RCNN network for training, wherein the specific process is as follows: and reading the three files respectively, covering the Mask pattern on the crack picture for training, adding a label, and sending the label into a Mask-RCNN network. The method comprises the steps that the learning rate is set to be 0.001, the learning momentum is 0.9, the foreground threshold is set to be 0.3, training weight of an official COCO training set is utilized to conduct initialization, a training network is responsible for classifying and positioning headlayer of a target object for 80 times, then all networks are trained for 80 times, a crack picture to be detected is input into the trained network to be recognized after network training is completed, the network divides a potential crack area in the crack picture to be detected into multiple sections and gives confidence of each section, the confidence indicates the probability that the network considers that cracks exist in the section area, a probability threshold is set, the section with the confidence higher than the probability threshold is marked, the result is shown in figure 2, and multiple groups of data in figure 2 are the confidence of each section. The region segment marked with confidence in fig. 2 is extracted, and the extracted fracture map is shown in fig. 3, which is a crude extracted fracture map.
Step 2: obtain the pixel coordinates of the crack's inflection points and mutation points on the two-dimensional picture by processing the roughly extracted crack picture with OpenCV, as follows: the roughly extracted crack picture is denoised by Gaussian filtering with the Python third-party library OpenCV to obtain a clean image, which is then binarized and edge-extracted to obtain an image containing only the crack edges, as shown in FIG. 4. Between 3 and 15 inflection points and mutation points are extracted from the crack, where inflection points are points at which the crack changes direction and mutation points are points at which the crack width changes abruptly. The corresponding pixel coordinates are written, formatted, to a txt file for later use; this is done for every picture of the same crack taken from a different angle.
Step 3: match image features with the SIFT algorithm to obtain the coordinates of homologous points, as follows:
Pictures of the same crack taken from different angles are matched with the SIFT algorithm, and the inflection points and mutation points on each picture are matched. When the points in some region reach a preset density, the coordinates of the crack's homologous points are solved by the principle of spatial perspective transformation: taking the crack picture from one angle as the starting point, its viewing angle is transformed by perspective transformation to the angle of another crack picture, and the transformed picture is matched against that picture so that the point corresponding to a given inflection or mutation point can be located in it; that point's coordinates are one of the homologous-point coordinates. Homologous (same-name) points are the image points of the same physical crack position in pictures from different angles; for example, with 10 pictures of one crack taken from different angles, each crack position has one coordinate point in each picture, and those 10 coordinate points are the homologous-point coordinates of that position. After the coordinates of all crack homologous points are obtained, the coordinate information is written, formatted, to a txt file.
Step 4: construct the three-dimensional model of the crack.
At this point the coordinates of each crack homologous point on the pictures from different angles are available. PhotoScan software is used to obtain the three-dimensional space coordinates of the homologous points, from which the three-dimensional model of the crack is constructed.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principles. Various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. That scope is defined by the appended claims and their equivalents.

Claims (3)

1. A crack image identification and modeling method, characterized by comprising the following steps:
step 1, roughly identifying the crack pictures with a deep-learning Mask R-CNN network and obtaining roughly extracted crack pictures, where the rough identification comprises the following steps:
step 1.1: preparing training crack pictures for training the network and crack pictures to be detected;
step 1.2: labeling the training crack pictures with labelme:
first, the training crack pictures are imported into labelme, and each crack together with its surroundings is divided into a number of small rectangular blocks, which are labeled as one of three types: horizontal cracks, vertical cracks, and oblique cracks, denoted by the labels hc, vc, and oc respectively; after saving, each training crack picture yields a corresponding json label file;
step 1.3: converting the json label file, as follows:
the json label file obtained in step 1.2 is converted into three files using the labelme_json_to_dataset script bundled with labelme: a labeled mask image, an overlay of the mask image on the original image, and the set of label names;
step 1.4: putting the training crack pictures into Mask R-CNN for training: the mask images, the set of label names, and the training crack pictures generated in step 1.3 are fed into the Mask R-CNN network for training; after training, the crack pictures to be detected are input into the trained network for recognition; the network divides each potential crack region into multiple segments and assigns each segment a confidence, i.e. the probability that the network believes the segment contains a crack; a probability threshold is set, the segments whose confidence exceeds it are marked, and the marked segments are extracted to give the roughly extracted crack picture;
step 2, obtaining the pixel coordinates of the crack's inflection points and mutation points on the two-dimensional picture by processing the roughly extracted crack picture with OpenCV (the Open Source Computer Vision Library), as follows: the roughly extracted crack picture is denoised by Gaussian filtering in OpenCV to obtain a clean image, which is then binarized and edge-extracted to obtain an image containing only the crack edges; N inflection points and mutation points (N ≥ 3) are extracted from the crack, where inflection points are points at which the crack changes direction and mutation points are points at which the crack width changes abruptly; the corresponding pixel coordinates are written, formatted, to a txt file, and this is done for every picture of the same crack taken from a different angle;
step 3, matching image features with the SIFT algorithm to obtain the coordinates of homologous points, as follows:
pictures of the same crack taken from different angles are matched with the SIFT algorithm, the inflection points and mutation points on each picture are matched, and once the points in some region reach a preset density, the coordinates of the crack's homologous points are solved; homologous (same-name) points are the image points of the same physical crack position in pictures from different angles; after all homologous-point coordinates are obtained, they are written, formatted, to a txt file;
step 4, constructing the three-dimensional model of the crack:
the three-dimensional space coordinates of the crack's homologous points are obtained with PhotoScan software, from which the three-dimensional crack model is constructed.
2. The method according to claim 1, characterized in that the specific process of putting the mask images, the set of label names, and the training crack pictures into the Mask R-CNN network for training is as follows: the three files are read, each mask image is overlaid on its training crack picture and labeled, and the result is fed into the Mask R-CNN network; the learning rate is set to 0.001, the learning momentum to 0.9, and the foreground threshold to 0.3; the network is initialized with weights pre-trained on the COCO dataset, the head layers responsible for classifying and localizing the target object are trained for 80 epochs, and then the entire network is trained for a further 80 epochs.
3. The method of claim 1, wherein N in step 2 is 8.
CN201910807036.3A (priority date 2019-08-29, filing date 2019-08-29) Crack image identification and modeling method, Active, granted as CN110634131B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910807036.3A CN110634131B (en) 2019-08-29 2019-08-29 Crack image identification and modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910807036.3A CN110634131B (en) 2019-08-29 2019-08-29 Crack image identification and modeling method

Publications (2)

Publication Number Publication Date
CN110634131A true CN110634131A (en) 2019-12-31
CN110634131B CN110634131B (en) 2022-03-22

Family

ID=68969393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910807036.3A Active CN110634131B (en) 2019-08-29 2019-08-29 Crack image identification and modeling method

Country Status (1)

Country Link
CN (1) CN110634131B (en)


Citations (4)

Publication number Priority date Publication date Assignee Title
CN106651893A (en) * 2016-12-23 2017-05-10 贵州电网有限责任公司电力科学研究院 Edge detection-based wall body crack identification method
CN106910186A (en) * 2017-01-13 2017-06-30 陕西师范大学 A kind of Bridge Crack detection localization method based on CNN deep learnings
US20190095730A1 (en) * 2017-09-25 2019-03-28 Beijing University Of Posts And Telecommunications End-To-End Lightweight Method And Apparatus For License Plate Recognition
CN109767423A (en) * 2018-12-11 2019-05-17 西南交通大学 A kind of crack detection method of bituminous pavement image


Cited By (5)

Publication number Priority date Publication date Assignee Title
CN111735434A (en) * 2020-03-25 2020-10-02 南京理工大学 Method for measuring crack development change based on three-dimensional space angle
CN111597377A (en) * 2020-04-08 2020-08-28 广东省国土资源测绘院 Deep learning technology-based field investigation method and system
CN111597377B (en) * 2020-04-08 2021-05-11 广东省国土资源测绘院 Deep learning technology-based field investigation method and system
CN113838005A (en) * 2021-09-01 2021-12-24 山东大学 Intelligent rock fracture identification and three-dimensional reconstruction method and system based on dimension conversion
CN113838005B (en) * 2021-09-01 2023-11-21 山东大学 Intelligent identification and three-dimensional reconstruction method and system for rock mass fracture based on dimension conversion

Also Published As

Publication number Publication date
CN110634131B (en) 2022-03-22

Similar Documents

Publication Publication Date Title
Lin et al. Color-, depth-, and shape-based 3D fruit detection
Chen et al. An end-to-end shape modeling framework for vectorized building outline generation from aerial images
CN107833213B (en) Weak supervision object detection method based on false-true value self-adaptive method
CN110634131B (en) Crack image identification and modeling method
WO2017190656A1 (en) Pedestrian re-recognition method and device
Zhang et al. Semi-automatic road tracking by template matching and distance transformation in urban areas
CN108052942B (en) Visual image recognition method for aircraft flight attitude
CN104850822B (en) Leaf identification method under simple background based on multi-feature fusion
JP6188052B2 (en) Information system and server
CN108022245B (en) Facial line primitive association model-based photovoltaic panel template automatic generation method
Wang Automatic extraction of building outline from high resolution aerial imagery
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
CN107392953B (en) Depth image identification method based on contour line
CN112381830B (en) Method and device for extracting bird key parts based on YCbCr superpixels and graph cut
CN113989604A (en) Tire DOT information identification method based on end-to-end deep learning
CN114140700A (en) Step-by-step heterogeneous image template matching method based on cascade network
CN114332172A (en) Improved laser point cloud registration method based on covariance matrix
Omidalizarandi et al. Segmentation and classification of point clouds from dense aerial image matching
Safdarinezhad et al. An automatic method for precise 3D registration of high resolution satellite images and Airborne LiDAR Data
CN116246161A (en) Method and device for identifying target fine type of remote sensing image under guidance of domain knowledge
Poornima et al. A method to align images using image segmentation
CN115035089A (en) Brain anatomy structure positioning method suitable for two-dimensional brain image data
Ren et al. SAR image matching method based on improved SIFT for navigation system
CN114241150A (en) Water area data preprocessing method in oblique photography modeling
Yu et al. Multimodal urban remote sensing image registration via roadcross triangular feature

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant