CN108363951B - Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification - Google Patents


Info

Publication number
CN108363951B
Authority
CN
China
Prior art keywords
remote sensing
land
sensing image
mask
sample library
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810026909.2A
Other languages
Chinese (zh)
Other versions
CN108363951A (en
Inventor
张小国
贾友斌
陈孝烽
陈刚
韦国钧
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201810026909.2A priority Critical patent/CN108363951B/en
Publication of CN108363951A publication Critical patent/CN108363951A/en
Application granted granted Critical
Publication of CN108363951B publication Critical patent/CN108363951B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of land-use remote sensing monitoring, and in particular relates to a method for automatically acquiring a deep learning sample library for remote sensing image land type identification. The method comprises the steps of: first superposing a current land-use vector map and a remote sensing image under the same coordinate system; marking points with small gradient values in the remote sensing image as marker points by setting a threshold; performing flood filling from the marker points, assigning a mask to each filled area and storing its land type information; and extracting the segmented images according to the masks and storing them in classes according to the land type information recorded by the masks, so as to form the sample library. By superposing and comparing current land-use data and remote sensing data of the same time phase, the method automatically collects remote sensing image feature libraries for different land types. Compared with the traditional manual collection of samples, with its large workload and difficulty in acquiring sample areas, this method of sample acquisition is faster and more accurate, and labor cost is significantly reduced.

Description

Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification
Technical Field
The invention belongs to the technical field of land-use remote sensing monitoring, and in particular relates to a method for automatically acquiring a deep learning sample library for remote sensing image land type identification.
Background
In land-use status surveys, up-to-date land-use information is essential, and fully automatic interpretation of land types from remote sensing images remains a major technical challenge that the land and resources departments in China are working to overcome. In recent years, with the rapid development of machine learning technologies represented by deep learning, applying deep learning to the automatic interpretation of remote sensing images, so as to achieve land type recognition with as little manual intervention as possible, has become an important research target and direction for researchers in China. However, deep learning relies on deep neural networks (DNNs), which work only on the premise that the deep networks are sufficiently trained, and this requires a large number of samples as training data. Traditionally, training images are collected and labeled manually, which is time-consuming and labor-intensive, involves a huge workload, and is easily affected by the mood and negligence of the operators.
Disclosure of Invention
The invention solves the technical problems in the prior art and provides an automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification.
In order to solve the problems, the technical scheme of the invention is as follows:
an automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification comprises the following steps,
The method segments the remote sensing image into patches by superposing the current land-use vector map and the remote sensing image and using the patch boundary information of the vector data; marker points are then extracted inside the patches, flood filling is performed, and the patches are extracted in classes and organized, yielding the large training sample libraries required for training the deep neural networks that recognize different land types in remote sensing images.
Preferably, the method for automatically acquiring the deep learning sample library corresponding to the remote sensing image land type identification comprises the following steps,
step 1: edge mapping, namely superposing a current land-use vector map and a remote sensing image under the same coordinate system, and then mapping the boundary of the current land-use vector map into a closed edge consisting of continuous pixels in the remote sensing image;
step 2: marker point extraction, namely selecting the marker points inside the closed edge;
step 3: flood filling, namely performing flood filling from the marker points, assigning a mask to each filled area, and storing the land type information;
step 4: classified image extraction, namely extracting the segmented images according to the masks, and storing them in classes according to the land type information recorded by the masks, to form the sample library.
Preferably, the land-use status vector map and the remote sensing image in step 1 are acquired at the same time.
Preferably, mapping the boundary of the land-use status vector map to a closed edge consisting of continuous pixels in the remote sensing image is realized by a line rasterization method (the numerical differentiation method).
Preferably, the pixels on the closed edge are marked as edge pixels, and the edge pixels are set to a high pixel value, so as to ensure that they have large gradient values.
Preferably, in step 2, the following formula is used to extract the marker points:
m(h, l) = 1, if g(h, l) < T(h, l)
m(h, l) = 0, if g(h, l) ≥ T(h, l)
in the formula, h and l are the row and column indices of the pixel; g(h, l) is the gradient value of the pixel; T(h, l) is the threshold corresponding to the pixel; m(h, l) = 1 denotes a marker point and m(h, l) = 0 an unmarked point.
Marker points are interior points of the image and have small gradient values; unmarked points lie on or near the edges and have large gradient values; a suitable threshold therefore separates the two according to the above formula.
Preferably, in step 3, a mask is allocated to each closed region, with all its pixel values set to 0, and each mask records, in the form of a file path, the classification hierarchy and name of the land type to which the corresponding closed region belongs in the current land-use status.
Preferably, in step 3, the obtained marker points are used as seed points to flood-fill the area inside each closed edge until all pixels within the edge constraint are marked; because the constraint boundary has a high pixel value and hence a large gradient value, the flood fill yields exactly the whole constrained area.
Preferably, in step 3, the marker point with the smallest gradient value is selected as the seed point.
Preferably, in step 3, during flood filling, for the mask corresponding to each closed region, the mask pixel value corresponding to each marked pixel is set to 1, and the remaining mask pixel values are 0.
Preferably, in step 4, the image of each closed region is extracted according to the obtained mask, and each image is saved according to the file path and name recorded by the mask, finally generating training sample libraries for the different categories.
Compared with the prior art, the invention has the advantages that,
the invention provides an automatic acquisition method for automatically acquiring a land type identification sample library required for remote sensing image interpretation by superposing the current land utilization situation and a remote sensing image, which realizes the automatic collection of remote sensing image feature libraries corresponding to different land types by superposing and comparing the current land utilization situation data and the remote sensing data at the same time phase, and uses the information in the current land utilization situation for extracting the remote sensing image land sample, thereby solving the problem of insufficient machine learning training samples in the task of remote sensing image land classification identification; compared with the defects of large workload and difficult acquisition of sample regions in the traditional method for manually acquiring samples, the method for acquiring the samples is faster and more accurate, and the labor cost is obviously reduced.
Drawings
FIG. 1 is a flow chart of an implementation of the method for automatically acquiring the deep learning sample library for remote sensing image land type identification;
FIG. 2 is a schematic diagram of edge mapping, in which (a) shows the superposition of the current land-use status and the remote sensing image and (b) shows the edge pixels;
FIG. 3 is a schematic diagram of marker point extraction;
FIG. 4 is a schematic of flood fill;
fig. 5 is a mask diagram.
Detailed Description
Example 1:
for a better understanding of the technical content of the present invention, specific embodiments are described below in conjunction with the appended drawings:
as shown in fig. 1, according to a preferred embodiment of the present invention, the method for automatically obtaining the deep learning sample library corresponding to the remote sensing image land type identification includes the following steps:
step 1: edge mapping, namely first superposing the current land-use status and the remote sensing image under the same coordinate system, and then mapping the boundary of the current land-use vector map into a closed edge consisting of continuous pixels in the remote sensing image;
step 2: marker point extraction, namely marking the points with small gradient values in the remote sensing image as marker points by setting a threshold;
step 3: flood filling, namely performing flood filling from the marker points, assigning a mask to each filled area, and storing the land type information;
step 4: classified image extraction, namely extracting the segmented images according to the masks, and storing them in classes according to the land type information recorded by the masks, to form the sample library.
In this embodiment, in step 1, the current land-use status and the remote sensing image must first be converted into the same coordinate system, so that the two are completely matched; otherwise incomplete or even wrong samples would be obtained, which would be of little help in training the learner. It is therefore necessary to unify the two data sets under the same coordinate system.
In this embodiment, in step 1, the current land-use status and the remote sensing image that are superposed and analyzed must be acquired at the same time. When the two come from different phases, the land-use status is typically updated later than the remote sensing image, so land types that have changed in the remote sensing image are not yet reflected in the land-use status, which would lead to wrong land samples. Using a land-use status and a remote sensing image of the same time phase reduces the errors caused by this inconsistency.
Referring to fig. 2, in step 1, edge mapping maps the boundaries of the areas in the current land-use vector map onto the remote sensing image to form closed edge constraints, so that in the subsequent flood filling the flood is confined to the inside of each constrained area; this is the key to image segmentation under the vector map constraint. Specifically, the current land-use vector map and the remote sensing image are superposed so that the vector boundaries are mapped onto the remote sensing image; the boundaries of the vector map are then mapped into closed edges consisting of continuous pixels using the line rasterization algorithm of computer graphics, namely the numerical differentiation (DDA) method. The pixels on the closed edges are marked as edge pixels and set to a high pixel value, for example the maximum RGB value (255, 255, 255), to ensure that they have large gradient values and form the constraint areas for the subsequent flood filling.
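As an illustration of this rasterization step, the following sketch marks the pixels along each boundary segment with the DDA (numerical differentiation) method. It is an assumption of this description, not code from the patent; `rasterize_edge`, the 8x8 canvas, and the square polygon are hypothetical names and data.

```python
import numpy as np

def rasterize_edge(image, p0, p1, edge_value=255):
    """Mark the pixels along the segment p0 -> p1 as edge pixels using
    the DDA (numerical differentiation) line algorithm: step along the
    longer axis one pixel at a time and round the other coordinate."""
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    dx, dy = (x1 - x0) / steps, (y1 - y0) / steps
    x, y = float(x0), float(y0)
    for _ in range(steps + 1):
        image[int(round(y)), int(round(x))] = edge_value
        x += dx
        y += dy
    return image

# A polygon boundary becomes a closed edge by rasterizing each segment in turn.
canvas = np.zeros((8, 8), dtype=np.uint8)
square = [((1, 1), (6, 1)), ((6, 1), (6, 6)), ((6, 6), (1, 6)), ((1, 6), (1, 1))]
for a, b in square:
    rasterize_edge(canvas, a, b)
```

The interior pixels stay at 0, so the closed ring of maximum-valued pixels forms exactly the kind of high-gradient constraint boundary the patent describes.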
Referring to fig. 2, in step 2, after the edge mapping of step 1, the image pixels are divided into two types: edge pixels and non-edge pixels. The markers are sets of spatially adjacent non-edge pixels with small gradient values, corresponding to the interior regions of the image. The key to extracting the constraint areas is the extraction of the marker points.
Marker points are interior points of the image and have small gradient values (as shown in fig. 3); unmarked points lie on or near the edges and have large gradient values; by setting a suitable threshold T, the two can be distinguished according to the following formula.
m(h, l) = 1, if g(h, l) < T(h, l)
m(h, l) = 0, if g(h, l) ≥ T(h, l)
In the formula, h and l are the row and column indices of the pixel; g(h, l) is the gradient value of the pixel; T(h, l) is the threshold corresponding to the pixel, which can be a global threshold independent of position or a local, position-dependent threshold; m(h, l) = 1 denotes a marker point and m(h, l) = 0 an unmarked point.
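The thresholding rule above can be sketched in a few lines of NumPy. The function name `extract_markers` and the sample gradient array are illustrative only, and the strict comparison (`<`) is an assumption, since the patent only states that marker points are the ones with the smaller gradient values.

```python
import numpy as np

def extract_markers(grad, threshold):
    """m(h, l) = 1 where the gradient is below the threshold (interior
    marker points), 0 where it is not (edge and near-edge points).
    `threshold` may be a global scalar or a per-pixel array T(h, l)."""
    return (grad < threshold).astype(np.uint8)

# Hypothetical 3x3 gradient field: small values inside, large near edges.
grad = np.array([[0.1, 0.9, 0.2],
                 [0.8, 0.1, 0.7],
                 [0.2, 0.6, 0.1]])
markers = extract_markers(grad, 0.5)
```

Because NumPy broadcasts, the same call works unchanged whether `threshold` is a global scalar or a per-pixel array of the same shape as `grad`.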
The threshold T is selected according to the actual image, under the premise that every constraint area contains at least one marker point, as follows:
1. First compute the gradient value distribution of the image pixels and the minimum and maximum gradient values. The image gradient is
g(i, j) = dx(i, j) + dy(i, j);
dx(i, j) = l(i+1, j) - l(i, j);
dy(i, j) = l(i, j+1) - l(i, j);
where l is the value of an image pixel (e.g. its RGB value) and i, j are the coordinates of the pixel.
2. Choose the midpoint of the maximum and minimum gradient values as T, and select all points in the image whose gradient value is equal to or close to T (within an error of 1 pixel value).
3. Starting from the points whose gradient value is equal or close to T, search for local minimum points of the gradient by gradient descent.
4. Determine whether several minimum points belong to the same constraint area (no edge point lies between two minima), and select the point with the smallest gradient value as the seed point of that area.
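Steps 1 and 2 of the threshold selection can be sketched as follows. This is a hedged illustration, not the patent's code: the absolute values in the differences are an added assumption so the sum behaves as a gradient magnitude, and the edge-replicating padding is likewise assumed so the gradient array keeps the image shape.

```python
import numpy as np

def forward_difference_gradient(l):
    """g(i, j) = dx(i, j) + dy(i, j), with forward differences
    dx(i, j) = l(i+1, j) - l(i, j) and dy(i, j) = l(i, j+1) - l(i, j).
    Absolute values and edge replication in the last row/column are
    assumptions added here so the result is a magnitude over the image."""
    dx = np.abs(np.diff(l, axis=0, append=l[-1:, :]))
    dy = np.abs(np.diff(l, axis=1, append=l[:, -1:]))
    return dx + dy

def midpoint_threshold(grad):
    """Step 2: choose the midpoint of the minimum and maximum
    gradient values as the threshold T."""
    return (grad.min() + grad.max()) / 2.0

# A single bright pixel produces a gradient peak around it.
l = np.array([[0., 0., 0.],
              [0., 10., 0.],
              [0., 0., 0.]])
g = forward_difference_gradient(l)
T = midpoint_threshold(g)
```

With this toy image the gradient ranges from 0 in the flat corners to a peak at the bright pixel, so T falls halfway between the two, as step 2 prescribes.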
Because the marker points will be used as seed points for the subsequent flood filling, and depending on the size of the threshold T a constraint area may contain more than one marker point, after a suitable threshold T is set, if a constraint area has more than one marker point, the one with the smallest gradient value is selected as the seed point.
In step 3, after the marking is completed, a mask is first allocated to each closed region, with all its pixel values set to 0; each mask records, in the form of a file path, the classification hierarchy and name of the land type to which the corresponding closed region belongs in the current land-use status. For example, for land type information in the current land-use map whose first-level class is "residential land" and whose second-level class is "rural homestead", the mask records the path "G:\residential land\rural homestead" and the name "rural homestead 001.jpg".
Referring to fig. 4, in step 3, the obtained marker points are used as seed points to flood-fill the area inside each closed edge until all pixels within the edge constraint are marked; because the constraint boundary has a high pixel value and hence a large gradient value, the flood fill yields exactly the whole constrained area.
Referring to fig. 5, in step 3, during flood filling, for the mask corresponding to each closed region, the mask pixel value corresponding to each marked pixel is set to 1, while the remaining mask pixel values stay 0. Initially only the seed point's mask pixel value is 1; as the flood filling proceeds, the pixels whose mask value is 1 extend towards the edges until all mask pixel values of the constraint area are 1. The mask can therefore be used in the next step to extract the complete image of the constraint area.
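A minimal sketch of the constrained flood fill and its mask, assuming 4-connectivity and a pure-Python breadth-first search in place of whatever fill routine an implementation would actually use (all names and the 5x5 example image are hypothetical):

```python
from collections import deque
import numpy as np

def flood_fill_mask(image, seed, edge_value=255):
    """BFS flood fill from the seed point, stopped by edge pixels;
    returns a mask that is 1 on every filled pixel and 0 elsewhere,
    assuming 4-connectivity between pixels."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w):
            continue
        if mask[r, c] or image[r, c] == edge_value:
            continue  # already filled, or blocked by the closed edge
        mask[r, c] = 1
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

# A 5x5 image whose border is a closed edge; the seed lies inside.
img = np.zeros((5, 5), dtype=np.uint8)
img[0, :] = img[-1, :] = img[:, 0] = img[:, -1] = 255
mask = flood_fill_mask(img, (2, 2))  # fills the 3x3 interior
```

The fill starts from the seed with mask value 1 and spreads outward until the closed edge stops it, mirroring the behavior the paragraph above describes.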
In step 4, the image of each closed region is extracted according to the mask obtained in the previous step; the region with mask pixel value 1 is the region of the image to be extracted (as shown in fig. 5), so that a complete image of each region is obtained. Each image is then stored according to the file path and name recorded by its mask in step 3, finally generating training sample libraries for the different categories. For example, when the mask records the path "G:\residential land\rural homestead" and the name "rural homestead 001.jpg", and several images of the same type exist, the later ones are numbered in order of storage after the first: "rural homestead 002.jpg", "rural homestead 003.jpg", and so on, finally forming the sample library.
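The classified storage with sequential naming could be sketched as below. The directory layout, the function names, and the use of raw bytes in place of an encoded JPEG are all assumptions for illustration, not the patent's implementation.

```python
import tempfile
from pathlib import Path

def next_sample_name(class_dir: Path, class_name: str, ext: str = ".jpg") -> str:
    """Sequential naming within one class directory: 'rural homestead 001.jpg',
    'rural homestead 002.jpg', ... based on how many samples already exist."""
    existing = list(class_dir.glob(f"{class_name} *{ext}"))
    return f"{class_name} {len(existing) + 1:03d}{ext}"

def save_sample(root: Path, class_path: str, class_name: str, data: bytes) -> Path:
    """Store one extracted patch under the class hierarchy recorded in the
    mask, e.g. root / 'residential land' / 'rural homestead' / '... 001.jpg'."""
    class_dir = root / class_path
    class_dir.mkdir(parents=True, exist_ok=True)
    target = class_dir / next_sample_name(class_dir, class_name)
    target.write_bytes(data)  # a real pipeline would write encoded image data
    return target

# Two patches of the same class get consecutive names in the same directory.
root = Path(tempfile.mkdtemp())
first = save_sample(root, "residential land/rural homestead", "rural homestead", b"img-1")
second = save_sample(root, "residential land/rural homestead", "rural homestead", b"img-2")
```

Because the class hierarchy is encoded directly as a directory path, the resulting tree is already in the folder-per-class layout most deep learning data loaders expect.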
It should be noted that the above-mentioned embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention, and all equivalent substitutions or substitutions made on the above-mentioned embodiments are included in the scope of the present invention.

Claims (9)

1. A method for automatically acquiring a deep learning sample library for remote sensing image land type identification, characterized by comprising the following steps: superposing a current land-use vector map and a remote sensing image, segmenting the remote sensing image into patches using the patch boundary information of the vector data, extracting marker points inside the patches, performing flood filling, and extracting the patches in classes to obtain a sample library;
the method specifically comprises the following steps of,
step 1: edge mapping, namely first superposing the current land-use vector map and the remote sensing image under the same coordinate system, and then mapping the boundary of the current land-use vector map into a closed edge consisting of continuous pixels in the remote sensing image;
step 2: marker point extraction, namely selecting the marker points inside the closed edge;
step 3: flood filling, namely performing flood filling from the marker points, assigning a mask to each filled area, and storing the land type information;
step 4: classified image extraction, namely extracting the segmented images according to the masks, and storing them in classes according to the land type information recorded by the masks, to form the sample library.
2. The method for automatically acquiring a deep learning sample library for remote sensing image land type identification as claimed in claim 1, wherein the land-use status vector map and the remote sensing image in step 1 are acquired at the same time.
3. The method for automatically acquiring a deep learning sample library for remote sensing image land type identification as claimed in claim 1, wherein the mapping in step 1 of the boundary of the land-use status vector map to a closed edge consisting of continuous pixels in the remote sensing image is realized by a line rasterization method.
4. The method for automatically acquiring a deep learning sample library for remote sensing image land type identification as claimed in claim 1, wherein the pixels on the closed edge in step 1 are marked as edge pixels, and the edge pixels are set to a high pixel value.
5. The method for automatically acquiring a deep learning sample library for remote sensing image land type identification as claimed in claim 4, wherein in step 2 the marker points are extracted using the following formula:
m(h, l) = 1, if g(h, l) < T(h, l)
m(h, l) = 0, if g(h, l) ≥ T(h, l)
in the formula: h and l are respectively the row number and the column number of the pixel; g (h, l) is the gradient value of the pixel; t (h, l) is a threshold value corresponding to the pixel; when the value of m (h, l) is 1, a marked point is represented, and when the value of m (h, l) is 0, an unmarked point is represented.
6. The method for automatically acquiring a deep learning sample library for remote sensing image land type identification as claimed in claim 1, wherein in step 3 a mask is allocated to each closed region, all its pixel values are set to 0, and each mask records, in the form of a file path, the classification hierarchy and name of the corresponding closed region in the current land-use status.
7. The method for automatically acquiring a deep learning sample library for remote sensing image land type identification as claimed in claim 6, wherein in step 3 the obtained marker points are used as seed points to flood-fill the area inside each closed edge until all pixels within the edge constraint are marked.
8. The method for automatically acquiring a deep learning sample library for remote sensing image land type identification as claimed in claim 7, wherein in step 3, during flood filling, for the mask corresponding to each closed region, the mask pixel value corresponding to each marked pixel is set to 1, and the remaining mask pixel values are 0.
9. The method for automatically acquiring a deep learning sample library for remote sensing image land type identification as claimed in claim 8, wherein in step 4 the image of each closed region is extracted according to the obtained mask, each image is saved according to the file path and name recorded by the mask, and training sample libraries for the different categories are generated.
CN201810026909.2A 2018-01-11 2018-01-11 Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification Active CN108363951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810026909.2A CN108363951B (en) 2018-01-11 2018-01-11 Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification

Publications (2)

Publication Number Publication Date
CN108363951A CN108363951A (en) 2018-08-03
CN108363951B true CN108363951B (en) 2022-02-22

Family

ID=63010981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810026909.2A Active CN108363951B (en) 2018-01-11 2018-01-11 Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification

Country Status (1)

Country Link
CN (1) CN108363951B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657540B (en) * 2018-11-06 2020-11-27 北京农业信息技术研究中心 Withered tree positioning method and system
CN109657728B (en) * 2018-12-26 2021-03-30 江苏省基础地理信息中心 Sample production method and model training method
CN110363798B (en) * 2019-07-24 2022-02-18 宁波市测绘和遥感技术研究院 Method for generating remote sensing image interpretation sample set
CN111091054B (en) * 2019-11-13 2020-11-10 广东国地规划科技股份有限公司 Method, system and device for monitoring land type change and storage medium
CN111563928B (en) * 2020-03-26 2021-05-25 广东省国土资源测绘院 Exception photo abnormity identification and reminding method and system
CN111597377B (en) * 2020-04-08 2021-05-11 广东省国土资源测绘院 Deep learning technology-based field investigation method and system
CN113223042B (en) * 2021-05-19 2021-11-05 自然资源部国土卫星遥感应用中心 Intelligent acquisition method and equipment for remote sensing image deep learning sample
CN117788982A (en) * 2024-02-26 2024-03-29 中国铁路设计集团有限公司 Large-scale deep learning data set manufacturing method based on railway engineering topography result

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077400B (en) * 2012-12-26 2015-11-25 中国土地勘测规划院 The ground category information remote sensing automatic identifying method that Land Use Database is supported
CN103546726B (en) * 2013-10-28 2017-02-08 东南大学 Method for automatically discovering illegal land use
CN104484682A (en) * 2014-12-31 2015-04-01 中国科学院遥感与数字地球研究所 Remote sensing image classification method based on active deep learning
CN105678818A (en) * 2016-03-08 2016-06-15 浙江工商大学 Method for extracting estuary intertidal zone classification area by using object-oriented classification technology
CN105956058B (en) * 2016-04-27 2019-05-21 东南大学 A kind of variation land used rapid discovery method using unmanned aerial vehicle remote sensing images
CN107133360B (en) * 2017-05-31 2021-02-02 东南大学 Construction method of large-scale remote sensing image feature point library

Also Published As

Publication number Publication date
CN108363951A (en) 2018-08-03

Similar Documents

Publication Publication Date Title
CN108363951B (en) Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification
Xiao et al. Change detection of built-up land: A framework of combining pixel-based detection and object-based recognition
CN110598784B (en) Machine learning-based construction waste classification method and device
CN110263717B (en) Method for determining land utilization category of street view image
CN111626947B (en) Map vectorization sample enhancement method and system based on generation of countermeasure network
CN110334578B (en) Weak supervision method for automatically extracting high-resolution remote sensing image buildings through image level annotation
CN107067405B (en) Remote sensing image segmentation method based on scale optimization
CN108629777A (en) A kind of number pathology full slice image lesion region automatic division method
CN112836614B (en) High-resolution remote sensing image classification method based on residual error network and transfer learning
CN110348415B (en) High-efficiency labeling method and system for high-resolution remote sensing target big data set
CN111598101A (en) Urban area intelligent extraction method, system and equipment based on remote sensing image scene segmentation
CN113674216A (en) Subway tunnel disease detection method based on deep learning
CN116433634A (en) Industrial image anomaly detection method based on domain self-adaption
CN117636160A (en) Automatic high-resolution remote sensing cultivated land block updating method based on semi-supervised learning
CN112418033A (en) Landslide slope surface segmentation and identification method based on mask rcnn neural network
CN113378642B (en) Method for detecting illegal occupation buildings in rural areas
CN112381730B (en) Remote sensing image data amplification method
CN114092826A (en) Method and device for refining earth surface coverage classification products based on image time sequence
CN112784806A (en) Lithium-containing pegmatite vein extraction method based on full convolution neural network
He et al. Building extraction based on U-net and conditional random fields
CN116434054A (en) Intensive remote sensing ground object extraction method based on line-plane combination
CN110378307A (en) Texture image orientation estimate method based on deep learning
CN115457384A (en) Method and device for identifying buildings in hollow villages, electronic equipment and storage medium
CN116958801A (en) Karst cave identification method for open-air outcrop data
CN115063684A (en) Agricultural machinery track identification method based on remote sensing image scene division and application method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant