CN113723464A - Remote sensing image classification method and device - Google Patents

Remote sensing image classification method and device

Info

Publication number
CN113723464A
CN113723464A (application CN202110882078.0A)
Authority
CN
China
Prior art keywords
image
grid
classification
sample image
sub
Prior art date
Legal status: Granted
Application number
CN202110882078.0A
Other languages
Chinese (zh)
Other versions
CN113723464B (en)
Inventor
杜世宏
刘波
杜守航
张修远
Current Assignee
Peking University
Original Assignee
Peking University
Priority date
Filing date
Publication date
Application filed by Peking University
Priority to CN202110882078.0A
Publication of CN113723464A
Application granted
Publication of CN113723464B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing


Abstract


The present invention provides a remote sensing image classification method and device, comprising: acquiring an image object to be classified; and inputting the image object to be classified into a classification model to obtain a classification result output by the classification model; wherein the classification model is trained on the object features and depth features of sample image objects together with the class labels of those sample image objects. By training the classification model on object features that characterize the shape, spatial relationships, and other properties of the sample image objects, together with depth features learned by deep learning, a better-trained model is obtained, and therefore better classification results for the image objects to be classified. This realizes rapid and accurate classification of remote sensing images and can be applied to land cover/land use mapping based on remote sensing images.


Description

Remote sensing image classification method and device
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to a remote sensing image classification method and device.
Background
Remote sensing image classification methods have evolved from pixel-based methods, to object-oriented methods, to deep learning-based methods. Pixel-based image classification was mainly used for the low-spatial-resolution remote sensing images of the early remote sensing era. As spatial resolution improved, heterogeneity within ground objects and similarity between ground objects grew larger, and object-oriented image classification became more suitable. Later, with the rise of deep learning, many deep learning-based remote sensing classification methods were also developed.
The object-oriented image classification method and the deep learning-based image classification method are both well suited to high-resolution remote sensing image classification. At present, the main way of combining the two is to select some pixels from an image object, cut out image blocks centered on those pixels, classify each image block by deep learning, and then vote on the block categories to obtain the category of the image object. This does not effectively combine the two methods: the object-oriented method only plays the role of a classification boundary constraint, classification is still carried out pixel by pixel in the actual classification process, and the more effective manually defined features are not used.
Disclosure of Invention
The invention provides a remote sensing image classification method and device to overcome the defect in the prior art that the object-oriented image classification method and the deep learning-based image classification method cannot be fully combined, and to realize an effective combination of the two.
The invention provides a remote sensing image classification method, which comprises the following steps:
acquiring an image object to be classified;
inputting the image object to be classified into a classification model to obtain a classification result output by the classification model;
the classification model is obtained by training based on the object features and the depth features of the sample image objects and the class labels of the sample image objects.
According to the remote sensing image classification method provided by the invention, the classification model is obtained based on object features and depth features of sample image objects and class label training of the sample image objects, and specifically comprises the following steps:
acquiring the sample image object;
extracting object features of the sample image object;
extracting depth features of the sample image object;
superposing the object features and the depth features to obtain comprehensive features of the sample image object;
and training to obtain the classification model through the comprehensive characteristics of the sample image object and the class label of the sample image object.
According to the remote sensing image classification method provided by the invention, the extracting of the depth feature of the sample image object specifically comprises the following steps:
dividing grids on the sample image object by an image grid expression method to obtain a grid image set, wherein the grid image set comprises a plurality of sub-grid images with consistent grid sizes;
respectively carrying out depth feature extraction on each sub-grid image to obtain corresponding sub-depth features, wherein the sub-depth features are obtained by inputting the sub-grid images into a deep convolutional neural network model;
and aggregating the sub-depth features by a depth feature aggregation method to obtain the depth features of the sample image object.
According to the remote sensing image classification method provided by the invention, the image grid expression method is used for dividing grids on the sample image object to obtain a grid image set, wherein the grid image set comprises a plurality of sub-grid images with consistent grid sizes, and the method specifically comprises the following steps:
determining a unit moving grid size, wherein the unit moving grid size is consistent with a sub-grid image size;
determining an initial grid of sample image objects;
taking the initial grid as a center, taking the size of a unit moving grid as a moving unit to move, and when the unit moving grid contains pixels of a sample image object, determining an image of the position of the unit moving grid as the sub-grid image;
determining the weight of each sub-grid image.
According to the remote sensing image classification method provided by the invention, the determining of the initial grid of the sample image object specifically comprises the following steps:
determining the center of gravity of a sample image object;
and taking the unit moving grid with the gravity center as the initial grid.
According to the remote sensing image classification method provided by the invention, the depth features of the sample image object are obtained by aggregating the sub-depth features through a depth feature aggregation method, and the method specifically comprises the following steps: and aggregating to form the depth features of the sample image object according to the weight of each sub-grid image and the corresponding sub-depth features.
According to the remote sensing image classification method provided by the invention, obtaining the sub-depth features by inputting the sub-grid images into a deep convolutional neural network model specifically comprises: inputting the sub-grid image into the deep convolutional neural network model, and taking the output of the last convolutional layer of the deep convolutional neural network model as the sub-depth feature.
The invention also provides a remote sensing image classification device, comprising:
the image object acquisition unit is used for acquiring an image object to be classified;
the image object classification unit is used for inputting the image object to be classified into a classification model to obtain a classification result output by the classification model;
the classification model is obtained by training based on the object features and the depth features of the sample image objects and the class labels of the sample image objects.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the program to realize the steps of any one of the remote sensing image classification methods.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the remote sensing image classification methods described above.
According to the remote sensing image classification method and device provided by the invention, the classification model is trained on object features that characterize the shape, spatial relationships, and other properties of sample image objects, together with depth features obtained by deep learning. Model training thus combines an object-oriented image classification method, which is better at extracting object features, with a deep learning image classification method, which is better at extracting depth features, so that classification results for the image objects to be classified are obtained more reliably. This realizes rapid and accurate classification of remote sensing images and can be applied to land cover/land use mapping based on remote sensing images.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of a remote sensing image classification method provided by the present invention;
FIG. 2 is a flow chart of a classification model training method provided by the present invention;
FIG. 3 is a detailed flowchart of step 230 in FIG. 2;
FIG. 4 is a schematic representation of an image mesh provided by the present invention;
FIG. 5 is a schematic diagram of a deep convolutional neural network model structure provided by the present invention;
FIG. 6 is a schematic diagram of a sub-depth feature aggregation process provided by the present invention;
FIG. 7 is a detailed flowchart of step 310 in FIG. 3;
FIG. 8 is a detailed flowchart of step 720 in FIG. 7;
FIG. 9 is a schematic structural diagram of a remote sensing image classification device according to the present invention;
fig. 10 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Remote sensing image classification essentially needs to answer four questions: what are the classification units? What are the classification samples? How are classification features extracted? How is the classification model selected? In the actual classification process, the samples are generally fixed; therefore, different remote sensing image classification methods differ in their classification units, feature extraction, and classification models.
Compared with pixel-based image analysis, the main differences of object-oriented image classification lie in the classification units and the feature extraction. When the image spatial resolution is high, a ground object is usually composed of many pixels and is often internally heterogeneous, so pixel-based classification cannot label the object as a whole. Therefore, before classification, adjacent and similar pixels in the image must be grouped into image objects, a process called image segmentation. In addition, the object-oriented method can extract features such as the shape and spatial relationships of image objects for classification, which cannot be extracted on a per-pixel basis; this makes it more advantageous than pixel-based classification for high-resolution remote sensing images.
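As a concrete illustration of grouping adjacent, similar pixels into image objects, the sketch below implements a minimal union-find region merge. This is a toy stand-in: the patent does not specify a segmentation algorithm (it later mentions multi-resolution segmentation), and `segment_image` and its threshold are assumptions for illustration only.

```python
import numpy as np

def segment_image(img, threshold):
    """Group adjacent, similar pixels into image objects: merge 4-neighbors
    whose intensity difference does not exceed the threshold (union-find)."""
    h, w = img.shape
    parent = list(range(h * w))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w and abs(int(img[y, x]) - int(img[y, x + 1])) <= threshold:
                union(i, i + 1)
            if y + 1 < h and abs(int(img[y, x]) - int(img[y + 1, x])) <= threshold:
                union(i, i + w)

    roots = np.array([find(i) for i in range(h * w)])
    _, labels = np.unique(roots, return_inverse=True)  # relabel to 0..k-1
    return labels.reshape(h, w)

img = np.array([[10, 10, 200],
                [10, 10, 200],
                [10, 10, 200]], dtype=np.uint8)
labels = segment_image(img, threshold=5)
```

Each connected group of similar pixels becomes one image object; here the dark block and the bright column form two objects.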
The deep learning-based image classification method differs from the above two mainly in feature extraction. Both the pixel-based and the object-oriented methods require manually defined features, which are then used for classification, whereas a deep learning method automatically learns effective classification features under the guidance of samples. Compared with manually defined features, classification features learned automatically from samples can represent the implicit information of the original data, which benefits remote sensing image classification.
In conclusion, the object-oriented and deep learning-based image classification methods are both well suited to high-resolution remote sensing image classification. However, the object-oriented method suffers from the limited expressive power of manually defined features, while the deep learning method, when classifying pixel by pixel, cannot effectively model geographic objects. Combining the two image classification methods therefore enables rapid and accurate classification of remote sensing images, which can be applied to land cover/land use mapping based on remote sensing images.
As shown in fig. 1, the present invention provides a remote sensing image classification method combining an object-oriented image classification method and a deep learning-based remote sensing image classification method, including:
step 110: acquiring an image object to be classified;
in the embodiment of the invention, the remote sensing image which is subjected to land covering/utilization classification in advance is subjected to image segmentation to obtain the image object to be classified. Preferably, multi-resolution segmentation is adopted when the remote sensing image is segmented, namely, different remote sensing image areas are segmented by adopting different resolutions according to actual needs.
Step 120: inputting the image object to be classified into a classification model to obtain a classification result output by the classification model;
the classification model is obtained by training based on the object features and the depth features of the sample image objects and the class labels of the sample image objects.
In this step, each image object to be classified obtained by segmenting the remote sensing image is respectively input into the trained classification model, and corresponding classification results are respectively output.
In the embodiment of the present invention, as shown in fig. 2, the training process of the classification model specifically includes:
step 210: acquiring the sample image object;
step 220: extracting object features of the sample image object;
the step is to extract object characteristics such as spectrum, geometry, texture and the like of the obtained image by an image analysis oriented method.
Step 230: extracting depth features of the sample image object;
step 240: superposing the object features and the depth features to obtain comprehensive features of the sample image object;
step 250: and training to obtain the classification model through the comprehensive characteristics of the sample image object and the class label of the sample image object.
In the embodiment of the invention, the object features of the sample image object are extracted by an object-oriented image classification method, and the depth features are extracted by a deep learning method. The classification model is trained on the fused depth features and object features; for example, a random forest model may be used. The trained classification model then identifies the class of each image object, yielding a land cover/land use thematic map.
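Steps 240–250 can be sketched as follows: stack object and depth feature vectors into a composite feature and train the random-forest classifier the embodiment names as an example. The synthetic feature dimensions, class separation, and scikit-learn usage below are illustrative assumptions, not the patent's actual data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 40
# hypothetical per-object features: 4 object features, 8 depth features,
# two well-separated classes of n sample image objects each
object_feats = np.vstack([rng.normal(0, 1, (n, 4)), rng.normal(5, 1, (n, 4))])
depth_feats  = np.vstack([rng.normal(0, 1, (n, 8)), rng.normal(5, 1, (n, 8))])
labels = np.array([0] * n + [1] * n)

# step 240: superpose object and depth features into a composite feature
composite = np.hstack([object_feats, depth_feats])

# step 250: train the classification model on composite features + class labels
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(composite, labels)
pred = model.predict(composite)
```

At inference time, the same composite feature is built for each image object to be classified and passed to `model.predict`.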
In the embodiment of the present invention, as shown in fig. 3, step 230 specifically includes:
step 310: dividing grids on the sample image object by an image grid expression method to obtain a grid image set, wherein the grid image set comprises a plurality of sub-grid images with consistent grid sizes; fig. 4 is a schematic diagram illustrating a sample image object as a grid image set. This step is the mesh representation of the sample image object.
Step 320: respectively carrying out depth feature extraction on each sub-grid image to obtain corresponding sub-depth features, wherein the sub-depth features are obtained by inputting the sub-grid images into a deep convolutional neural network model;
as shown in fig. 5, the deep convolutional neural network model is generally composed of convolutional layers, pooling layers, and fully-connected layers. The sample image objects with the class labels can be used for training a deep convolutional neural network model after grid expression.
The sub-grid images are input into a trained deep convolutional neural network model, and the sub-depth features of the sub-grid images can be extracted by selecting the output of the last convolutional layer as the depth feature.
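A minimal sketch of taking the last convolutional layer's output as the sub-depth feature: a toy two-layer single-channel convolutional network in NumPy. The real model would be a trained deep CNN with pooling and fully connected layers (FIG. 5); the kernel shapes and the channel-averaging between layers here are simplifying assumptions.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2D convolution of one single-channel image with a bank of
    kernels, followed by ReLU; output shape (H-kh+1, W-kw+1, n_kernels)."""
    kh, kw, m = kernels.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, m))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            patch = x[i:i + kh, j:j + kw]
            out[i, j] = np.tensordot(patch, kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
sub_grid = rng.random((8, 8))        # one s x s sub-grid image (s = 8 assumed)
k1 = rng.standard_normal((3, 3, 4))  # first conv layer: 4 kernels
k2 = rng.standard_normal((3, 3, 8))  # last conv layer: 8 kernels

feat1 = conv2d(sub_grid, k1)                 # shape (6, 6, 4)
# The classification head is discarded: the LAST conv layer's output is
# kept as the sub-depth feature (channels averaged only to keep this toy
# single-channel conv simple).
sub_depth = conv2d(feat1.mean(axis=2), k2)   # shape (4, 4, 8)
```

In practice one would register a hook on (or truncate) a trained network rather than re-implement convolution.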
Step 330: and aggregating the sub-depth features by a depth feature aggregation method to obtain the depth features of the sample image object.
Fig. 6 is a schematic diagram of a depth feature aggregation process.
In the embodiment of the present invention, the sample image object is irregular and varied in shape, whereas the input of the deep convolutional neural network model used in step 230 is generally a regular image block. The main purpose of the grid expression of the sample image object is therefore to bridge the gap between the irregular shape of the sample image object and the regular image block required as input by the deep convolutional neural network model. Accordingly, the grid size of each sub-grid image in the grid image set obtained in step 310 matches the input image size required by the deep convolutional neural network model.
In the embodiment of the present invention, as shown in fig. 7, step 310 specifically includes:
step 710: determining a unit moving grid size, wherein the unit moving grid size is consistent with a sub-grid image size;
step 720: determining an initial grid of sample image objects;
specifically, as shown in the figure, step 720 specifically includes:
step 810: determining the center of gravity of a sample image object;
step 820: and taking the unit moving grid with the gravity center as the initial grid.
Specifically, the initial grid is determined as follows. Let the boundary of the sample image object consist of n points with the closed coordinate sequence {(x_1, y_1), (x_2, y_2), …, (x_n, y_n), (x_{n+1}, y_{n+1})}, where x_1 = x_{n+1} and y_1 = y_{n+1}. Writing C_i = x_i y_{i+1} − x_{i+1} y_i, the standard polygon moment formulas consistent with the quantities defined below give

A = (1/2) Σ_{i=1}^{n} C_i
Q_x = (1/6) Σ_{i=1}^{n} (y_i + y_{i+1}) C_i,  Q_y = (1/6) Σ_{i=1}^{n} (x_i + x_{i+1}) C_i

so that the center of gravity of the sample image object is (x̄, ȳ) = (Q_y / A, Q_x / A). The angle θ between the principal direction of the sample image object and the x-axis can then be calculated from

I_xx = (1/12) Σ_{i=1}^{n} (y_i² + y_i y_{i+1} + y_{i+1}²) C_i
I_yy = (1/12) Σ_{i=1}^{n} (x_i² + x_i x_{i+1} + x_{i+1}²) C_i
I_xy = (1/24) Σ_{i=1}^{n} (x_i y_{i+1} + 2 x_i y_i + 2 x_{i+1} y_{i+1} + x_{i+1} y_i) C_i
tan 2θ = 2 I_xy / (I_yy − I_xx)   (moments taken about the center of gravity)

wherein Q_x represents the moment of the image object about the x-axis, Q_y the moment about the y-axis, I_xx the moment of inertia about the x-axis, I_yy the moment of inertia about the y-axis, I_xy the product of inertia of the image object, and A the area of the image object.
A local coordinate system can be established by taking the center of gravity of the sample image object as the origin and its principal direction as the x-axis. The line x = 0 intersects the boundary of the sample image object at several points, with ordinates {y'_1, y'_2, …, y'_n}. Taking the mean of two adjacent ordinate values, ȳ' = (y'_i + y'_{i+1}) / 2, if the point (0, ȳ') lies within the sample image object, a grid of size s × s (i.e., the unit moving grid has size s × s) is created centered at this point and serves as the initial grid of the sample image object's grid expression.
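The center-of-gravity and principal-direction computation can be sketched with the standard polygon moment (shoelace) formulas; this is a reconstruction under the assumption that the patent uses the usual definitions of Q_x, Q_y, I_xx, I_yy, I_xy, and A.

```python
import math

def centroid_and_theta(pts):
    """Centroid and principal-direction angle of a closed polygon
    (pts[0] == pts[-1]), via shoelace-style moment sums."""
    A = Qx = Qy = Ixx = Iyy = Ixy = 0.0
    for (x0, y0), (x1, y1) in zip(pts[:-1], pts[1:]):
        c = x0 * y1 - x1 * y0                     # signed cross term C_i
        A   += c / 2.0
        Qy  += (x0 + x1) * c / 6.0                # moment about the y-axis
        Qx  += (y0 + y1) * c / 6.0                # moment about the x-axis
        Iyy += (x0 * x0 + x0 * x1 + x1 * x1) * c / 12.0
        Ixx += (y0 * y0 + y0 * y1 + y1 * y1) * c / 12.0
        Ixy += (x0 * y1 + 2 * x0 * y0 + 2 * x1 * y1 + x1 * y0) * c / 24.0
    cx, cy = Qy / A, Qx / A
    # shift second moments to the center of gravity before taking the angle
    Ixx_c = Ixx - A * cy * cy
    Iyy_c = Iyy - A * cx * cx
    Ixy_c = Ixy - A * cx * cy
    theta = 0.5 * math.atan2(2.0 * Ixy_c, Iyy_c - Ixx_c)
    return (cx, cy), theta

# a 4 x 1 rectangle elongated along x: centroid (2, 0.5), theta ~ 0
rect = [(0, 0), (4, 0), (4, 1), (0, 1), (0, 0)]
(cx, cy), theta = centroid_and_theta(rect)
```

The returned θ gives the x-axis of the local coordinate system used to pick the initial grid.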
Step 730: taking the initial grid as a center, taking the size of a unit moving grid as a moving unit to move, and when the unit moving grid contains pixels of a sample image object, determining an image of the position of the unit moving grid as the sub-grid image;
and searching other grids around the initial grid as the center, and adding the grid into a grid image set of the sample image object if the grid contains the pixels of the sample image object.
Step 740: determining the weight of each sub-grid image.
In this step, the weight of the sub-grid image is the number of pixels belonging to the sample image object in the grid divided by the total number of pixels of the sample image object.
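Steps 710–740 can be sketched as follows on a binary object mask: tile s × s grids outward from the initial grid, keep every grid that contains object pixels, and weight each by its share of the object's pixels. For simplicity the initial grid here is anchored at the pixel center of gravity rather than the principal-direction-derived point, so that anchoring detail is an assumption.

```python
import numpy as np

def grid_expression(mask, s):
    """Return [((top, left), weight), ...] for s x s grids tiled from the
    object's center of gravity; weight = object pixels in grid / total."""
    ys, xs = np.nonzero(mask)
    total = len(ys)
    cy, cx = int(round(ys.mean())), int(round(xs.mean()))
    h, w = mask.shape
    # start offsets so one grid is centered on the center of gravity
    y0 = (cy - s // 2) % s - s
    x0 = (cx - s // 2) % s - s
    grids = []
    for top in range(y0, h, s):
        for left in range(x0, w, s):
            # grids may extend past the image; count only in-image pixels
            window = mask[max(top, 0):top + s, max(left, 0):left + s]
            count = int(window.sum())
            if count > 0:  # keep grids that contain object pixels
                grids.append(((top, left), count / total))
    return grids

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 1:7] = 1                       # a 4 x 6 rectangular object
grids = grid_expression(mask, s=4)
weights = [w for _, w in grids]
```

Because the grids tile the object without overlap, the weights sum to 1.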
In the embodiment of the present invention, combining step 330 and step 740, the aggregation of the sub-depth features proceeds as follows. The sub-depth feature extracted from each sub-grid image has shape s × s × m, where s is the spatial length and width of the sub-depth feature and m is its dimension. Applying an aggregation operator (e.g., mean or maximum) over the spatial length and width collapses each sub-grid image's feature into an m-dimensional feature vector. Finally, the feature vectors of all sub-grid images of the sample image object are weighted and summed to obtain the depth feature of the sample image object.
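The aggregation just described reduces to spatial pooling plus a weighted sum; the shapes, the mean operator, and the weights below are illustrative stand-ins for the values produced by the earlier steps.

```python
import numpy as np

rng = np.random.default_rng(0)
s, m = 4, 16
# three sub-grid images -> three sub-depth features of shape (s, s, m)
sub_depth = [rng.random((s, s, m)) for _ in range(3)]
weights = [0.5, 0.3, 0.2]    # sub-grid weights from the grid expression step

# pool each s x s x m feature over its spatial axes (mean operator here;
# the patent also allows e.g. the maximum)
vectors = [f.mean(axis=(0, 1)) for f in sub_depth]   # each is m-dimensional

# weighted sum over sub-grid images gives the object's depth feature
object_depth = sum(w * v for w, v in zip(weights, vectors))
```

The resulting m-dimensional `object_depth` is what gets concatenated with the object features in step 240.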
The remote sensing image classification device provided in the embodiment of the present invention is described below, and the remote sensing image classification device described below and the remote sensing image classification method described above may be referred to in correspondence with each other, as shown in fig. 9, and an embodiment of the present invention provides a remote sensing image classification device including:
an image object obtaining unit 910, configured to obtain an image object to be classified;
in the embodiment of the invention, the remote sensing image which is subjected to land covering/utilization classification in advance is subjected to image segmentation to obtain the image object to be classified. Preferably, multi-resolution segmentation is adopted when the remote sensing image is segmented, namely, different remote sensing image areas are segmented by adopting different resolutions according to actual needs.
The image object classification unit 920 is configured to input the image object to be classified into a classification model, so as to obtain a classification result output by the classification model;
the classification model is obtained by training based on the object features and the depth features of the sample image objects and the class labels of the sample image objects.
In the embodiment of the invention, each image object to be classified obtained by segmenting the remote sensing image is respectively input into the trained classification model, and the corresponding classification results are respectively output.
In the embodiment of the present invention, the image object classifying unit 920 includes a classification model training subunit, and the classification model training subunit specifically includes:
the sample acquisition subunit is used for acquiring a sample image object;
an object feature extraction subunit, configured to extract an object feature of the sample image object;
the object feature extraction subunit extracts object features such as spectrum, geometry, texture and the like of the obtained image by a face-to-face image analysis method.
A depth feature extraction subunit, configured to extract a depth feature of the sample image object;
the characteristic synthesis subunit is used for superposing the object characteristics and the depth characteristics to obtain the synthesis characteristics of the sample image object;
and the training subunit is used for training to obtain the classification model through the comprehensive characteristics of the sample image object and the class label of the sample image object.
In the embodiment of the invention, the object features of the sample image object are extracted by an object-oriented image analysis method, and the depth features are extracted by a deep learning method. The classification model is trained on the fused depth features and object features; for example, a random forest model may be used. The trained classification model then identifies the class of each image object, yielding a land cover/land use thematic map.
Wherein, the depth feature extraction subunit specifically includes:
and a mesh dividing subunit, configured to divide a mesh on the sample image object by using an image mesh expression method to obtain a mesh image set, where the mesh image set includes a plurality of sub-mesh images with the same mesh size.
The sub-depth feature extraction subunit is used for respectively carrying out depth feature extraction on each sub-grid image to obtain corresponding sub-depth features, wherein the sub-depth features are obtained by inputting the sub-grid images into a deep convolutional neural network model. The sub-grid images are input into a trained deep convolutional neural network model, and the sub-depth features of the sub-grid images can be extracted by selecting the output of the last convolutional layer as the depth feature.
And the sub-depth feature aggregation subunit is used for aggregating the sub-depth features by a depth feature aggregation method to obtain the depth features of the sample image object.
In the embodiment of the present invention, the sample image object is irregular and varied in shape, whereas the input of the deep convolutional neural network model used to obtain the depth features is generally a regular image block. The main purpose of the grid expression of the sample image object is therefore to bridge the gap between the irregular shape of the sample image object and the regular image block required as input by the deep convolutional neural network model. Accordingly, the grid size of each sub-grid image in the grid image set obtained by the grid dividing subunit matches the input image size required by the deep convolutional neural network model.
Specifically, the grid dividing subunit includes:
a unit moving grid size determining subunit, configured to determine a unit moving grid size, where the unit moving grid size is consistent with a sub-grid image size;
an initial grid determining subunit for the sample image object, configured to determine the initial grid of the sample image object;
specifically, the initial grid determining subunit for the sample image object includes:
a sample image object center-of-gravity determining subunit, configured to determine the center of gravity of the sample image object;
an initial grid determining subunit, configured to take the unit moving grid centered on the center of gravity as the initial grid.
A grid searching subunit, configured to move with the initial grid as the center and the unit moving grid size as the moving unit; when a unit moving grid contains pixels of the sample image object, the image at that unit moving grid position is determined to be a sub-grid image. In other words, other grids around the initial grid are searched outward from the center, and a grid is added to the grid image set of the sample image object if it contains pixels of the sample image object.
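The grid search above can be sketched as follows. This is one possible reading of the described procedure, not the patented implementation: the binary `mask` representation of the image object, the concentric-ring search order, and the boundary clipping are all assumptions of this sketch.

```python
import numpy as np

def grid_image_set(mask: np.ndarray, g: int):
    """Collect g-by-g sub-grid windows covering an irregular object mask.

    mask: boolean (H, W) array, True where a pixel belongs to the image object.
    g:    unit moving grid size (assumed equal to the CNN input size).
    Returns a list of (row, col) top-left corners of grids containing object pixels.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())  # center of gravity of the object
    r0, c0 = cy - g // 2, cx - g // 2        # initial grid centered on it
    H, W = mask.shape
    grids = []
    max_ring = max(H, W) // g + 1
    for ring in range(max_ring + 1):         # search outward, ring by ring
        for dr in range(-ring, ring + 1):
            for dc in range(-ring, ring + 1):
                if max(abs(dr), abs(dc)) != ring:
                    continue                 # visit only the current ring
                r, c = r0 + dr * g, c0 + dc * g
                rs, re = max(r, 0), min(r + g, H)
                cs, ce = max(c, 0), min(c + g, W)
                if rs >= re or cs >= ce:
                    continue                 # grid lies entirely outside the image
                if mask[rs:re, cs:ce].any():
                    grids.append((r, c))     # grid contains object pixels: keep it
    return grids

mask = np.zeros((64, 64), dtype=bool)
mask[10:50, 20:40] = True                    # a simple rectangular "object"
corners = grid_image_set(mask, 16)
print(corners[0], len(corners))  # (21, 21) 9 — initial grid first, then its neighbors
```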
A sub-grid image weight determining subunit, configured to determine the weight of each sub-grid image.
In an embodiment of the present invention, the weight of a sub-grid image is the number of pixels in that grid belonging to the sample image object divided by the total number of pixels of the sample image object.
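The weight computation and the subsequent aggregation can be sketched together. The patent leaves the aggregation method open; the weighted sum below is one natural reading of "aggregate according to the weights and the corresponding sub-depth features" and is an assumption of this sketch.

```python
import numpy as np

def grid_weight(window_mask: np.ndarray, total_object_pixels: int) -> float:
    """Weight = object pixels inside this grid / total object pixels."""
    return float(window_mask.sum()) / total_object_pixels

def aggregate_depth_features(sub_features, weights) -> np.ndarray:
    """Weighted aggregation of sub-depth features into one object-level depth feature."""
    sub_features = np.asarray(sub_features, dtype=float)  # (n_grids, dim)
    weights = np.asarray(weights, dtype=float)            # (n_grids,)
    return weights @ sub_features                         # weighted sum -> (dim,)

# Hypothetical example: 3 sub-grids whose weights sum to 1.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
w = [0.5, 0.25, 0.25]
obj_feat = aggregate_depth_features(feats, w)
print(obj_feat)  # close to [0.75, 0.5]
```

Because the weights sum to 1 (each object pixel lies in some grid), the aggregated vector is a convex combination of the sub-depth features, so grids covering more of the object contribute proportionally more.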
A schematic diagram of the physical structure of an electronic device provided in an embodiment of the present invention is described below with reference to fig. 10. As shown in fig. 10, the electronic device may include: a processor (processor) 1010, a communication interface (Communications Interface) 1020, a memory (memory) 1030, and a communication bus 1040, where the processor 1010, the communication interface 1020, and the memory 1030 communicate with one another via the communication bus 1040. The processor 1010 may invoke logic instructions in the memory 1030 to perform a remote sensing image classification method, the method comprising: acquiring an image object to be classified; inputting the image object to be classified into a classification model to obtain a classification result output by the classification model; where the classification model is trained based on the object features and depth features of sample image objects and the class labels of the sample image objects.
Furthermore, the logic instructions in the memory 1030 may be implemented as software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, an embodiment of the present invention further provides a computer program product, the computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to execute the remote sensing image classification method provided by the above embodiments, the method comprising: acquiring an image object to be classified; inputting the image object to be classified into a classification model to obtain a classification result output by the classification model; where the classification model is trained based on the object features and depth features of sample image objects and the class labels of the sample image objects.

In another aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the foregoing remote sensing image classification method: acquiring an image object to be classified; inputting the image object to be classified into a classification model to obtain a classification result output by the classification model; where the classification model is trained based on the object features and depth features of sample image objects and the class labels of the sample image objects.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A remote sensing image classification method, comprising: acquiring an image object to be classified; and inputting the image object to be classified into a classification model to obtain a classification result output by the classification model; wherein the classification model is trained based on object features and depth features of sample image objects and class labels of the sample image objects.

2. The remote sensing image classification method according to claim 1, wherein training the classification model based on the object features, the depth features and the class labels of the sample image objects specifically comprises: acquiring the sample image object; extracting object features of the sample image object; extracting depth features of the sample image object; superimposing the object features and the depth features to obtain comprehensive features of the sample image object; and training the classification model with the comprehensive features of the sample image objects and the class labels of the sample image objects.

3. The remote sensing image classification method according to claim 2, wherein extracting the depth features of the sample image object specifically comprises: dividing a grid on the sample image object by an image grid expression method to obtain a grid image set, wherein the grid image set comprises a plurality of sub-grid images of the same grid size; performing depth feature extraction on each sub-grid image to obtain corresponding sub-depth features, wherein the sub-depth features are obtained by inputting the sub-grid images into a deep convolutional neural network model; and aggregating the sub-depth features by a depth feature aggregation method to obtain the depth features of the sample image object.

4. The remote sensing image classification method according to claim 3, wherein dividing the grid on the sample image object by the image grid expression method to obtain the grid image set specifically comprises: determining a unit moving grid size, wherein the unit moving grid size is consistent with the sub-grid image size; determining an initial grid of the sample image object; moving with the initial grid as the center and the unit moving grid size as the moving unit, and when a unit moving grid contains pixels of the sample image object, determining the image at the position of that unit moving grid to be a sub-grid image; and determining the weight of each sub-grid image.

5. The remote sensing image classification method according to claim 4, wherein determining the initial grid of the sample image object specifically comprises: determining the center of gravity of the sample image object; and taking the unit moving grid centered on the center of gravity as the initial grid.

6. The remote sensing image classification method according to claim 4, wherein aggregating the sub-depth features by the depth feature aggregation method to obtain the depth features of the sample image object specifically comprises: aggregating the sub-depth features into the depth features of the sample image object according to the weight of each sub-grid image and the corresponding sub-depth features.

7. The remote sensing image classification method according to claim 3, wherein obtaining the sub-depth features by inputting the sub-grid images into the deep convolutional neural network model specifically comprises: inputting the sub-grid image into the deep convolutional neural network model, and taking the output of the last convolutional layer of the deep convolutional neural network model as the sub-depth feature.

8. A remote sensing image classification apparatus, comprising: an image object acquisition unit, configured to acquire an image object to be classified; and an image object classification unit, configured to input the image object to be classified into a classification model to obtain a classification result output by the classification model; wherein the classification model is trained based on object features and depth features of sample image objects and class labels of the sample image objects.

9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the remote sensing image classification method according to any one of claims 1 to 7.

10. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the remote sensing image classification method according to any one of claims 1 to 7.
CN202110882078.0A 2021-08-02 2021-08-02 Remote sensing image classification method and device Active CN113723464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110882078.0A CN113723464B (en) 2021-08-02 2021-08-02 Remote sensing image classification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110882078.0A CN113723464B (en) 2021-08-02 2021-08-02 Remote sensing image classification method and device

Publications (2)

Publication Number Publication Date
CN113723464A true CN113723464A (en) 2021-11-30
CN113723464B CN113723464B (en) 2023-10-03

Family

ID=78674738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110882078.0A Active CN113723464B (en) 2021-08-02 2021-08-02 Remote sensing image classification method and device

Country Status (1)

Country Link
CN (1) CN113723464B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321830A (en) * 2019-06-28 2019-10-11 北京邮电大学 A kind of Chinese character string picture OCR recognition methods neural network based
CN111325259A (en) * 2020-02-14 2020-06-23 武汉大学 Remote sensing image classification method based on deep learning and binary coding
CN111968156A (en) * 2020-07-28 2020-11-20 国网福建省电力有限公司 Adaptive hyper-feature fusion visual tracking method
CN111985487A (en) * 2020-08-31 2020-11-24 香港中文大学(深圳) Remote sensing image target extraction method, electronic equipment and storage medium
CN112101271A (en) * 2020-09-23 2020-12-18 台州学院 Hyperspectral remote sensing image classification method and device
CN112257496A (en) * 2020-09-14 2021-01-22 中国电力科学研究院有限公司 Deep learning-based power transmission channel surrounding environment classification method and system
CN113139532A (en) * 2021-06-22 2021-07-20 中国地质大学(武汉) Classification method based on multi-output classification model, computer equipment and medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHUNJU ZHANG et al.: "Multi-Scale Dense Networks for Hyperspectral Remote Sensing Image Classification", IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 11, pages 9201-9222, XP011755632, DOI: 10.1109/TGRS.2019.2925615 *
LIU Dawei et al.: "Research on Classification of High-Resolution Remote Sensing Images Based on Deep Learning", Acta Optica Sinica, vol. 36, no. 4, pages 1-9 *
SUN Yu: "Object-Based Convolutional Neural Networks with Combined Features for High-Resolution Image Classification", China Master's Theses Full-text Database, pages 1-104 *

Also Published As

Publication number Publication date
CN113723464B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN110929607B (en) A remote sensing identification method and system for urban building construction progress
CN109753885B (en) Target detection method and device and pedestrian detection method and system
Zhao et al. Transfer learning with fully pretrained deep convolution networks for land-use classification
CN107918776B (en) A land use planning method, system and electronic device based on machine vision
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
CN113168510A (en) Segmenting objects a priori by refining shape
CN110135354B (en) Change detection method based on live-action three-dimensional model
CN109409384A (en) Image-recognizing method, device, medium and equipment based on fine granularity image
CN113177456B (en) Remote sensing target detection method based on single-stage full convolution network and multi-feature fusion
CN107784288A (en) A kind of iteration positioning formula method for detecting human face based on deep neural network
CN117237808A (en) Remote sensing image target detection method and system based on ODC-YOLO network
CN114519819B (en) Remote sensing image target detection method based on global context awareness
CN113537180B (en) Tree obstacle identification method and device, computer equipment and storage medium
CN114463503B (en) Method and device for integrating three-dimensional model and geographic information system
CN111583148A (en) Rock core image reconstruction method based on generation countermeasure network
CN115272887A (en) Coastal garbage identification method, device and equipment based on UAV detection
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN116740135B (en) Infrared dim target tracking method and device, electronic equipment and storage medium
CN112668461B (en) Intelligent supervision system with wild animal identification function
CN112528058A (en) Fine-grained image classification method based on image attribute active learning
CN115953330B (en) Texture optimization method, device, equipment and storage medium for virtual scene image
CN113723464A (en) Remote sensing image classification method and device
CN117576303A (en) Three-dimensional image generation method, device, equipment and storage medium
CN115393264A (en) Pin-level defect identification method for unmanned aerial vehicle inspection image
Betsas et al. Incad: 2D Vector Drawings Creation Using Instance Segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant