CN113723464B - Remote sensing image classification method and device

Remote sensing image classification method and device

Info

Publication number
CN113723464B
Authority
CN
China
Prior art keywords
grid
image
sub
sample image
image object
Prior art date
Legal status
Active
Application number
CN202110882078.0A
Other languages
Chinese (zh)
Other versions
CN113723464A (en)
Inventor
杜世宏
刘波
杜守航
张修远
Current Assignee
Peking University
Original Assignee
Peking University
Priority date
Filing date
Publication date
Application filed by Peking University
Priority to CN202110882078.0A
Publication of CN113723464A
Application granted
Publication of CN113723464B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing


Abstract

The invention provides a remote sensing image classification method and device, comprising: acquiring an image object to be classified; and inputting the image object to be classified into a classification model to obtain a classification result output by the classification model. The classification model is trained on object features and depth features of sample image objects together with the class labels of those sample image objects. By training the classification model on both the object features of the sample image object, such as its shape and spatial characteristics, and the depth features learned through deep learning, a better model training effect is obtained, so that the classification result of the image object to be classified is obtained more reliably, remote sensing images are classified rapidly and accurately, and the method can be applied to land cover/land use mapping based on remote sensing images.

Description

Remote sensing image classification method and device
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to a remote sensing image classification method and device.
Background
Remote sensing image classification methods have evolved from pixel-based image classification to object-oriented image classification and, more recently, to deep learning-based image classification. Pixel-based image classification was mainly used to classify the lower-spatial-resolution remote sensing images of the early stages of remote sensing. As the spatial resolution of remote sensing images has improved, the internal heterogeneity of ground objects and the similarity between classes have grown, and the object-oriented image classification method is better suited to this situation. With the subsequent rise of deep learning, many deep learning-based remote sensing classification methods have also been developed.
Both the object-oriented image classification method and the deep learning-based image classification method are well suited to high-resolution remote sensing image classification. The main existing way of combining the two is to select some pixels from each image object, cut out image blocks centered on those pixels, classify the blocks with deep learning, and then vote over the block categories to obtain the category of the image object. This does not effectively combine the object-oriented and deep learning-based classification methods: the object-oriented method only acts as a boundary constraint on the classification, classification is still carried out pixel by pixel, and the more effective manually defined features are not used.
Disclosure of Invention
The invention provides a remote sensing image classification method and a remote sensing image classification device, which are used to overcome the defect in the prior art that the object-oriented image classification method and the deep learning-based image classification method cannot be fully combined, and to realize an effective combination of the two.
The invention provides a remote sensing image classification method, which comprises the following steps:
acquiring an image object to be classified;
inputting the image object to be classified into a classification model to obtain a classification result output by the classification model;
the classification model is obtained by training based on object features and depth features of the sample image object and class labels of the sample image object.
According to the remote sensing image classification method provided by the invention, training the classification model based on the object features and depth features of the sample image object and the class labels of the sample image object specifically comprises the following steps:
acquiring the sample image object;
extracting object features of the sample image object;
extracting depth characteristics of the sample image object;
superposing the object features and the depth features to obtain comprehensive features of the sample image object;
and training to obtain the classification model through the comprehensive characteristics of the sample image object and the class labels of the sample image object.
According to the remote sensing image classification method provided by the invention, extracting the depth features of the sample image object specifically comprises the following steps:
dividing grids on the sample image object through an image grid expression method to obtain a grid image set, wherein the grid image set comprises a plurality of sub-grid images with the same grid size;
respectively extracting depth features of each sub-grid image to obtain corresponding sub-depth features, wherein the sub-depth features are obtained by inputting the sub-grid images into a depth convolutional neural network model;
and aggregating each sub-depth feature by a depth feature aggregation method to obtain the depth feature of the sample image object.
According to the remote sensing image classification method provided by the invention, the grids are divided on the sample image object through the image grid expression method to obtain a grid image set, wherein the grid image set comprises a plurality of sub-grid images with the same grid size, and the method specifically comprises the following steps:
determining a unit moving grid size, wherein the unit moving grid size is consistent with the sub-grid image size;
determining an initial grid of a sample image object;
moving the initial grid by taking the size of the unit moving grid as a moving unit, and determining an image of the position of the unit moving grid as the sub-grid image when the unit moving grid contains pixels of the sample image object;
and determining the weight of each sub-grid image.
According to the remote sensing image classification method provided by the invention, determining the initial grid of the sample image object specifically comprises the following steps:
determining the center of gravity of a sample image object;
and taking the unit moving grid centered on the center of gravity as the initial grid.
According to the remote sensing image classification method provided by the invention, aggregating each sub-depth feature by a depth feature aggregation method to obtain the depth feature of the sample image object specifically comprises: aggregating the sub-depth features, weighted by the weights of the corresponding sub-grid images, to form the depth feature of the sample image object.
According to the remote sensing image classification method provided by the invention, the sub-depth feature is obtained by inputting a sub-grid image into a deep convolutional neural network model, which specifically comprises: inputting the sub-grid image into the deep convolutional neural network model, and taking the output of the last convolutional layer of the deep convolutional neural network model as the sub-depth feature.
The invention also provides a remote sensing image classification device, which comprises:
the image object acquisition unit is used for acquiring the image objects to be classified;
the image object classifying unit is used for inputting the image objects to be classified into a classifying model to obtain a classifying result output by the classifying model;
the classification model is obtained by training based on object features and depth features of the sample image object and class labels of the sample image object.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the steps of any one of the remote sensing image classification methods are realized when the processor executes the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the remote sensing image classification method as described in any of the above.
According to the remote sensing image classification method and device provided by the invention, the classification model is trained on both the object features of the sample image object, such as its shape and spatial characteristics, and the depth features obtained by deep learning, thereby combining the object-oriented image classification method used to extract object features with the deep learning image classification method used to extract depth features. As a result, the classification result of the image object to be classified is obtained more reliably, remote sensing images are classified rapidly and accurately, and the method can be applied to land cover/land use mapping based on remote sensing images.
Drawings
In order to illustrate the technical solutions of the invention or of the prior art more clearly, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below show only some embodiments of the invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a remote sensing image classification method provided by the invention;
FIG. 2 is a flow chart of a classification model training method provided by the invention;
FIG. 3 is a detailed flow chart of step 230 of FIG. 2;
FIG. 4 is a schematic representation of an image grid provided by the present invention;
FIG. 5 is a schematic diagram of a deep convolutional neural network model provided by the present invention;
FIG. 6 is a schematic diagram of a sub-depth feature aggregation process provided by the present invention;
FIG. 7 is a detailed flow chart of step 310 of FIG. 3;
FIG. 8 is a detailed flow chart of step 720 in FIG. 7;
FIG. 9 is a schematic diagram of a remote sensing image classification apparatus according to the present invention;
fig. 10 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The classification of remote sensing images essentially needs to answer four questions: what is the classification unit? What are the classification samples? How are the classification features extracted? How is the classification model selected? In an actual classification task the samples are generally fixed, so different remote sensing image classification methods differ mainly in the classification unit, the feature extraction and the classification model.
The main differences between the object-oriented image classification method and the pixel-based image classification method lie in the classification unit and the feature extraction. When the spatial resolution of the image is high, a single ground object is usually composed of many pixels and is often internally heterogeneous, so a pixel-based classification method cannot classify the ground object as a whole. Adjacent and similar pixels in the image therefore need to be aggregated into image objects before classification, a process called image segmentation. In addition, the object-oriented image classification method can extract features such as the shape and spatial relationships of an image object for classification, features that cannot be extracted at the pixel level, so it has clear advantages over pixel-based classification for high-resolution remote sensing images.
The main difference between the deep learning-based image classification method and the two methods above lies in the feature extraction. Pixel-based and object-oriented image classification require some features to be defined manually and then used for classification, whereas the deep learning-based method automatically learns effective classification features under the guidance of samples. Compared with manually defined features, the classification features learned automatically under sample guidance can represent the implicit information in the original data and are beneficial for remote sensing image classification.
In conclusion, both the object-oriented image classification method and the deep learning-based image classification method are well suited to high-resolution remote sensing image classification. However, the object-oriented image analysis method suffers from the limited expressive power of manually defined features, while the deep learning-based method, classifying pixel by pixel, cannot effectively model geographic objects. Combining the two image classification methods therefore enables rapid and accurate classification of remote sensing images and can be applied to land cover/land use mapping based on remote sensing images.
As shown in fig. 1, the present invention provides a remote sensing image classification method combining an object-oriented image classification method and a deep learning-based remote sensing image classification method, comprising:
step 110: acquiring an image object to be classified;
in the embodiment of the invention, the remote sensing image which is subjected to land coverage/classification is subjected to image segmentation to obtain the image object to be classified. Preferably, when the remote sensing image is segmented, multi-resolution segmentation is adopted, namely, different remote sensing image areas are segmented according to actual needs by adopting different resolutions.
Step 120: inputting the image object to be classified into a classification model to obtain a classification result output by the classification model;
the classification model is obtained by training based on object features and depth features of the sample image object and class labels of the sample image object.
In this step, each image object to be classified obtained by segmenting the remote sensing image is input into the trained classification model, and the corresponding classification result is output for each object.
In the embodiment of the present invention, as shown in fig. 2, the training process of the classification model specifically includes:
step 210: acquiring the sample image object;
step 220: extracting object features of the sample image object;
the method comprises the step of extracting and obtaining object features such as spectrum, geometry, texture and the like of an image by a method of analyzing the image.
Step 230: extracting depth characteristics of the sample image object;
step 240: superposing the object features and the depth features to obtain comprehensive features of the sample image object;
step 250: and training to obtain the classification model through the comprehensive characteristics of the sample image object and the class labels of the sample image object.
In the embodiment of the invention, the object features of the sample image object are extracted by an object-oriented image classification method, the depth features of the sample image object are extracted by a deep learning method, and a classification model is trained on the fused depth features and object features; for example, the classification model adopts a random forest model. The trained classification model identifies the category of each image object, and a land cover/land use thematic map is obtained.
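A minimal sketch of this training step is given below, assuming scikit-learn's RandomForestClassifier as the random forest model mentioned above; the comprehensive feature is obtained here by simply concatenating the object features and the aggregated depth features, and the hyperparameter values are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_classification_model(object_feats, depth_feats, labels):
    """object_feats: (N, d1); depth_feats: (N, d2); labels: (N,) class labels."""
    comprehensive = np.concatenate([object_feats, depth_feats], axis=1)  # feature superposition
    model = RandomForestClassifier(n_estimators=200, random_state=0)     # illustrative settings
    model.fit(comprehensive, labels)
    return model

# At inference time, the same fused features are computed for an image object
# to be classified and passed to model.predict().
```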
In the embodiment of the present invention, as shown in fig. 3, step 230 specifically includes:
step 310: dividing grids on the sample image object through an image grid expression method to obtain a grid image set, wherein the grid image set comprises a plurality of sub-grid images with the same grid size; fig. 4 is a schematic diagram showing the sample image object expressed as a grid image set. The step is to express the grid of the sample image object.
Step 320: respectively extracting depth features of each sub-grid image to obtain corresponding sub-depth features, wherein the sub-depth features are obtained by inputting the sub-grid images into a depth convolutional neural network model;
as shown in fig. 5, the deep convolutional neural network model is generally composed of a convolutional layer, a pooling layer, and a fully-connected layer. The sample image object with the category label can be used for training a deep convolutional neural network model after grid expression.
The sub-depth feature of a sub-grid image can be extracted by inputting the sub-grid image into the trained deep convolutional neural network model and taking the output of the last convolutional layer as the feature.
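The following sketch illustrates one way to take the output of the last convolutional layer as the sub-depth feature; a torchvision ResNet-18 is used as a hypothetical backbone, since the patent does not specify the network architecture.

```python
import torch
import torchvision

# Hypothetical backbone; in practice this would be the trained model of fig. 5.
backbone = torchvision.models.resnet18(weights=None)
backbone.eval()
# Keep everything up to the last convolutional stage, dropping pooling and FC layers.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

def sub_depth_feature(sub_grid_image: torch.Tensor) -> torch.Tensor:
    """sub_grid_image: (1, 3, s, s) tensor; returns the (m, h, w) last-conv feature map."""
    with torch.no_grad():
        return feature_extractor(sub_grid_image).squeeze(0)
```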
Step 330: aggregating the sub-depth features by a depth feature aggregation method to obtain the depth feature of the sample image object.
A schematic diagram of the depth feature aggregation process is shown in fig. 6.
In the embodiment of the present invention, the sample image object has an irregular and variable shape, while the input of the deep convolutional neural network model used in step 230 is generally a regular image block. The main purpose of the grid expression of the sample image object is therefore to bridge the gap between the irregular shape of the sample image object and the network's requirement for regular image blocks as input. Accordingly, the grid size of each sub-grid image in the grid image set obtained in step 310 is consistent with the input image size required by the deep convolutional neural network model.
In the embodiment of the present invention, as shown in fig. 7, step 310 specifically includes:
step 710: determining a unit mobile grid size, wherein the unit mobile grid size is consistent with a sub-grid image size;
step 720: determining an initial grid of a sample image object;
Specifically, as shown in fig. 8, step 720 specifically includes:
step 810: determining the center of gravity of a sample image object;
step 820: and taking the unit mobile grid taking the gravity center as the initial grid.
Specifically, the initial grid determination process is as follows. Let the boundary of the sample image object consist of n points with coordinate sequence {(x_1, y_1), (x_2, y_2), …, (x_n, y_n), (x_{n+1}, y_{n+1})}, where x_1 = x_{n+1} and y_1 = y_{n+1}. The center of gravity (x̄, ȳ) of the sample image object and the included angle θ between the main direction of the sample image object and the x-axis can then be calculated as x̄ = Q_y / A, ȳ = Q_x / A and θ = (1/2)·arctan(2·I_xy / (I_yy - I_xx)), where Q_x represents the moment of the image object about the x-axis, Q_y the moment of the image object about the y-axis, I_xx the moment of inertia of the image object about the x-axis, I_yy the moment of inertia about the y-axis, I_xy the product of inertia of the image object, and A the area of the image object.
A local coordinate system can be established by taking the center of gravity of the sample image object as the origin of coordinates and the main direction of the sample image object as the x-axis. The straight line x = 0 is then taken; this line may have several intersection points with the boundary of the sample image object, with ordinates {y'_1, y'_2, …, y'_n}. For the mean ȳ'_i = (y'_i + y'_{i+1}) / 2 of two adjacent ordinates, if the point (0, ȳ'_i) lies inside the sample image object, a grid of size s×s (i.e., the size of the unit moving grid) is created centered at that point, and this grid is taken as the initial grid of the sample image object's grid expression.
Step 730: moving the initial grid by taking the size of the unit moving grid as a moving unit, and determining an image of the position of the unit moving grid as the sub-grid image when the unit moving grid contains pixels of the sample image object;
and searching other grids around the initial grid as a center, and adding the grids into the grid image set of the sample image object if the grids contain pixels of the sample image object.
Step 740: and determining the weight of each sub-grid image.
In this step, the weight of a sub-grid image is the number of pixels in the grid that belong to the sample image object divided by the total number of pixels of the sample image object.
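A minimal sketch of steps 710 to 740 on a binary object mask follows. For brevity it assumes an axis-aligned grid anchored at the pixel centroid, i.e. it omits the principal-direction rotation described above; the grid weights follow the definition just given, and all function and variable names are illustrative.

```python
import numpy as np

def grid_expression(mask: np.ndarray, s: int):
    """mask: boolean H x W for one image object; s: unit moving grid size."""
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())            # approximate center of gravity
    total = mask.sum()                                  # total pixels of the object
    oy, ox = (cy - s // 2) % s, (cx - s // 2) % s       # offsets so one grid is centered on it
    grids, weights = [], []
    for top in range(oy - s, mask.shape[0] + s, s):
        for left in range(ox - s, mask.shape[1] + s, s):
            window = mask[max(top, 0):top + s, max(left, 0):left + s]
            inside = window.sum()
            if inside > 0:                              # keep grids containing object pixels
                grids.append((top, left))               # top-left corner of the sub-grid image
                weights.append(inside / total)
    return grids, np.asarray(weights)
```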
In the embodiment of the present invention, combining step 330 and step 740, the process of aggregating the sub-depth features is as follows. The sub-depth feature extracted from each sub-grid image has shape s × s × m, where s is the length and width of the sub-depth feature and m is its dimension. The sub-depth feature of each sub-grid image can be aggregated into an m-dimensional feature vector by applying an aggregation operator over its length and width, where the aggregation operator can be, for example, the mean or the maximum. Finally, the feature vectors of all sub-grid images of the sample image object are weighted and summed to obtain the depth feature of the sample image object.
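A short sketch of this aggregation, assuming a mean operator over the spatial dimensions (a maximum would work the same way); sub_feats is assumed to hold one (m, h, w) feature map per sub-grid image, matching the extractor sketched earlier.

```python
import numpy as np

def aggregate_depth_features(sub_feats, weights):
    """sub_feats: list of (m, h, w) arrays; weights: (n,) grid weights."""
    vectors = np.stack([f.mean(axis=(1, 2)) for f in sub_feats])   # pool each map to an m-vector
    return (weights[:, None] * vectors).sum(axis=0)                # weighted sum: object depth feature
```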
The remote sensing image classification device provided by the embodiment of the present invention is described below; the device described below and the method described above may be referred to in correspondence with each other. As shown in fig. 9, the embodiment of the present invention provides a remote sensing image classification device, which includes:
an image object obtaining unit 910, configured to obtain an image object to be classified;
in the embodiment of the invention, the remote sensing image which is subjected to land coverage/classification is subjected to image segmentation to obtain the image object to be classified. Preferably, when the remote sensing image is segmented, multi-resolution segmentation is adopted, namely, different remote sensing image areas are segmented according to actual needs by adopting different resolutions.
The image object classification unit 920 is configured to input the image object to be classified into a classification model, and obtain a classification result output by the classification model;
the classification model is obtained by training based on object features and depth features of the sample image object and class labels of the sample image object.
In the embodiment of the invention, each image object to be classified obtained by segmenting the remote sensing image is input into the trained classification model, and the corresponding classification result is output for each object.
In the embodiment of the present invention, the image object classifying unit 920 includes a classifying model training subunit, and the classifying model training subunit specifically includes:
the sample acquisition subunit is used for acquiring a sample image object;
an object feature extraction subunit, configured to extract object features of the sample image object;
the object feature extraction subunit extracts object features such as spectrum, geometry, texture and the like of the obtained image by a method of image analysis.
A depth feature extraction subunit, configured to extract a depth feature of the sample image object;
the characteristic synthesis subunit is used for superposing the object characteristics and the depth characteristics to obtain the comprehensive characteristics of the sample image object;
and the training subunit is used for training to obtain the classification model through the comprehensive characteristics of the sample image object and the class label of the sample image object.
In the embodiment of the invention, the object features of the sample image object are extracted by an object-oriented image analysis method, the depth features of the sample image object are extracted by a deep learning method, and a classification model is trained on the fused depth features and object features; for example, the classification model adopts a random forest model. The trained classification model identifies the category of each image object, and a land cover/land use thematic map is obtained.
The depth feature extraction subunit specifically includes:
the grid dividing subunit is used for dividing grids on the sample image object through an image grid expression method to obtain a grid image set, wherein the grid image set comprises a plurality of sub-grid images with the same grid size.
The sub-depth feature extraction subunit is used for respectively extracting depth features of each sub-grid image to obtain corresponding sub-depth features, wherein the sub-depth features are obtained by inputting the sub-grid images into a depth convolutional neural network model; the sub-depth features of the sub-grid images can be extracted by inputting the sub-grid images into the trained deep convolutional neural network model and selecting the last convolutional layer as the depth feature.
And the sub-depth feature aggregation subunit is used for aggregating all the sub-depth features through a depth feature aggregation method to obtain the depth feature of the sample image object.
In the embodiment of the invention, the sample image object has an irregular and variable shape, while the input of the deep convolutional neural network model adopted to obtain the depth feature of the image object is generally a regular image block. The main purpose of the grid expression of the sample image object is therefore to bridge the gap between the irregular shape of the sample image object and the network's requirement for regular image blocks as input. The grid size of each sub-grid image in the grid image set obtained by the grid dividing subunit is consistent with the input image size required by the deep convolutional neural network model.
Wherein the grid dividing subunit specifically comprises:
a unit moving mesh size determining subunit configured to determine a unit moving mesh size, wherein the unit moving mesh size is consistent with a sub-mesh image size;
an initial grid determining subunit of the sample image object, configured to determine an initial grid of the sample image object;
specifically, the initial grid determining subunit of the sample image object specifically includes:
a sample image object gravity center determining subunit, configured to determine a sample image object gravity center;
an initial grid determination subunit configured to take the unit moving grid centered on the center of gravity as the initial grid.
A grid searching subunit, configured to move with the initial grid as a center and with a unit moving grid size as a moving unit, and determine, when the unit moving grid contains pixels of the sample image object, that an image of the unit moving grid position is the sub-grid image;
and searching other grids around the initial grid as a center, and adding the grids into the grid image set of the sample image object if the grids contain pixels of the sample image object.
And the sub-grid image weight determining subunit is used for determining the sub-grid image weight.
In the embodiment of the present invention, the weight of a sub-grid image is the number of pixels in the grid that belong to the sample image object divided by the total number of pixels of the sample image object.
Fig. 10 illustrates the physical structure of an electronic device provided by an embodiment of the present invention. As shown in fig. 10, the electronic device may include: a processor 1010, a communication interface (Communications Interface) 1020, a memory 1030, and a communication bus 1040, wherein the processor 1010, the communication interface 1020, and the memory 1030 communicate with each other via the communication bus 1040. The processor 1010 may invoke logic instructions in the memory 1030 to perform a remote sensing image classification method comprising: acquiring an image object to be classified; inputting the image object to be classified into a classification model to obtain a classification result output by the classification model; the classification model is obtained by training based on object features and depth features of the sample image object and class labels of the sample image object.
Further, the logic instructions in the memory 1030 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer readable storage medium. Based on such understanding, the technical solution of the embodiments of the present invention, or the part contributing to the prior art, may essentially be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, an embodiment of the present invention further provides a computer program product, including a computer program stored on a non-transitory computer readable storage medium, the computer program including program instructions which, when executed by a computer, enable the computer to perform the remote sensing image classification method provided above, the method including: acquiring an image object to be classified; inputting the image object to be classified into a classification model to obtain a classification result output by the classification model; the classification model is obtained by training based on object features and depth features of the sample image object and class labels of the sample image object.
In yet another aspect, an embodiment of the present invention further provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the remote sensing image classification method provided above: acquiring an image object to be classified; inputting the image object to be classified into a classification model to obtain a classification result output by the classification model; the classification model is obtained by training based on object features and depth features of the sample image object and class labels of the sample image object.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without inventive effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A remote sensing image classification method, characterized by comprising the following steps:
acquiring an image object to be classified;
inputting the image object to be classified into a classification model to obtain a classification result output by the classification model;
the classification model is obtained by training based on object features and depth features of the sample image object and class labels of the sample image object;
the classification model is obtained by training based on object features and depth features of a sample image object and class labels of the sample image object, and specifically comprises the following steps:
acquiring the sample image object;
extracting object features of the sample image object; the object features are spectral, geometric or texture features of the image extracted by an object-oriented image analysis method;
extracting depth characteristics of the sample image object;
superposing the object features and the depth features to obtain comprehensive features of the sample image object;
training to obtain the classification model through comprehensive characteristics of the sample image objects and class labels of the sample image objects;
the extracting the depth feature of the sample image object specifically includes:
dividing a grid on the sample image object to obtain a grid image set, wherein the grid image set comprises a plurality of sub-grid images with the same grid size;
respectively extracting depth features of each sub-grid image to obtain corresponding sub-depth features, wherein the sub-depth features are obtained by inputting the sub-grid images into a depth convolutional neural network model;
and aggregating all the sub-depth features to obtain the depth features of the sample image object.
2. The remote sensing image classification method according to claim 1, wherein the dividing a grid on the sample image object to obtain a grid image set, wherein the grid image set includes a plurality of sub-grid images with a uniform grid size, specifically includes:
determining a unit moving grid size, wherein the unit moving grid size is consistent with the sub-grid image size;
determining an initial grid of a sample image object;
moving the initial grid by taking the size of the unit moving grid as a moving unit, and determining an image of the position of the unit moving grid as the sub-grid image when the unit moving grid contains pixels of the sample image object;
and determining the weight of each sub-grid image.
3. The method of claim 2, wherein determining the initial grid of the sample image object specifically comprises:
determining the center of gravity of a sample image object;
and taking the unit moving grid centered on the center of gravity as the initial grid.
4. The remote sensing image classification method according to claim 2, wherein the aggregating each sub-depth feature to obtain the depth feature of the sample image object specifically comprises: aggregating the sub-depth features, weighted by the weights of the corresponding sub-grid images, to form the depth feature of the sample image object.
5. The remote sensing image classification method according to claim 1, wherein the sub-depth feature is obtained by inputting a sub-grid image into a deep convolutional neural network model, and specifically comprises: inputting the sub-grid image into the deep convolutional neural network model, and taking the output of the last convolutional layer of the deep convolutional neural network model as the sub-depth feature.
6. A remote sensing image classification device, comprising:
the image object acquisition unit is used for acquiring the image objects to be classified;
the image object classifying unit is used for inputting the image objects to be classified into a classifying model to obtain a classifying result output by the classifying model;
the classification model is obtained by training based on object features and depth features of the sample image object and class labels of the sample image object;
the image object classifying unit comprises a classifying model training subunit, and the classifying model training subunit specifically comprises:
the sample acquisition subunit is used for acquiring a sample image object;
an object feature extraction subunit, configured to extract object features of the sample image object;
the object feature extraction subunit extracts the spectral, geometric or texture features of the image by an object-oriented image analysis method;
a depth feature extraction subunit, configured to extract a depth feature of the sample image object;
the characteristic synthesis subunit is used for superposing the object characteristics and the depth characteristics to obtain the comprehensive characteristics of the sample image object;
the training subunit is used for training to obtain the classification model through the comprehensive characteristics of the sample image object and the class labels of the sample image object;
the depth feature extraction subunit specifically includes:
a grid dividing subunit, configured to divide a grid on the sample image object to obtain a grid image set, where the grid image set includes a plurality of sub-grid images with a uniform grid size;
the sub-depth feature extraction subunit is used for respectively extracting depth features of each sub-grid image to obtain corresponding sub-depth features, wherein the sub-depth features are obtained by inputting the sub-grid images into a depth convolutional neural network model;
and the sub-depth feature aggregation subunit is used for aggregating all the sub-depth features to obtain the depth features of the sample image object.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor performs the steps of the remote sensing image classification method of any one of claims 1 to 5 when the program is executed.
8. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor performs the steps of the remote sensing image classification method of any of claims 1 to 5.
CN202110882078.0A 2021-08-02 2021-08-02 Remote sensing image classification method and device Active CN113723464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110882078.0A CN113723464B (en) 2021-08-02 2021-08-02 Remote sensing image classification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110882078.0A CN113723464B (en) 2021-08-02 2021-08-02 Remote sensing image classification method and device

Publications (2)

Publication Number Publication Date
CN113723464A CN113723464A (en) 2021-11-30
CN113723464B (en) 2023-10-03

Family

ID=78674738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110882078.0A Active CN113723464B (en) 2021-08-02 2021-08-02 Remote sensing image classification method and device

Country Status (1)

Country Link
CN (1) CN113723464B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321830A (en) * 2019-06-28 2019-10-11 北京邮电大学 A kind of Chinese character string picture OCR recognition methods neural network based
CN111325259A (en) * 2020-02-14 2020-06-23 武汉大学 Remote sensing image classification method based on deep learning and binary coding
CN111968156A (en) * 2020-07-28 2020-11-20 国网福建省电力有限公司 Adaptive hyper-feature fusion visual tracking method
CN111985487A (en) * 2020-08-31 2020-11-24 香港中文大学(深圳) Remote sensing image target extraction method, electronic equipment and storage medium
CN112257496A (en) * 2020-09-14 2021-01-22 中国电力科学研究院有限公司 Deep learning-based power transmission channel surrounding environment classification method and system
CN112101271A (en) * 2020-09-23 2020-12-18 台州学院 Hyperspectral remote sensing image classification method and device
CN113139532A (en) * 2021-06-22 2021-07-20 中国地质大学(武汉) Classification method based on multi-output classification model, computer equipment and medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multi-Scale Dense Networks for Hyperspectral Remote Sensing Image Classification; Chunju Zhang et al.; IEEE Transactions on Geoscience and Remote Sensing; Vol. 57, No. 11; pp. 9201-9222 *
Research on high-resolution remote sensing image classification based on deep learning; 刘大伟 et al.; 《光学学报》 (Acta Optica Sinica); Vol. 36, No. 4; pp. 1-9 *
Research on object-based convolutional neural networks with combined features for high-resolution image classification; 孙玉; 《中国优秀硕士学位论文全文数据库》 (China Masters' Theses Full-text Database); pp. 1-104 *

Also Published As

Publication number Publication date
CN113723464A (en) 2021-11-30

Similar Documents

Publication Publication Date Title
CN107292256B (en) Auxiliary task-based deep convolution wavelet neural network expression recognition method
CN110929607B (en) Remote sensing identification method and system for urban building construction progress
CN108510504B (en) Image segmentation method and device
CN110163813B (en) Image rain removing method and device, readable storage medium and terminal equipment
CN108108751B (en) Scene recognition method based on convolution multi-feature and deep random forest
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
WO2020062360A1 (en) Image fusion classification method and apparatus
CN113168510A (en) Segmenting objects a priori by refining shape
CN111126258A (en) Image recognition method and related device
CN107808138B (en) Communication signal identification method based on FasterR-CNN
CN113177456B (en) Remote sensing target detection method based on single-stage full convolution network and multi-feature fusion
CN113095333B (en) Unsupervised feature point detection method and unsupervised feature point detection device
CN113537180B (en) Tree obstacle identification method and device, computer equipment and storage medium
CN112132145B (en) Image classification method and system based on model extended convolutional neural network
CN110991444A (en) Complex scene-oriented license plate recognition method and device
CN107871314A (en) A kind of sensitive image discrimination method and device
CN111160114A (en) Gesture recognition method, device, equipment and computer readable storage medium
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN109784171A (en) Car damage identification method for screening images, device, readable storage medium storing program for executing and server
CN111666813B (en) Subcutaneous sweat gland extraction method of three-dimensional convolutional neural network based on non-local information
CN112507888A (en) Building identification method and device
CN114882306B (en) Topography scale identification method and device, storage medium and electronic equipment
CN111428191A (en) Antenna downward inclination angle calculation method and device based on knowledge distillation and storage medium
CN112418256A (en) Classification, model training and information searching method, system and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant