CN115187867A - Multi-source remote sensing image fusion method and system based on deep learning

Info

Publication number: CN115187867A
Application number: CN202210880890.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: remote sensing images, image fusion, pixel, multi-source
Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Inventors: 李玲玲, 赵雪专
Original and Current Assignee: Zhengzhou University of Aeronautics (the listed assignee may be inaccurate)
Filing date: 2022-07-26
Publication date: 2022-10-14
Related filing: LU502959B1 (Luxembourg), claiming priority to CN202210880890.4A

Classifications

    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G06N 20/00 Machine learning
    • G06T 7/33 Image registration using feature-based methods
    • G06V 10/32 Normalisation of the pattern dimensions
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06V 10/75 Organisation of the matching processes, e.g. coarse-fine or multi-scale approaches
    • G06V 10/751 Comparing pixel values or feature values having positional relevance, e.g. template matching
    • G06V 10/764 Recognition using machine-learning classification, e.g. of video objects
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature-extraction or classification level
    • G06V 10/803 Fusion of input or preprocessed data
    • G06N 3/0499 Feedforward networks
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20221 Image fusion; image merging

Abstract

The invention falls within the technical field of image fusion and relates in particular to a multi-source remote sensing image fusion method and system based on deep learning. The method comprises the following steps: acquiring remote sensing images from all the different sources; preprocessing all of the remote sensing images and counting the position distribution of each pixel point; dividing feature regions according to the position distribution of the pixel points and identifying the region features; and matching the remote sensing images from different sources according to the region features to complete image fusion. In the method provided by the embodiment of the invention, multiple groups of remote sensing images from different sources are identified, the features contained in each group are determined, and the relative positions and the shapes of those features are then established, so that the features in the different-source images can be matched quickly. This ensures the accuracy of feature identification while improving the efficiency of feature matching.

Description

Multi-source remote sensing image fusion method and system based on deep learning
Technical Field
The invention belongs to the technical field of image fusion, and particularly relates to a multi-source remote sensing image fusion method and system based on deep learning.
Background
Deep learning is a branch of machine learning, and machine learning is an essential path toward artificial intelligence. The concept of deep learning derives from research on artificial neural networks; a multi-layer perceptron containing several hidden layers is a typical deep learning structure. Deep learning combines low-level features to form more abstract high-level representations of attribute categories or features, thereby discovering distributed feature representations of the data.
In remote sensing, data fusion is a form of attribute fusion: multi-source remote sensing image data covering the same area are intelligently synthesized to produce estimates and judgments that are more accurate, complete, and reliable than any single information source could provide. Fusion improves robustness, raises the spatial resolution and clarity of the imagery, improves the accuracy and reliability of plane mapping, enhances interpretation and dynamic-monitoring capability, reduces ambiguity, and increases the utilization of remote sensing image data.
In current multi-source remote sensing image fusion, the image features must be determined while preserving the precision of the fused image. In the prior art, however, feature identification is slow, which makes fusion inefficient.
Disclosure of Invention
The embodiment of the invention aims to provide a multi-source remote sensing image fusion method based on deep learning, so as to solve the prior-art problem that feature identification is slow and fusion efficiency is therefore low.
The embodiment of the invention is realized as a multi-source remote sensing image fusion method based on deep learning, which comprises the following steps:
acquiring remote sensing images from all the different sources;
preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point;
dividing feature regions according to the position distribution of the pixel points and identifying the region features;
and matching the remote sensing images from different sources according to the region features to complete image fusion.
Preferably, the step of preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point specifically comprises:
carrying out grayscale processing on the remote sensing images from different sources to obtain grayscale images;
establishing a coordinate system and determining the coordinates of each pixel according to the pixel position;
and determining the gray value corresponding to each pixel.
Preferably, the step of dividing feature regions according to the position distribution of the pixel points and identifying the region features specifically comprises:
selecting pixel points one by one as reference pixel points, and calculating the difference between the gray value of every other pixel point and the gray value of the reference pixel point;
dividing each connected region formed by pixel points whose difference values are smaller than a first preset value into a feature region, wherein the dispersion of the pixel points within a feature region is lower than a second preset value;
and determining the relative positions among the feature regions and the outline of each feature region to obtain the region features.
Preferably, the step of matching the remote sensing images from different sources according to the region features to complete image fusion specifically comprises:
classifying the region features according to their positions in the corresponding source images;
and adjusting the positions of the source images according to the correspondence among region features of the same class to complete image fusion.
Preferably, before image fusion, the remote sensing images from different sources are cut to the same size.
Preferably, at least two region features with coinciding positions exist between the remote sensing images from different sources.
Another objective of an embodiment of the present invention is to provide a multi-source remote sensing image fusion system based on deep learning, wherein the system comprises:
the image acquisition module, which is used for acquiring remote sensing images from all the different sources;
the pixel statistics module, which is used for preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point;
the feature identification module, which is used for dividing feature regions according to the position distribution of the pixel points and identifying the region features;
and the image fusion module, which is used for matching the remote sensing images from different sources according to the region features to complete image fusion.
Preferably, the pixel statistics module comprises:
the image preprocessing unit, which is used for carrying out grayscale processing on the remote sensing images from different sources to obtain grayscale images;
the coordinate identification unit, which is used for establishing a coordinate system and determining the coordinates of each pixel according to the pixel position;
and the gray value determining unit, which is used for determining the gray value corresponding to each pixel.
Preferably, the feature identification module comprises:
the gray value calculation unit, which is used for selecting pixel points one by one as reference pixel points and calculating the difference between the gray value of every other pixel point and the gray value of the reference pixel point;
the region feature dividing unit, which is used for dividing each connected region formed by pixel points whose difference values are smaller than a first preset value into a feature region, wherein the dispersion of the pixel points within a feature region is lower than a second preset value;
and the feature generating unit, which is used for determining the relative positions among the feature regions and the outline of each feature region to obtain the region features.
Preferably, the image fusion module comprises:
the feature classification unit, which is used for classifying the region features according to their positions in the corresponding source images;
and the positioning fusion unit, which is used for adjusting the positions of the source images according to the correspondence among region features of the same class to complete image fusion.
According to the multi-source remote sensing image fusion method based on deep learning provided by the embodiment of the invention, multiple groups of remote sensing images from different sources are identified, the features contained in each group are determined, and the relative positions and the shapes of those features are then established, so that the features in the different-source images can be matched quickly. This ensures the accuracy of feature identification while improving the efficiency of feature matching.
Drawings
FIG. 1 is a flowchart of a multi-source remote sensing image fusion method based on deep learning according to an embodiment of the present invention;
FIG. 2 is a flowchart of the step of preprocessing all remote sensing images from different sources and counting the position distribution of each pixel point according to an embodiment of the present invention;
FIG. 3 is a flowchart of the step of dividing feature regions according to the position distribution of the pixel points and identifying the region features according to an embodiment of the present invention;
FIG. 4 is a flowchart of the step of matching the remote sensing images from different sources according to the region features to complete image fusion according to an embodiment of the present invention;
FIG. 5 is an architecture diagram of a multi-source remote sensing image fusion system based on deep learning according to an embodiment of the present invention;
FIG. 6 is a block diagram of a pixel statistics module according to an embodiment of the present invention;
FIG. 7 is an architecture diagram of a feature identification module according to an embodiment of the present invention;
FIG. 8 is an architecture diagram of an image fusion module according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that the terms "first", "second", and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, a first xx script may be referred to as a second xx script, and similarly, a second xx script may be referred to as a first xx script, without departing from the scope of the present application.
In remote sensing, data fusion is a form of attribute fusion: multi-source remote sensing image data covering the same area are intelligently synthesized to produce estimates and judgments that are more accurate, complete, and reliable than any single information source could provide. Fusion improves robustness, raises the spatial resolution and clarity of the imagery, improves the accuracy and reliability of planar mapping and classification, enhances interpretation and dynamic-monitoring capability, reduces ambiguity, and increases the utilization of remote sensing image data. In current multi-source remote sensing image fusion, the image features must be determined while preserving the precision of the fused image; in the prior art, however, feature identification is slow, which makes fusion inefficient.
In the invention, multiple groups of remote sensing images from different sources are identified, the features contained in each group are determined, and the relative positions and the shapes of those features are then established, so that the features in the different-source images can be matched quickly. This ensures the accuracy of feature identification while improving the efficiency of feature matching.
As shown in FIG. 1, an embodiment of the present invention provides a multi-source remote sensing image fusion method based on deep learning, wherein the method includes:
S100, acquiring remote sensing images from all the different sources.
In this step, all of the remote sensing images from different sources are acquired. The remote sensing images may be multi-temporal, multispectral, multi-sensor, and multi-platform images of the same region; fusing them yields a more reliable remote sensing image.
S200, preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point.
In this step, all of the remote sensing images from different sources are preprocessed. The preprocessing comprises at least grayscale processing, which converts each image into a grayscale image and thereby reduces the amount of data to be processed. Each pixel is then analyzed: the gray value of each pixel point is determined, and the coordinates of each pixel are determined by constructing a coordinate system, so that the position distribution of the pixels is obtained.
S300, dividing feature regions according to the position distribution of the pixel points and identifying the region features.
In this step, feature regions are divided according to the position distribution of the pixel points: the pixel points are grouped according to their gray values, and adjacent pixel points whose gray-value differences are smaller than a preset value jointly form a connected feature region.
S400, matching the remote sensing images from different sources according to the region features to complete image fusion.
In this step, the remote sensing images from different sources are matched according to the region features. Since there are several remote sensing images from different sources, each image has its own corresponding region features; for the same building, for example, a corresponding region feature exists in each image. The positional relationship between the images is therefore adjusted according to the positional relationship between the region features in the different images, and fusion processing is then performed to obtain the fused image.
As shown in FIG. 2, as a preferred embodiment of the present invention, the step of preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point specifically includes:
S201, carrying out grayscale processing on the remote sensing images from different sources to obtain grayscale images.
In this step, the remote sensing images from different sources undergo grayscale processing, so that the color of each pixel point is reflected by a single gray value; the grayscale images greatly reduce the amount of data to be processed and increase the processing speed.
S202, establishing a coordinate system and determining the coordinates of each pixel according to the pixel position.
S203, determining the gray value corresponding to each pixel.
In this step, a coordinate system is established with an arbitrary pixel as the origin, so that the coordinates of each pixel are determined; the horizontal and vertical coordinates are integers. For a remote sensing image 1000 pixels wide and 1000 pixels high, the pixel in the first row and first column has coordinate (0, 0) and the pixel in the last row and last column has coordinate (999, 999). Each pixel then has a corresponding coordinate, and the gray value of each pixel is determined.
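To make this concrete, the following Python sketch illustrates steps S201 to S203. It is a minimal illustration under stated assumptions rather than the patent's implementation: the patent does not prescribe a grayscale formula, so the common ITU-R BT.601 luma weights are used here, and the 1000 x 1000 input image is synthetic.

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Grayscale processing (S201): collapse an H x W x 3 RGB image to one
    channel. The BT.601 weights below are an illustrative choice; the
    patent only requires 'grayscale processing'."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)

def pixel_statistics(gray: np.ndarray):
    """Coordinate system and gray values (S202/S203): integer (x, y)
    coordinates with the first pixel as origin (0, 0), plus the gray
    value at every pixel."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (x, y) per pixel
    values = gray.ravel()
    return coords, values

# Synthetic 1000 x 1000 image, matching the example sizes in the text.
rgb = np.random.randint(0, 256, size=(1000, 1000, 3), dtype=np.uint8)
gray = to_grayscale(rgb)
coords, values = pixel_statistics(gray)
print(coords[0], values[0])    # first pixel: coordinate (0, 0)
print(coords[-1], values[-1])  # last pixel: coordinate (999, 999)
```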
As shown in FIG. 3, as a preferred embodiment of the present invention, the step of dividing feature regions according to the position distribution of the pixel points and identifying the region features specifically includes:
S301, selecting pixel points one by one as reference pixel points, and calculating the difference between the gray value of every other pixel point and the gray value of the reference pixel point.
In this step, pixel points are selected one by one according to their pixel coordinates to serve as the reference pixel point, so that the reference pixel point traverses all pixels in a group of remote sensing images; after any pixel point is selected, the difference between its gray value and the gray values of all other pixel points is calculated.
S302, dividing each connected region formed by pixel points whose difference values are smaller than a first preset value into a feature region, wherein the dispersion of the pixel points within a feature region is lower than a second preset value.
In this step, each connected region formed by pixel points whose difference values are smaller than the first preset value is divided into a feature region: all pixel points whose difference values are smaller than the first preset value are counted, and regions are divided according to the connectivity between these pixel points, yielding several candidate feature regions. The dispersion of the pixel points in each candidate region is then calculated, characterized by the standard deviation; a region whose standard deviation is lower than the second preset value can serve as a feature region, and a region whose number of pixel points is lower than a preset threshold is discarded.
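A minimal sketch of this segmentation step is given below, assuming SciPy's ndimage is available for connected-component labelling. The first preset value, the second preset value, and the minimum region size are not specified in the patent, so the numbers here are placeholders; for brevity the sketch processes a single reference pixel rather than traversing all pixels as S301 describes.

```python
import numpy as np
from scipy import ndimage

def feature_regions(gray, ref_xy,
                    first_preset=10,    # max gray-value difference (assumed)
                    second_preset=5.0,  # max in-region std deviation (assumed)
                    min_pixels=50):     # discard threshold for tiny regions (assumed)
    """Divide connected regions of pixels whose gray-value difference from
    the reference pixel is below the first preset value, then keep only
    regions whose dispersion (standard deviation) is below the second
    preset value."""
    x, y = ref_xy
    diff = np.abs(gray.astype(np.int16) - int(gray[y, x]))
    labels, n = ndimage.label(diff < first_preset)  # connected components
    regions = []
    for lab in range(1, n + 1):
        mask = labels == lab
        if mask.sum() < min_pixels:
            continue  # too few pixel points: discard, as the text requires
        if gray[mask].std() < second_preset:
            regions.append(mask)  # dispersion low enough: a feature region
    return regions

gray = np.random.randint(0, 256, (200, 200), dtype=np.uint8)
print(len(feature_regions(gray, ref_xy=(100, 100))))
```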
S303, determining the relative positions among the feature regions and the outline of each feature region to obtain the region features.
In this step, the relative positions among the feature regions are determined. Specifically, each feature region is labeled: if there are three groups of remote sensing images A, B, and C, they contain three groups of feature regions, namely A1, A2, and A3; B1, B2, and B3; and C1, C2, and C3. The center point of each feature region is calculated, and the center points of all feature regions in the same image are connected by line segments to obtain a feature-region connection graph. By rotating and/or scaling these graphs, the connection graphs corresponding to the different images are brought into coincidence; the feature regions that then overlap correspond to one another. Finally, the outline of each feature region is determined to obtain the region features.
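The alignment of the feature-region connection graphs can be sketched as follows. The patent leaves the rotation/scaling procedure open, so this illustration assumes the correspondence between center points is already known and solves for the best-fitting similarity transform (rotation plus scale) in closed form, in the style of the Umeyama method; it is one possible realization, not the patent's prescribed algorithm.

```python
import numpy as np

def centroids(regions):
    """Center point (x, y) of each boolean feature-region mask."""
    pts = []
    for mask in regions:
        ys, xs = np.nonzero(mask)
        pts.append([xs.mean(), ys.mean()])
    return np.asarray(pts)

def similarity_align(src, dst):
    """Rotation R, scale s, and translation t that best map the src center
    points onto dst in the least-squares sense, so the two connection
    graphs coincide: p -> s * R @ p + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dc.T @ sc)
    d = np.ones(len(S))
    if np.linalg.det(U @ Vt) < 0:
        d[-1] = -1.0                 # forbid reflections
    R = U @ np.diag(d) @ Vt
    s = (S * d).sum() / (sc ** 2).sum()
    t = mu_d - s * R @ mu_s
    return R, s, t

# Three center points per image; the second set is rotated and scaled.
a = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 5.0]])
theta = np.deg2rad(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
b = 2.0 * a @ rot.T + np.array([3.0, 4.0])
R, s, t = similarity_align(a, b)
print(np.allclose(s * a @ R.T + t, b))  # True: the graphs coincide
```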
As shown in FIG. 4, as a preferred embodiment of the present invention, the step of matching the remote sensing images from different sources according to the region features to complete image fusion specifically includes:
S401, classifying the region features according to their positions in the corresponding source images.
S402, adjusting the positions of the source images according to the correspondence among region features of the same class to complete image fusion.
In this step, the region features are classified: region features that coincide across different images are treated as the same class, and the same-class features on the different images are brought into coincidence by scaling, so that the several remote sensing images overlap completely. The remote sensing images from different sources are then cut to the same size, after which fusion processing can be carried out directly to obtain the fused image. At least two region features with coinciding positions exist between the remote sensing images from different sources.
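Finally, once the same-class region features coincide, cutting to a common size and fusing can be as simple as the sketch below. Pixel-wise averaging is an assumed choice here; the patent only states that fusion can proceed directly at this point.

```python
import numpy as np

def fuse_aligned(images):
    """Cut the already-aligned grayscale images to the same size and fuse
    them by pixel-wise averaging."""
    h = min(img.shape[0] for img in images)
    w = min(img.shape[1] for img in images)
    stack = np.stack([img[:h, :w].astype(np.float64) for img in images])
    return stack.mean(axis=0).astype(np.uint8)

# Two aligned images of slightly different sizes.
a = np.random.randint(0, 256, (1000, 1000), dtype=np.uint8)
b = np.random.randint(0, 256, (1002, 998), dtype=np.uint8)
print(fuse_aligned([a, b]).shape)  # (1000, 998)
```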
As shown in FIG. 5, a multi-source remote sensing image fusion system based on deep learning provided in an embodiment of the present invention includes:
the image acquisition module 100, which is used for acquiring remote sensing images from all the different sources.
In the system, the image acquisition module 100 acquires all of the remote sensing images from different sources. The remote sensing images may be multi-temporal, multispectral, multi-sensor, and multi-platform images of the same region; fusing them yields a more reliable remote sensing image.
the pixel statistics module 200, which is used for preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point.
In the system, the pixel statistics module 200 preprocesses all of the remote sensing images from different sources. The preprocessing comprises at least grayscale processing, which converts the images into grayscale images and thereby reduces the amount of data to be processed. Each pixel is then analyzed: the gray value of each pixel point is determined, and the coordinates of each pixel are determined by constructing a coordinate system, so that the position distribution of the pixels is obtained.
the feature identification module 300, which is used for dividing feature regions according to the position distribution of the pixel points and identifying the region features.
In the system, the feature identification module 300 divides feature regions according to the position distribution of the pixel points: the pixel points are grouped according to their gray values, and adjacent pixel points whose gray-value differences are smaller than a preset value jointly form a connected feature region.
and the image fusion module 400, which is used for matching the remote sensing images from different sources according to the region features to complete image fusion.
In the system, the image fusion module 400 matches the remote sensing images from different sources according to the region features. Since there are several remote sensing images from different sources, each image has its own corresponding region features; for the same building, for example, a corresponding region feature exists in each image. The positional relationship between the images is therefore adjusted according to the positional relationship between the region features in the different images, and fusion processing is then performed to obtain the fused image.
As shown in FIG. 6, as a preferred embodiment of the present invention, the pixel statistics module 200 includes:
the image preprocessing unit 201, which is used for carrying out grayscale processing on the remote sensing images from different sources to obtain grayscale images.
In this module, the image preprocessing unit 201 performs grayscale processing on the remote sensing images from different sources, so that the color of each pixel point is reflected by a single gray value; the grayscale images greatly reduce the amount of data to be processed and increase the processing speed.
the coordinate identification unit 202, which is used for establishing a coordinate system and determining the coordinates of each pixel according to the pixel position.
and the gray value determining unit 203, which is used for determining the gray value corresponding to each pixel.
In this module, a coordinate system is established with an arbitrary pixel as the origin, so that the coordinates of each pixel are determined; the horizontal and vertical coordinates are integers. For a remote sensing image 1000 pixels wide and 1000 pixels high, the pixel in the first row and first column has coordinate (0, 0) and the pixel in the last row and last column has coordinate (999, 999). Each pixel then has a corresponding coordinate, and the gray value of each pixel is determined.
As shown in FIG. 7, as a preferred embodiment of the present invention, the feature identification module 300 includes:
the gray value calculation unit 301, which is used for selecting pixel points one by one as reference pixel points and calculating the difference between the gray value of every other pixel point and the gray value of the reference pixel point.
In this module, the gray value calculation unit 301 selects pixel points one by one according to their pixel coordinates to serve as the reference pixel point, traversing all pixels in a group of remote sensing images; after any pixel point is selected, the difference between its gray value and the gray values of all other pixel points is calculated.
the region feature dividing unit 302, which is used for dividing each connected region formed by pixel points whose difference values are smaller than a first preset value into a feature region, wherein the dispersion of the pixel points within a feature region is lower than a second preset value.
In this module, the region feature dividing unit 302 divides each connected region composed of pixel points whose difference values are smaller than the first preset value into a feature region: all such pixel points are counted, and regions are divided according to the connectivity between them, yielding several candidate feature regions. The dispersion of the pixel points in each candidate region is calculated, characterized by the standard deviation; a region whose standard deviation is lower than the second preset value can serve as a feature region, and a region whose number of pixel points is lower than a preset threshold is discarded.
and the feature generating unit 303, which is used for determining the relative positions among the feature regions and the outline of each feature region to obtain the region features.
In this module, the feature generating unit 303 determines the relative positions among the feature regions. Specifically, each feature region is labeled: if there are three groups of remote sensing images A, B, and C, they contain three groups of feature regions, namely A1, A2, and A3; B1, B2, and B3; and C1, C2, and C3. The center point of each feature region is calculated, and the center points of all feature regions in the same image are connected by line segments to obtain a feature-region connection graph. By rotating and/or scaling these graphs, the connection graphs corresponding to the different images are brought into coincidence; the feature regions that then overlap correspond to one another, and the outline of each feature region is determined to obtain the region features.
As shown in FIG. 8, as a preferred embodiment of the present invention, the image fusion module 400 includes:
the feature classification unit 401, which is used for classifying the region features according to their positions in the corresponding source images.
and the positioning fusion unit 402, which is used for adjusting the positions of the source images according to the correspondence among region features of the same class to complete image fusion.
In this module, the region features are classified: region features that coincide across different images are treated as the same class, and the same-class features on the different images are brought into coincidence by scaling, so that the several remote sensing images overlap completely. The remote sensing images from different sources are then cut to the same size, after which fusion processing can be carried out directly to obtain the fused image. At least two region features with coinciding positions exist between the remote sensing images from different sources.
In one embodiment, a computer device is provided, the computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
acquiring remote sensing images from all the different sources;
preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point;
dividing feature regions according to the position distribution of the pixel points and identifying the region features;
and matching the remote sensing images from different sources according to the region features to complete image fusion.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, causes the processor to perform the steps of:
acquiring remote sensing images from all the different sources;
preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point;
dividing feature regions according to the position distribution of the pixel points and identifying the region features;
and matching the remote sensing images from different sources according to the region features to complete image fusion.
It should be understood that, although the steps in the flowcharts of the embodiments are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be executed at different times, and their order of execution is not necessarily sequential: they may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments above may be implemented by a computer program, which may be stored in a non-volatile computer-readable storage medium; when executed, the program may include the processes of the method embodiments above. Any reference to memory, storage, database, or other media in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination that involves no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the invention, and such changes and modifications all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A multi-source remote sensing image fusion method based on deep learning, characterized by comprising the following steps:
acquiring remote sensing images from all the different sources;
preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point;
dividing feature regions according to the position distribution of the pixel points and identifying the region features;
and matching the remote sensing images from different sources according to the region features to complete image fusion.
2. The multi-source remote sensing image fusion method based on deep learning of claim 1, wherein the step of preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point specifically comprises:
carrying out grayscale processing on the remote sensing images from different sources to obtain grayscale images;
establishing a coordinate system and determining the coordinates of each pixel according to the pixel position;
and determining the gray value corresponding to each pixel.
3. The multi-source remote sensing image fusion method based on deep learning of claim 1, wherein the step of dividing feature regions according to the position distribution of the pixel points and identifying the region features specifically comprises:
selecting pixel points one by one as reference pixel points, and calculating the difference between the gray value of every other pixel point and the gray value of the reference pixel point;
dividing each connected region formed by pixel points whose difference values are smaller than a first preset value into a feature region, wherein the dispersion of the pixel points within a feature region is lower than a second preset value;
and determining the relative positions among the feature regions and the outline of each feature region to obtain the region features.
4. The multi-source remote sensing image fusion method based on deep learning of claim 1, wherein the step of matching the remote sensing images from different sources according to the region features to complete image fusion specifically comprises:
classifying the region features according to their positions in the corresponding source images;
and adjusting the positions of the source images according to the correspondence among region features of the same class to complete image fusion.
5. The multi-source remote sensing image fusion method based on deep learning of claim 4, wherein the remote sensing images from different sources are cut to the same size before image fusion.
6. The multi-source remote sensing image fusion method based on deep learning of claim 4, wherein at least two region features with coinciding positions exist between the remote sensing images from different sources.
7. A multi-source remote sensing image fusion system based on deep learning, characterized in that the system comprises:
the image acquisition module, which is used for acquiring remote sensing images from all the different sources;
the pixel statistics module, which is used for preprocessing all of the remote sensing images from different sources and counting the position distribution of each pixel point;
the feature identification module, which is used for dividing feature regions according to the position distribution of the pixel points and identifying the region features;
and the image fusion module, which is used for matching the remote sensing images from different sources according to the region features to complete image fusion.
8. The multi-source remote sensing image fusion system based on deep learning of claim 7, wherein the pixel statistics module comprises:
the image preprocessing unit, which is used for carrying out grayscale processing on the remote sensing images from different sources to obtain grayscale images;
the coordinate identification unit, which is used for establishing a coordinate system and determining the coordinates of each pixel according to the pixel position;
and the gray value determining unit, which is used for determining the gray value corresponding to each pixel.
9. The multi-source remote sensing image fusion system based on deep learning of claim 7, wherein the feature identification module comprises:
the gray value calculation unit, which is used for selecting pixel points one by one as reference pixel points and calculating the difference between the gray value of every other pixel point and the gray value of the reference pixel point;
the region feature dividing unit, which is used for dividing each connected region formed by pixel points whose difference values are smaller than a first preset value into a feature region, wherein the dispersion of the pixel points within a feature region is lower than a second preset value;
and the feature generating unit, which is used for determining the relative positions among the feature regions and the outline of each feature region to obtain the region features.
10. The multi-source remote sensing image fusion system based on deep learning of claim 7, wherein the image fusion module comprises:
the feature classification unit, which is used for classifying the region features according to their positions in the corresponding source images;
and the positioning fusion unit, which is used for adjusting the positions of the source images according to the correspondence among region features of the same class to complete image fusion.

Priority Applications (2)

    • CN202210880890.4A (priority date 2022-07-26, filed 2022-07-26): Multi-source remote sensing image fusion method and system based on deep learning
    • LU502959A (priority date 2022-07-26, filed 2022-10-26): A multi-source remote sensing image fusion method and system based on deep learning

Applications Claiming Priority (1)

    • CN202210880890.4A (priority date 2022-07-26, filed 2022-07-26): Multi-source remote sensing image fusion method and system based on deep learning

Publications (1)

    • CN115187867A, published 2022-10-14

Family

ID: 83521872

Family Applications (1)

    • CN202210880890.4A (pending): Multi-source remote sensing image fusion method and system based on deep learning

Country Status (2)

    • CN: CN115187867A
    • LU: LU502959B1

Cited By (2)

* Cited by examiner, † Cited by third party

    • CN116342449A * (priority date 2023-03-29, published 2023-06-27, 银河航天(北京)网络技术有限公司): Image enhancement method, device and storage medium
    • CN116342449B * (priority date 2023-03-29, granted 2024-01-16, 银河航天(北京)网络技术有限公司): Image enhancement method, device and storage medium

Also Published As

    • LU502959B1, published 2023-04-27

Legal Events

    • PB01: Publication
    • SE01: Entry into force of request for substantive examination