CN108230376B - Remote sensing image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN108230376B
CN108230376B (application CN201611264368.4A)
Authority
CN
China
Prior art keywords
remote sensing
sensing image
color
mapping
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611264368.4A
Other languages
Chinese (zh)
Other versions
CN108230376A (en)
Inventor
李聪 (Li Cong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201611264368.4A priority Critical patent/CN108230376B/en
Publication of CN108230376A publication Critical patent/CN108230376A/en
Application granted granted Critical
Publication of CN108230376B publication Critical patent/CN108230376B/en
Current legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10032 - Satellite or aerial image; Remote sensing

Abstract

The application discloses a remote sensing image processing method and device and electronic equipment. The method comprises the following steps: matching features of the remote sensing image to be color-homogenized with features of a reference remote sensing image to obtain a plurality of feature pairs; determining an inter-image feature mapping relationship based on the plurality of feature pairs; determining the overlapping area between the remote sensing image to be color-homogenized and the reference remote sensing image according to the inter-image feature mapping relationship; and performing color homogenization on the remote sensing image to be color-homogenized according to its overlapping area and the overlapping area of the reference remote sensing image, to obtain the color-homogenized remote sensing image. According to the embodiments, a large amount of unnecessary redundant data is removed during remote sensing image processing, improving both its efficiency and its precision.

Description

Remote sensing image processing method and device and electronic equipment
Technical Field
The present application relates to the field of computer technologies, in particular to computer image processing, and more particularly to a remote sensing image processing method and device, and an electronic device.
Background
In remote sensing applications, images that completely cover a study area are often required. Because most such images are acquired at different time phases, factors such as season, weather, illumination and acquisition region introduce obvious differences in color, brightness and the like between images. If such differing images are directly spliced into a complete image, the visual effect is poor; moreover, in many qualitative or quantitative remote sensing information extraction services, extraction precision and efficiency are seriously affected.
In order to improve the quality of remote sensing image processing, conventional remote sensing image processing generally proceeds as follows: first, a reference remote sensing image close or similar to the remote sensing image to be processed is selected; next, the remote sensing image to be color-homogenized is matched against the reference remote sensing image; finally, the two images are spliced according to the matching result. For image matching, two schemes are mainly adopted: geographic coordinate registration and feature registration.
Disclosure of Invention
The application provides a technical scheme for processing remote sensing images.
In a first aspect, the present application provides a method for processing a remote sensing image, including: matching the characteristics of the remote sensing image to be color-homogenized with the characteristics of the reference remote sensing image to obtain a plurality of characteristic pairs; determining a feature mapping relationship between the images based on the plurality of feature pairs; determining an overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the characteristic mapping relation between the images; and carrying out color homogenizing treatment on the remote sensing image to be color homogenized according to the overlapping area of the remote sensing image to be color homogenized and the overlapping area of the reference remote sensing image to obtain the remote sensing image after color homogenizing.
In some embodiments, determining the overlap region between the remote sensing image to be homogenized and the reference remote sensing image according to the inter-image feature mapping relationship comprises: determining an initial overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the plurality of feature pairs; carrying out cloud detection on an initial overlapping area between the remote sensing image to be color-homogenized and the reference remote sensing image to obtain a cloud mask area; and removing the cloud mask area in the initial overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image to obtain the overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image.
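The removal of a detected cloud mask from the initial overlapping area, as described above, amounts to a boolean-mask exclusion. The following is a minimal illustrative sketch (not the claimed implementation); all array and function names are hypothetical:

```python
import numpy as np

def remove_cloud_mask(overlap_mask, cloud_mask):
    """Exclude cloud-masked pixels from the initial overlapping area.

    overlap_mask: boolean array, True inside the initial overlapping area.
    cloud_mask:   boolean array, True where cloud (or snow) was detected.
    Returns a boolean array marking the cloud-free overlap region.
    """
    return overlap_mask & ~cloud_mask

# Toy 4x4 example: the right half overlaps, with one cloudy pixel inside it.
overlap = np.zeros((4, 4), dtype=bool)
overlap[:, 2:] = True
cloud = np.zeros((4, 4), dtype=bool)
cloud[1, 3] = True

clean = remove_cloud_mask(overlap, cloud)
print(clean.sum())  # 8 overlap pixels minus 1 cloudy pixel -> 7
```

The same element-wise exclusion applies per tile when the images are processed in blocks.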
In some embodiments, performing cloud detection on an initial overlapping region between the remote sensing image to be homogenized and the reference remote sensing image to obtain a cloud mask region includes: and carrying out cloud detection on an initial overlapping area between the to-be-homogenized remote sensing image and the reference remote sensing image based on the deep neural network to obtain a cloud mask area.
In some embodiments, the color homogenizing process is performed on the remote sensing image to be color homogenized according to the overlapping area of the remote sensing image to be color homogenized and the overlapping area of the reference remote sensing image, and obtaining the remote sensing image after color homogenizing comprises: determining a mapping relation between the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image according to the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image; according to the mapping relation between the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image, mapping the overlapping area of the reference remote sensing image to the overlapping area of the remote sensing image to be color-leveled to obtain the mapped overlapping area of the remote sensing image to be color-leveled; and carrying out color homogenizing treatment on the overlapped area after mapping the remote sensing image to be color homogenized to obtain the remote sensing image after color homogenizing.
In some embodiments, determining the mapping relationship between the overlapping region of the remote sensing image to be color evened and the overlapping region of the reference remote sensing image according to the overlapping region of the remote sensing image to be color evened and the overlapping region of the reference remote sensing image comprises: the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image respectively comprise at least one mapping area which corresponds to each other; and determining the mapping relation between the mapping areas corresponding to each other in the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image.
In some embodiments, mapping the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color homogenized according to the mapping relationship between the overlapping region of the remote sensing image to be color homogenized and the overlapping region of the reference remote sensing image, and obtaining the overlapping region after mapping the remote sensing image to be color homogenized includes: and mapping the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-leveled according to the mapping relation between the overlapping region of the remote sensing image to be color-leveled and the corresponding mapping regions in the overlapping region of the reference remote sensing image to obtain the mapped overlapping region of the remote sensing image to be color-leveled.
In some embodiments, determining the mapping relationship between the mutually corresponding respective mapping regions in the overlapping region of the remote sensing image to be homogenized and the overlapping region of the reference remote sensing image comprises any one of: determining a mapping relation between the mapping regions corresponding to each other based on the color difference values of the feature pairs in the mapping regions corresponding to each other; performing machine learning on the colors of the feature pairs in the corresponding mapping regions, and determining the mapping relation between the corresponding mapping regions; and determining the mapping relation between the mapping regions corresponding to each other based on the histogram matching between the mapping regions corresponding to each other.
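Of the options above, histogram matching between corresponding mapping regions is the easiest to sketch. The following is a minimal single-channel illustration based on cumulative histograms, not the patented implementation; the function name is hypothetical:

```python
import numpy as np

def match_histograms(src, ref):
    """Map pixel values of `src` so its cumulative histogram follows `ref`.

    Both inputs are uint8 single-channel arrays (one channel of the
    corresponding mapping regions). Returns the mapped copy of `src`.
    """
    src_hist = np.bincount(src.ravel(), minlength=256).astype(np.float64)
    ref_hist = np.bincount(ref.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src.size
    ref_cdf = np.cumsum(ref_hist) / ref.size
    # For each source level, find the reference level with the closest CDF.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]

rng = np.random.default_rng(0)
src = rng.integers(0, 128, (64, 64), dtype=np.uint8)   # dark region
ref = rng.integers(64, 256, (64, 64), dtype=np.uint8)  # brighter reference
out = match_histograms(src, ref)
print(out.mean() > src.mean())  # mapped region is pulled toward the reference
```

In practice the lookup table would be computed per channel, matching the per-channel treatment described later for the overlapping regions.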
In some embodiments, performing the color homogenizing process on the mapped overlapping area of the remote sensing image to be color-homogenized to obtain the color-homogenized remote sensing image comprises: selecting at least one expansion area of a preset size along the boundary of the mapped overlapping area; and fusing the expansion areas respectively to obtain the color-homogenized remote sensing image.
In some embodiments, the fusing the extended regions respectively to obtain the uniform color image of the extended region includes: for each extended area, obtaining a homogenized image of the extended area according to any one of the following items: fusing the extended area by adopting a Poisson fusion algorithm to obtain a uniform-color image of the extended area; performing linear weighted fusion on the expansion region according to the distance between the pixel in the expansion region and the boundary to obtain a uniform color image of the expansion region; and splicing the minimum splicing path determined in the expanded area based on a dynamic programming algorithm to obtain a uniform-color image of the expanded area.
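Of the fusion options above, the distance-based linear weighted fusion can be illustrated with a minimal sketch, assuming two equally sized strips on either side of the overlap boundary (all names are hypothetical, and real expansion areas would be taken from the images):

```python
import numpy as np

def blend_extension(corrected, original):
    """Linearly fuse a color-corrected strip into the original image.

    Both inputs are (H, W) strips spanning the expansion area; the weight
    of the corrected strip falls linearly from 1 at the overlap boundary
    (column 0) to 0 at the far edge (column W-1), so the transition is
    driven by each pixel's distance to the boundary.
    """
    w = corrected.shape[1]
    alpha = np.linspace(1.0, 0.0, w)          # per-column weight
    return alpha * corrected + (1.0 - alpha) * original

corrected = np.full((2, 5), 100.0)  # tone inside the homogenized overlap
original = np.full((2, 5), 200.0)   # untouched tone outside the overlap
fused = blend_extension(corrected, original)
print(fused[0])  # [100. 125. 150. 175. 200.]
```

The gradual ramp is what removes the visible seam at the overlap boundary; the Poisson and minimum-path options trade this simple ramp for gradient-domain or seam-finding machinery.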
In some embodiments, the fusing the extended region by using a poisson fusion algorithm, and obtaining the homogenized image of the extended region includes: calculating an energy function of the expansion area; establishing a constraint equation based on the boundary of the extension area; and based on an energy function and a constraint equation, fusing the extended area by adopting a Poisson fusion algorithm to obtain the image after color homogenization.
In some embodiments, the extended area comprises more than one local window conforming to a predetermined size, each local window comprising at least one pixel.
In some embodiments, fusing the extended region by using a poisson fusion algorithm based on the energy function and the constraint equation to obtain the homogenized image includes: sequentially fusing each window by adopting a Poisson fusion algorithm based on an energy function and a constraint equation to obtain a window image after color homogenization; and splicing all the window images after color homogenizing to obtain images after color homogenizing.
In some embodiments, fusing the extended region by a Poisson fusion algorithm based on the energy function and the constraint equation comprises any one of the following: fusion based on the gradient and the constraint equation; fusion based on the divergence and the constraint equation; fusion based on the corner measure and the constraint equation; and fusion based on the gradient histogram and the constraint equation.
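A gradient-based Poisson fusion of the kind listed above can be sketched, for a single channel, with a plain Jacobi solver. This is an illustrative toy solver under assumed boundary conditions, not the claimed algorithm:

```python
import numpy as np

def poisson_blend(source, target, mask, iters=2000):
    """Gradient-based Poisson fusion on one channel (illustrative sketch).

    Solves lap(f) = lap(source) inside `mask`, with f fixed to `target`
    on and outside the mask boundary, by plain Jacobi iteration.
    """
    f = target.astype(np.float64).copy()
    src = source.astype(np.float64)
    # Discrete Laplacian of the source (the guidance field's divergence).
    lap = (np.roll(src, 1, 0) + np.roll(src, -1, 0) +
           np.roll(src, 1, 1) + np.roll(src, -1, 1) - 4.0 * src)
    inner = mask.copy()
    inner[0, :] = inner[-1, :] = inner[:, 0] = inner[:, -1] = False
    for _ in range(iters):
        nb = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
              np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[inner] = (nb[inner] - lap[inner]) / 4.0
    return f

# A constant source has zero gradient, so the result must smoothly
# interpolate the target's boundary values across the masked area.
target = np.zeros((9, 9)); target[:, 5:] = 80.0
source = np.full((9, 9), 10.0)
mask = np.zeros((9, 9), dtype=bool); mask[3:6, 3:6] = True
out = poisson_blend(source, target, mask)
print(out[4, 3] < out[4, 5])  # values rise toward the brighter boundary side
```

The divergence, corner-measure and gradient-histogram variants listed above would change how `lap` (the guidance term) is constructed while keeping the same boundary constraint equation.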
In some embodiments, determining the mapping relationship between the overlapping region of the remote sensing image to be color evened and the overlapping region of the reference remote sensing image according to the overlapping region of the remote sensing image to be color evened and the overlapping region of the reference remote sensing image comprises: and determining a mapping relation between the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image according to at least one channel of the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image.
In a second aspect, the present application provides a remote sensing image processing apparatus, comprising: the characteristic pair matching unit is used for matching the characteristics of the remote sensing image to be color-homogenized with the characteristics of the reference remote sensing image to obtain a plurality of characteristic pairs; a mapping relation determining unit configured to determine a feature mapping relation between the images based on the plurality of feature pairs; the overlapping area determining unit is used for determining the overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the characteristic mapping relation between the images; and the color homogenizing processing unit is used for performing color homogenizing processing on the remote sensing image to be homogenized according to the overlapping area of the remote sensing image to be homogenized and the overlapping area of the reference remote sensing image to obtain the homogenized remote sensing image.
In some embodiments, the overlap region determining unit includes: the initial region determining subunit is used for determining an initial overlapping region between the remote sensing image to be color-leveled and the reference remote sensing image according to the plurality of feature pairs; the mask region detection subunit is used for carrying out cloud detection on an initial overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image to obtain a cloud mask region; and the mask region removing subunit is used for removing the cloud mask region in the initial overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image to obtain the overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image.
In some embodiments, the mask region detecting subunit is further configured to: and carrying out cloud detection on an initial overlapping area between the to-be-homogenized remote sensing image and the reference remote sensing image based on the deep neural network to obtain a cloud mask area.
In some embodiments, the color homogenizing processing unit comprises: the mapping relation determining subunit, used for determining the mapping relation between the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image according to those overlapping areas; the overlapping area mapping subunit, used for mapping the overlapping area of the reference remote sensing image to the overlapping area of the remote sensing image to be color-leveled according to the mapping relation between the two overlapping areas, to obtain the mapped overlapping area of the remote sensing image to be color-leveled; and the color homogenizing processing subunit, used for performing color homogenizing processing on the mapped overlapping area of the remote sensing image to be color-leveled to obtain the color-homogenized remote sensing image.
In some embodiments, the mapping relationship determining subunit is further configured to: the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image respectively comprise at least one mapping area which corresponds to each other; and determining the mapping relation between the mapping areas corresponding to each other in the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image.
In some embodiments, the overlap region mapping subunit is further to: and mapping the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-leveled according to the mapping relation between the overlapping region of the remote sensing image to be color-leveled and the corresponding mapping regions in the overlapping region of the reference remote sensing image to obtain the mapped overlapping region of the remote sensing image to be color-leveled.
In some embodiments, the mapping-determining subunit is further configured to any one of: determining a mapping relation between the mapping regions corresponding to each other based on the color difference values of the feature pairs in the mapping regions corresponding to each other; performing machine learning on the colors of the feature pairs in the corresponding mapping regions, and determining the mapping relation between the corresponding mapping regions; and determining the mapping relation between the mapping regions corresponding to each other based on the histogram matching between the mapping regions corresponding to each other.
In some embodiments, the color homogenizing processing subunit is further configured to: select at least one expansion area of a preset size along the boundary of the mapped overlapping area of the remote sensing image to be color-leveled; and fuse the expansion areas respectively to obtain the color-homogenized remote sensing image.
In some embodiments, the color homogenizing processing subunit is further configured to obtain, for each expansion area, a color-homogenized image of that area according to any one of the following: fusing the expansion area by a Poisson fusion algorithm; performing linear weighted fusion on the expansion area according to the distance between each pixel in the expansion area and the boundary; or splicing along the minimum splicing path determined in the expansion area based on a dynamic programming algorithm.
In some embodiments, the color homogenizing processing subunit is further configured to: calculate an energy function of the expansion area; establish a constraint equation based on the boundary of the expansion area; and, based on the energy function and the constraint equation, fuse the expansion area by a Poisson fusion algorithm to obtain the color-homogenized image.
In some embodiments, the expansion area in the color homogenizing processing subunit comprises more than one local window of a predetermined size, each local window comprising at least one pixel.
In some embodiments, the color homogenizing processing subunit is further configured to: sequentially fuse each window by a Poisson fusion algorithm based on the energy function and the constraint equation, obtaining color-homogenized window images; and splice all the color-homogenized window images to obtain the color-homogenized image.
In some embodiments, the color homogenizing processing subunit is further configured to any one of: fusing the extended region by adopting a Poisson fusion algorithm based on the gradient and a constraint equation; fusing the extended area by adopting a Poisson fusion algorithm based on the divergence and constraint equation; fusing the extended area by using a Poisson fusion algorithm based on the angular point measurement and the constraint equation; and fusing the extended region by adopting a Poisson fusion algorithm based on the gradient histogram and the constraint equation.
In some embodiments, the mapping relationship determining subunit is further configured to: and determining a mapping relation between the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image according to at least one channel of the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image.
In a third aspect, the present application provides an electronic device, comprising: a memory storing executable instructions; one or more processors in communication with the memory to execute the executable instructions to: matching the characteristics of the remote sensing image to be color-homogenized with the characteristics of the reference remote sensing image to obtain a plurality of characteristic pairs; determining a feature mapping relationship between the images based on the plurality of feature pairs; determining an overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the characteristic mapping relation between the images; and carrying out color homogenizing treatment on the remote sensing image to be color homogenized according to the overlapping area of the remote sensing image to be color homogenized and the overlapping area of the reference remote sensing image to obtain the remote sensing image after color homogenizing.
According to the remote sensing image processing method and device and the electronic equipment provided by the application, a plurality of feature pairs are first obtained by matching features of the remote sensing image to be color-homogenized with features of the reference remote sensing image. An inter-image feature mapping relationship is then determined based on the feature pairs, and the overlapping area between the remote sensing image to be color-homogenized and the reference remote sensing image is determined according to that relationship. Finally, color homogenization is performed on the remote sensing image to be color-homogenized according to the overlapping area, obtaining the color-homogenized remote sensing image. In this way, a large amount of unnecessary redundant data is removed during processing, and both the processing efficiency and the processing precision of the remote sensing image are improved.
Further, in some embodiments, the remote sensing image processing method and device and the electronic equipment provided by the application also specify how to determine the overlapping area between the remote sensing image to be color-homogenized and the reference remote sensing image when the image contains a cloud mask area. This prevents the color homogenization from being skewed by bright ground objects such as cloud or snow, avoids severe color distortion in cloud or snow areas, and improves the accuracy of subsequent image processing.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a schematic flow chart diagram of one embodiment of a method for remote sensing image processing according to the present application;
FIG. 2 is a schematic diagram of an exemplary application scenario in which an overlap region between the remote sensing image to be homogenized and the reference remote sensing image is obtained based on pairs of feature points according to the application;
FIG. 3 is a schematic flow chart diagram of one embodiment of a method for determining an overlap region between a remote sensing image to be homogenized and a reference remote sensing image from an inter-image feature mapping relationship;
FIG. 4 is a schematic diagram of an exemplary application scenario of a cloud mask image obtained by deep learning based cloud detection of a remote sensing image to be homogenized and a reference remote sensing image according to the present application;
FIG. 5 is a schematic flow chart diagram of one embodiment of a method of color-homogenizing a remote sensing image according to the overlap region of the remote sensing image to be color-homogenized and the overlap region of the reference remote sensing image, to obtain the color-homogenized remote sensing image;
FIG. 6a is a schematic diagram of an exemplary application scenario showing a cumulative histogram of the overlapping region of the reference remote sensing image and a cumulative histogram of the overlapping region of the remote sensing image to be color-homogenized according to the present application;
FIG. 6b is a schematic illustration of a cumulative histogram of the overlapping area of the mapped remote sensing image to be homogenized, approximating the overlapping area of the reference remote sensing image, according to the present application;
FIG. 7 is a schematic diagram of an exemplary application scenario for establishing an extended region for use in uniform color boundary processing according to the overlapping regions of the present application;
FIG. 8a is a schematic diagram of one embodiment of a remote sensing image to be color homogenized prior to application of the remote sensing image processing method of the present application;
FIG. 8b is an image after color homogenization by applying the remote sensing image processing method of the embodiment of the application to FIG. 8a;
FIG. 9a is a schematic diagram of yet another embodiment of a remote sensing image to be color homogenized prior to application of the remote sensing image processing method of the present application;
FIG. 9b is an image after color homogenization by applying the remote sensing image processing method of the embodiment of the application to FIG. 9a;
FIG. 10 is a schematic structural diagram of one embodiment of a remote sensing image processing apparatus according to the present application;
fig. 11 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
FIG. 1 shows a schematic flow chart diagram of one embodiment of a method of remote sensing image processing according to the present application.
As shown in fig. 1, a method 100 for processing a remote sensing image includes the following steps:
Step 101: matching the features of the remote sensing image to be color-homogenized with the features of the reference remote sensing image to obtain a plurality of feature pairs.
In this embodiment, the electronic device on which the remote sensing image processing method runs (for example, the server shown in FIG. 1) may extract features of the remote sensing image to be color-homogenized and of the reference remote sensing image respectively, and then match the extracted features of the two images to obtain feature pairs. The reference remote sensing image is a manually selected historical remote sensing image that is close or similar to the remote sensing image to be color-homogenized and has higher image quality (for example, high definition and no occlusion).
Matching the features of the remote sensing image to be color-homogenized with the features of the reference remote sensing image may be performed by matching the feature points, the feature lines, or other features of the two images.
By way of example, the following description will be given of the extraction and matching of the feature points of the remote sensing image to be color-leveled and the feature points of the reference remote sensing image, taking the feature points as an example.
Feature points, also called interest points or key points, are salient, representative points in an image. Feature point extraction finds the most easily recognized pixel points in the two images to be matched, such as corner points formed at the junctions of geometric structures, many of which are intersections between lines, for example edge points of richly textured objects. The algorithm used to extract feature points may be any existing or future technique that is robust, to a certain extent, against scale, color and affine changes; the present application is not limited in this respect. For example, feature points may be extracted by the Scale-Invariant Feature Transform (SIFT) algorithm.
Extracting feature points may generally include the following two steps: extracting a detector: searching pixel points (corner points) which are most easily identified in two images to be matched, such as edge points of objects with rich textures and the like; and extracting the descriptor, wherein the descriptor can be described by using some mathematical features, such as a gradient histogram, a local random binary feature and the like, to obtain the descriptor of the feature point.
After the feature points are extracted, the feature points of the remote sensing image to be color-homogenized and those of the reference remote sensing image can be matched. First, the correspondence between the feature points in the two images is determined; the algorithm used may be any existing or future-developed algorithm, which the present application does not limit. For example, the Fast Library for Approximate Nearest Neighbors (FLANN) may be used to determine the correspondence between the descriptors of the two images. To remove wrong matching points while keeping the correct ones, an existing or future-developed outlier-removal algorithm may then be applied to the obtained matches, which the present application does not limit either. For example, the matches may be filtered using the random sample consensus (RANSAC) algorithm.
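As an illustrative sketch only (not part of the claimed method), nearest-neighbour descriptor matching with Lowe's ratio test — the kind of correspondence search that FLANN accelerates — could look as follows; the function name and the NumPy representation of descriptors are assumptions:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.

    desc_a, desc_b: (N, D) and (M, D) float arrays of feature descriptors.
    Returns a list of (i, j) index pairs: descriptor i in A matches j in B.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in B.
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Keep the match only if the best neighbour is clearly better than
        # the second best; this rejects ambiguous correspondences.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

A RANSAC-style filter would normally be applied to the returned pairs afterwards to discard the remaining mismatches.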
Step 102, determining a characteristic mapping relation between the images based on a plurality of characteristic pairs.
In this embodiment, on the basis of the feature point pairs obtained in step 101, a feature mapping relationship between the images may be obtained. The feature mapping relationship here refers to the correspondence between the features of the remote sensing image to be color-homogenized and the features of the reference remote sensing image; it may take various expression forms, which the present application does not limit. For example, the feature mapping relationship may be expressed as a mapping matrix.
The following describes the process of determining the mapping matrix between images when the feature mapping relationship is the mapping matrix, taking the feature point pair as an example:
Suppose (x1, y1) and (x2, y2) are a pair of same-name image points between the two images extracted by the SIFT algorithm (a pair of pixels representing the same position on the actual ground object is called a pair of same-name image points); the following mapping relationship is satisfied between them:
    [x2]   [a11 a12 a13]   [x1]
    [y2] = [a21 a22 a23] * [y1]
    [1 ]   [a31 a32 a33]   [1 ]
wherein A is an affine transformation matrix whose parameters a11 ... a33 jointly express the translation, scaling, rotation, shearing and other specifics of the relative relationship between the images. The parameter matrix A in the formula can therefore be solved using the same-name image points, which yields the inter-image mapping matrix. The parameter matrix may be determined by any existing or future-developed algorithm, which the present application does not limit. For example, the RANSAC algorithm may be employed to determine the parameter matrix.
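As a hedged sketch of the solving step, the 2D affine parameters can be estimated from same-name image points by ordinary least squares (a RANSAC loop would wrap such a solver to reject outliers); the helper name `solve_affine` is an assumption:

```python
import numpy as np

def solve_affine(src, dst):
    """Least-squares estimate of the 2D affine matrix A mapping homogeneous
    source points [x, y, 1] onto destination points.

    src, dst: (N, 2) arrays of same-name image points (N >= 3).
    Returns a 3x3 matrix whose last row is [0, 0, 1].
    """
    n = src.shape[0]
    ones = np.ones((n, 1))
    X = np.hstack([src, ones])              # (N, 3) homogeneous source points
    # Solve X @ M = dst for the 3x2 parameter block M in the least-squares sense.
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A = np.eye(3)
    A[:2, :] = M.T                          # rows [a11 a12 a13], [a21 a22 a23]
    return A
```

With exact correspondences the solver recovers the transform; with noisy SIFT matches it returns the best fit in the least-squares sense.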
And 103, determining an overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the characteristic mapping relation between the images.
In this embodiment, based on the inter-image feature mapping relationship obtained in step 102, the set of points obtained by transforming the vertices of the reference remote sensing image is calculated, and the polygon intersection of the vertex set of the remote sensing image to be color-homogenized and the transformed point set of the reference remote sensing image is computed. The original point set of this polygon intersection in the reference remote sensing image is then calculated from the inverse of the inter-image feature mapping relationship, giving the overlapping area.
Illustratively, taking the inter-image mapping matrix determined based on the feature point pairs as an example, the inter-image overlapping region is calculated:
referring to fig. 2, in fig. 2, based on the inter-image mapping matrix, the overlapping area of the a image can be calculated: the overlapping region of the A image is a polygon, and the vertices of the polygon are set as points P1、P2、P3、P4Point P2As the coordinates of the lower right point of the A image, point P1For the coordinate of the point of the same name of the upper left point of the B image on the A image, P can be obtained from the mapping relation obtained in step 103 in FIG. 11At the actual coordinate point in image A, at the calculation point P3The coordinates of the image point P on the A image can be calculated from the coordinates of the lower left point of the B image3' coordinate, line P1P3′,P1P3' the intersection with the lower boundary of the A image is point P3Coordinates; in a similar manner, at the calculation point P4Can calculate the coordinate of the upper right point of the B image as the homonymous image point P on the A image4' coordinate, line P1P4′,P1P4' the intersection with the right boundary of the A image is point P4And (4) coordinates. Calculate to obtain a point P1、P2、P3、P4The coordinates in image a, i.e. the overlapping area of image a, can be obtained from the mapping relationship obtained in step 103 in fig. 1, so as to obtain the boundary of the overlapping area: p1P3、P3P2、P2P4、P4P1
Returning to fig. 1, step 104, according to the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image, performing color-leveling processing on the remote sensing image to be color-leveled to obtain a remote sensing image after color leveling.
In this embodiment, the overlapping region of the remote sensing images to be color homogenized may be processed according to the overlapping region of the reference remote sensing images to obtain the overlapping region of the processed remote sensing images to be color homogenized, and then the boundary of the overlapping region of the processed remote sensing images to be color homogenized may be color homogenized to obtain the remote sensing images after color homogenization.
As mentioned in the background, conventional remote sensing image processing typically includes the following steps: first, a reference remote sensing image similar in content to the remote sensing image to be color-homogenized is selected; the remote sensing image to be color-homogenized is then matched with the reference remote sensing image; and finally the image to be processed and the reference remote sensing image are spliced according to the matching result, with feathering applied in the splicing area. For image matching, two schemes are mainly used: geographic coordinate registration and feature registration.
However, when image matching is performed in traditional color-homogenizing processing of remote sensing data, geographic coordinate registration is limited in accuracy by geographic coordinate errors and image distortion, and declassified images often carry no geographic coordinates at all. For this reason, feature matching has become the more general scheme; yet although mainstream methods have automated feature extraction, a large number of mismatches still occur during matching. Moreover, when the remote sensing image to be processed and the reference remote sensing image are spliced according to the matching result, processing is usually performed over the full image range, so the processed data contains a large amount of redundant data, the processing efficiency of the remote sensing image is low, and the processing precision is poor; the feathering applied in the splicing area also blurs details.
Compared with the prior art, the remote sensing image processing method provided by the embodiment of the application obtains a plurality of feature pairs by matching the features of the remote sensing image to be color-homogenized with the features of the reference remote sensing image; determining a feature mapping relationship between the images based on the plurality of feature pairs; determining an overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the characteristic mapping relation between the images; and carrying out color homogenizing treatment on the remote sensing image to be color homogenized according to the overlapping area of the remote sensing image to be color homogenized and the overlapping area of the reference remote sensing image to obtain the remote sensing image after color homogenizing, thereby removing a large amount of unnecessary redundant data in the processing process of the remote sensing image, improving the processing efficiency of the remote sensing image and improving the processing precision of the remote sensing image.
With further reference to fig. 3, fig. 3 shows a schematic flow chart of an embodiment of a method of determining an overlap region between a remote sensing image to be homogenized and a reference remote sensing image from an inter-image feature mapping relationship.
The method for determining the overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the characteristic mapping relation between the images shows how to determine the overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image when the image to be color-leveled comprises a cloud mask area.
As shown in fig. 3, the method 300 for determining the overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the feature mapping relationship between the images includes:
in step 301, an initial overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image is determined according to the plurality of feature pairs.
In this embodiment, the inter-image feature mapping relationship between the remote sensing image to be color-homogenized and the reference remote sensing image may first be determined from the plurality of feature pairs. The set of points obtained by transforming the vertices of the reference remote sensing image is then calculated from this mapping relationship, and the polygon intersection of the vertex set of the remote sensing image to be color-homogenized and the transformed point set of the reference remote sensing image is computed. The original point set of this polygon intersection in the reference remote sensing image is calculated from the inverse of the inter-image feature mapping relationship, thereby determining the overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image; this region is taken as the initial overlapping region.
In step 302, cloud detection is performed on an initial overlapping area between the remote sensing image to be color homogenized and the reference remote sensing image to obtain a cloud mask area.
In this embodiment, color statistics are computed over the overlapping regions of the two images during color homogenization. Since a cloud usually appears as a highlighted white region on the image, it cannot truly reflect the color of the ground objects and introduces a large error; to obtain a better color homogenization effect, the extent of the cloud must therefore be known so that this region can be ignored. It should be understood that the cloud detection here may also detect snow, shadows cast by clouds, and other occlusions; for convenience of description, cloud detection is used as the example throughout. The cloud mask here refers to the mask marking the clouds that cover the image or ground objects.
The method for cloud detection of the overlapping area in the application can adopt a cloud detection method for remote sensing images in the prior art or the future development technology, and the application does not limit the method. For example, a physical method in which multispectral physical features are applied to individual pixels for detection, a detection method based on texture and spatial characteristics of a cloud, a pattern recognition detection method, an optimization detection method used by multiple algorithms in combination, and the like may be used to obtain the cloud mask region of the overlapping region.
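As a minimal sketch of the per-pixel physical detection methods mentioned above — far simpler than the deep-learning detector described next — a brightness threshold can flag candidate cloud pixels; the threshold value is an assumption:

```python
import numpy as np

def cloud_mask(image, brightness_thresh=0.85):
    """Crude cloud mask: flags pixels that are bright in every channel,
    since clouds tend to appear as uniformly bright white regions.

    image: (H, W, C) float array scaled to [0, 1].
    Returns a (H, W) boolean array, True where cloud is suspected.
    """
    return np.all(image >= brightness_thresh, axis=-1)
```

Real detectors combine such spectral cues with texture, spatial characteristics, or learned features, as the text notes.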
Fig. 4 shows the cloud detection effect based on deep learning. In fig. 4, cloud detection is performed by a deep learning algorithm on the overlapping region P1P2P3P4 of image A and image B, yielding cloud mask images of image A and image B, in which cloud mask regions 410, 420 and 430 are respectively shown. A cloud mask image here is an image, usually binary, that identifies the cloud locations; the cloud regions in the cloud mask image are ignored in subsequent histogram conversion.
Returning to fig. 3, in some optional implementation manners of this embodiment, performing cloud detection on an initial overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image, and obtaining a cloud mask region includes: and carrying out cloud detection on an initial overlapping area between the to-be-homogenized remote sensing image and the reference remote sensing image based on the deep neural network to obtain a cloud mask area.
In this implementation, when cloud detection is performed on the initial overlapping area between the remote sensing image to be color-homogenized and the reference remote sensing image based on a deep neural network to obtain the cloud mask area, the neural network offers strong nonlinear fitting capability, simple learning rules, convenient computer implementation, and strong robustness, memory capability, nonlinear mapping capability and self-learning capability. The recognition precision for shadows caused by clouds, snow and other occlusions during cloud detection, and hence the accuracy of the recognition result, can thereby be improved.
In step 303, the cloud mask region in the initial overlapping region between the remote sensing image to be color-leveled and the reference remote sensing image is removed, and the overlapping region between the remote sensing image to be color-leveled and the reference remote sensing image is obtained.
In this embodiment, after the cloud mask region of the overlap region is obtained in step 302, regions except the cloud mask region in the two overlap regions, that is, the overlap region of the remote sensing image to be color-homogenized and the overlap region of the reference remote sensing image, may be obtained respectively, and the obtained regions are used as the overlap regions. At the moment, the overlapping area eliminates the influence of the images of the bright ground objects such as the cloud, the snow and the like, so that the accuracy of subsequent processing can be improved, the data redundancy is reduced, and the efficiency of the subsequent processing is improved.
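The effect of excluding the cloud mask can be sketched as follows: color statistics over the overlap region are computed only from pixels outside the mask (the function name is an assumption):

```python
import numpy as np

def overlap_statistics(overlap, cloud):
    """Per-channel mean and standard deviation of the overlap region,
    ignoring pixels flagged by the cloud mask.

    overlap: (H, W, C) array; cloud: (H, W) boolean cloud mask.
    """
    valid = overlap[~cloud]            # (N, C) pixels outside the cloud mask
    return valid.mean(axis=0), valid.std(axis=0)
```

Dropping the bright cloud pixels keeps a few saturated regions from skewing the statistics that drive the subsequent color homogenization.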
According to the method for determining the overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the characteristic mapping relation between the images, provided by the embodiment of the application, the serious color distortion of the cloud or snow area caused by the influence of the images of the bright ground objects such as the cloud or snow on the remote sensing image can be avoided, and therefore the accuracy of subsequent image processing is improved.
With further reference to fig. 5, fig. 5 shows a schematic flow chart of an embodiment of a method of shading a remote sensing image to be shaded according to an overlapping area of the remote sensing image to be shaded and an overlapping area of a reference remote sensing image to obtain a shaded remote sensing image.
As shown in fig. 5, a schematic flowchart 500 of a method for performing color equalization processing on a remote sensing image to be color equalized according to an overlapping region of the remote sensing image to be color equalized and an overlapping region of a reference remote sensing image to obtain a color-equalized remote sensing image includes:
in step 501, according to the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image, the mapping relationship between the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image is determined.
In this embodiment, the mapping relationship between the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image may be determined according to at least partially mutually corresponding sub-regions or pixels in the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image.
In some optional implementations of this embodiment, determining, according to the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image, a mapping relationship between the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image includes: the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image respectively comprise at least one mapping area which corresponds to each other; and determining the mapping relation between the mapping areas corresponding to each other in the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image.
In this implementation, considering that remote sensing images are large, when determining the mapping relationship between the overlapping region of the remote sensing image to be color-homogenized and the overlapping region of the reference remote sensing image, the overlapping region may be divided into more than one mapping region, and the mapping relationship between the two images determined per mapping region. The mapping relationship can then be determined from the more detailed content of each corresponding mapping region, improving its accuracy; and since less data is processed at a time, the processing efficiency of the remote sensing image can also be improved. A mapping region here is a sub-region of the overlapping region used for mapping.
In some optional implementations of this embodiment, determining, according to the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image, a mapping relationship between the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image includes: and determining a mapping relation between the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image according to at least one channel of the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image.
In this implementation, because the remote sensing image to be color-homogenized and the reference remote sensing image both have a plurality of channels, matching can be carried out channel by channel to obtain the mapping relationship between the overlapping areas of the two images. For example, the mapping relationship between the overlapping region of the remote sensing image to be color-homogenized and the overlapping region of the reference remote sensing image may be determined based on the cumulative histograms of the pixel values in the respective channels. A channel of a remote sensing image refers to the ability of the sensor collecting the image to selectively receive electromagnetic waves, i.e. in practice to an operating band of the sensor. Each operating band of the sensor may be called a channel, and a sensor that can receive several electromagnetic bands is called a multi-channel sensor. For example, if the operating band width is 10 to 20 nm, then roughly 105 to 210 spectral channels are required to cover the spectral region of 0.4 to 2.5 microns.
In step 502, according to the mapping relation between the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image, the overlapping region of the reference remote sensing image is mapped to the overlapping region of the remote sensing image to be color-leveled, and the mapped overlapping region of the remote sensing image to be color-leveled is obtained.
In this embodiment, according to the mapping relationship between the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image, at least part of sub-regions or pixels in the overlapping region of the reference remote sensing image are mapped to at least part of corresponding sub-regions or pixels in the overlapping region of the remote sensing image to be color-leveled, so as to obtain the mapped overlapping region of the remote sensing image to be color-leveled.
In some optional implementation manners of this embodiment, mapping the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-leveled according to a mapping relationship between the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image, and obtaining the overlapping region after mapping the remote sensing image to be color-leveled includes: and mapping the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-leveled according to the mapping relation between the overlapping region of the remote sensing image to be color-leveled and the corresponding mapping regions in the overlapping region of the reference remote sensing image to obtain the mapped overlapping region of the remote sensing image to be color-leveled.
In this implementation, the mapping relationships between the mapping regions are used to map the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-homogenized, obtaining the mapped overlapping region. Because the amount of data to be processed at a time is small, data processing efficiency improves; and because the per-region mapping relationships capture more local detail within each mapping region, the image precision of the mapped overlapping region can also be improved.
In some optional implementation manners of this embodiment, determining a mapping relationship between mapping regions corresponding to each other in the overlapping region of the remote sensing image to be color-leveled and the overlapping region of the reference remote sensing image may include: determining a mapping relation between the mapping regions corresponding to each other based on the color difference values of the feature pairs in the mapping regions corresponding to each other; alternatively, the mapping relationship between the mapping regions corresponding to each other may be determined based on machine learning of colors of feature pairs in the mapping regions corresponding to each other; alternatively, the mapping relationship between the respective mapping regions corresponding to each other may be determined based on histogram matching between the respective mapping regions corresponding to each other.
In the implementation mode, the mapping relation between the mapping areas corresponding to each other is determined through the color difference value of the feature pairs in the mapping areas corresponding to each other, so that the mapping relation can be conveniently and quickly determined, and the determination speed of the mapping relation is improved; the mapping relation between the mapping regions corresponding to each other is determined by machine learning based on the colors of the feature pairs in the mapping regions corresponding to each other, so that the accuracy is high, the learning capability is strong, and the robustness and the fault tolerance to noise data are strong; by determining the mapping relation between the mapping regions corresponding to each other based on the histogram matching between the mapping regions corresponding to each other, the accuracy of the mapping result can be improved, and the details of the image can be retained.
With reference to fig. 6a and 6b, how to map the cumulative histogram between the overlapping area of the remote sensing images to be smoothed and the overlapping area of the reference remote sensing image is explained below.
When the histogram cumulative value of each pixel in the overlapping area of the reference remote sensing image is mapped to each pixel in the overlapping area of the remote sensing image to be color-leveled by adopting the histogram mapping relation, the remote sensing image has a plurality of channels, so that the channel-by-channel histogram matching can be carried out.
First, as shown in fig. 6a, the pixel values of each corresponding channel between the overlapping region of the reference remote sensing image and the overlapping region of the remote sensing image to be color-homogenized may be counted separately, giving the cumulative histogram 601 of the overlapping region of the reference remote sensing image and the cumulative histogram 602 of the overlapping region of the remote sensing image to be color-homogenized. The cumulative histograms 601 and 602 are then used to establish the mapping relationship of pixel values between the two overlapping regions, such that a pixel value in the overlapping region of the remote sensing image to be color-homogenized is mapped to the pixel value in the overlapping region of the reference remote sensing image having the same histogram cumulative value. The pixel values of the overlapping region of the remote sensing image to be color-homogenized are thus mapped and transformed, so that this region acquires a cumulative histogram 603 similar to that of the overlapping region of the reference remote sensing image, as shown in fig. 6b.
The histogram cumulative value here refers to the cumulative probability distribution of the image over gray levels: each value is the probability that a pixel is smaller than or equal to the corresponding gray level.
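The channel-wise cumulative-histogram mapping described above might be sketched as follows, assuming 8-bit channels; `match_channel` is an assumed name:

```python
import numpy as np

def match_channel(source, reference, bins=256):
    """Map source pixel values so their cumulative histogram matches the
    reference channel's cumulative histogram (per-channel histogram matching).

    source, reference: uint8 arrays of one channel.
    """
    s_hist = np.bincount(source.ravel(), minlength=bins)
    r_hist = np.bincount(reference.ravel(), minlength=bins)
    s_cdf = np.cumsum(s_hist) / s_hist.sum()   # cumulative histograms
    r_cdf = np.cumsum(r_hist) / r_hist.sum()
    # For each source level, find the reference level with the same cumulative
    # value: equal pixel values map to equal histogram-accumulation points.
    lut = np.clip(np.searchsorted(r_cdf, s_cdf), 0, bins - 1).astype(np.uint8)
    return lut[source]
```

In the embodiment this mapping would be applied per channel over the overlap region, skipping cloud-masked pixels when building the histograms.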
Returning to fig. 5, in step 503, the overlapping area mapped to the remote sensing image to be color-homogenized is color-homogenized to obtain a remote sensing image after color homogenization.
In this embodiment, since the overlapped area after mapping of the remote sensing image to be color-leveled is the result of mapping from the reference remote sensing image, and there are differences in terms of color, brightness, and the like compared with other areas of the remote sensing image to be color-leveled except the overlapped area after mapping, it is necessary to perform color-leveling processing on the overlapped area after mapping of the remote sensing image to be color-leveled to obtain the remote sensing image after color-leveling.
In some optional implementation manners of this embodiment, performing color equalization processing on an overlapping region to which a remote sensing image to be color equalized is mapped, and obtaining the remote sensing image after color equalization includes: selecting at least one expansion area with a preset size along the boundary of the overlapped area after mapping the remote sensing image to be color-leveled; and respectively fusing the extension areas to obtain the uniform-color remote sensing image.
In this implementation, the predetermined size of the extension area, and thus the desired range of fusion, is determined from the size of the remote sensing image. The expansion region may extend inward and/or outward along the boundary of the overlap region, which the present application does not limit.
In some optional implementation manners of this embodiment, the fusing the extension regions respectively to obtain the homogenized image of the extension region may include: for each extended area, obtaining a homogenized image of the extended area according to any one of the following items: fusing the expanded region by adopting a Poisson blending algorithm (Poisson blending) to obtain a uniform-color image of the expanded region; alternatively, linear weighted fusion can be performed on the expansion region according to the distance between the pixel in the expansion region and the boundary, so as to obtain a homogenized image of the expansion region; alternatively, the minimum splicing path determined in the extended region may be spliced based on a dynamic programming algorithm to obtain a homogenized image of the extended region.
In the implementation mode, the Poisson fusion algorithm is adopted to reconstruct the image pixels in the expansion region by using an interpolation method according to the energy function of the expansion region and the boundary information of the expansion region so as to obtain the image after color homogenization, the process of selecting the fusion region is simple and convenient, seamless color homogenization can be realized, and the color of the image splicing boundary region is in uniform transition.
Performing linear weighted fusion on the expansion region according to the distance between each pixel and the boundary yields the image pixels in the expansion region, and thus the color-homogenized image. This makes the transition of the image fusion more natural, improves fusion efficiency, satisfies the real-time requirement of the system during image processing, and achieves a better fusion effect than traditional algorithms.
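A minimal sketch of such distance-based linear weighting over an expansion band (names and the one-dimensional band layout are assumptions):

```python
import numpy as np

def linear_blend(mapped, original, band):
    """Blend a mapped overlap band back into the original image: the weight
    of the mapped value falls linearly from 1 at the overlap boundary to 0
    at the outer edge of the expansion band.

    mapped, original: arrays whose first axis spans `band` pixels.
    """
    w = np.linspace(1.0, 0.0, band).reshape(-1, *([1] * (mapped.ndim - 1)))
    return w * mapped + (1.0 - w) * original
```

The weight shape broadcasts over any trailing spatial or channel axes, so the same helper works row-wise on a full-width band.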
Obtaining the image pixels in the extended area by stitching along the minimum stitching path determined by a dynamic programming algorithm, and thereby obtaining the color-homogenized image, requires finding an optimal seam in the extended area: the seam along which the accumulated energy of the traversed pixels is minimal (the optimal seam can be determined from an energy function, which may be any one of the gradient, divergence, corner measure, or gradient histogram of the image). The seam may run vertically from top to bottom with a width of one pixel, or horizontally from left to right with a height of one pixel; this optimal seam is the minimum stitching path. Stitching along the minimum stitching path determined in the extended area by dynamic programming makes it possible to change the image resolution in the extended area (keeping the important content and removing the secondary, where importance depends on the energy function), to emphasize image content (content amplification), and to delete specific objects, so that seamless color homogenization can be realized simply and effectively, with uniform color transition in the stitching boundary area.
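The minimum stitching path can be sketched with the classic dynamic-programming seam search (as used in seam carving); here the energy array is assumed precomputed:

```python
import numpy as np

def minimum_seam(energy):
    """Dynamic-programming search for the vertical seam (one pixel wide,
    running top to bottom) whose accumulated energy is minimal.

    energy: (H, W) array. Returns the column index of the seam in each row.
    """
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            # Cheapest connected predecessor in the row above.
            cost[r, c] += cost[r - 1, lo:hi].min()
    # Backtrack from the cheapest bottom cell.
    seam = [int(np.argmin(cost[-1]))]
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo = max(c - 1, 0)
        seam.append(lo + int(np.argmin(cost[r, lo:min(c + 2, w)])))
    return seam[::-1]
```

A horizontal seam is found the same way on the transposed energy array.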
In some optional implementation manners of this embodiment, the fusing the extended region by using a poisson fusion algorithm, and obtaining the uniform-color image of the extended region includes: calculating an energy function of the expansion area; establishing a constraint equation based on the boundary of the extension area; and based on an energy function and a constraint equation, fusing the extended area by adopting a Poisson fusion algorithm to obtain the image after color homogenization.
In this implementation, the energy function of the extension region may be any one of the gradient, divergence, corner point measure, and gradient histogram. After the energy function is calculated, a constraint equation may be established based on the boundary of the extension region. Finally, based on the energy function and the constraint equation, the extension region is fused by a Poisson fusion algorithm to obtain the color-homogenized image. In this way, the image boundary constraint condition is added on top of the energy function, color homogenization is computed within the stitching boundary range, and the color transitions uniformly at the stitching boundary while preserving as much image detail as possible.
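By way of non-limiting illustration, the combination of an energy term and a boundary constraint equation can be sketched as a discrete Poisson solve: interior pixels reproduce a guidance Laplacian while border pixels stay fixed. The Jacobi iteration and the zero-Laplacian (membrane) guidance below are simplifying assumptions; practical implementations use sparse linear solvers:

```python
import numpy as np

def poisson_blend_region(boundary, guidance_lap, iters=2000):
    """Solve the discrete Poisson equation on the interior of a patch.

    Interior pixels satisfy f[up] + f[down] + f[left] + f[right]
    - 4 f = guidance_lap (the energy term), while border pixels are
    held fixed (the constraint equation). Jacobi iteration is used
    for clarity. Illustrative sketch only."""
    f = boundary.astype(float).copy()
    for _ in range(iters):
        interior = (f[:-2, 1:-1] + f[2:, 1:-1] +
                    f[1:-1, :-2] + f[1:-1, 2:] -
                    guidance_lap[1:-1, 1:-1]) / 4.0
        f[1:-1, 1:-1] = interior
    return f

# fixed border values (the constraint); zero Laplacian gives membrane
# interpolation between them
patch = np.zeros((6, 6))
patch[0, :], patch[-1, :] = 0.0, 100.0            # top dark, bottom bright
patch[:, 0] = patch[:, -1] = np.linspace(0.0, 100.0, 6)
lap = np.zeros_like(patch)
result = poisson_blend_region(patch, lap)
```

With these boundary values the harmonic solution is a linear ramp in the row direction, so the interior converges to a uniform color transition.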
In some optional implementations of this embodiment, the extension area includes one or more local windows of a predetermined size, and each local window includes at least one pixel.
In this implementation, because remote sensing images are large, the amount of data that must be solved during color homogenization is correspondingly large. To avoid insufficient memory, the extension area can be partitioned into windows of the predetermined size, which also improves data processing efficiency.
In some optional implementation manners of this embodiment, based on the energy function and the constraint equation, the fusion of the extended region by using the poisson fusion algorithm to obtain the homogenized image includes: sequentially fusing each window by adopting a Poisson fusion algorithm based on an energy function and a constraint equation to obtain a window image after color homogenization; and splicing all the window images after color homogenizing to obtain images after color homogenizing.
In the implementation mode, when the extended area is fused by adopting a Poisson fusion algorithm, the data volume needing to be processed at a single time can be reduced by sequentially processing each window in the extended area and finally traversing the whole buffer area, the problem of insufficient memory caused by large data volume related to the size of the processed image needing to be solved during color homogenizing is solved, and seamless color homogenizing processing is realized.
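By way of non-limiting illustration, traversing the buffer window by window might look like the following sketch; the generator name and the row-major tile ordering are assumptions:

```python
def iter_windows(height, width, win_h, win_w):
    """Yield (row, col, h, w) tiles covering a region, so a large
    extension area can be fused window by window instead of loading
    the whole buffer at once (memory-bounded processing).

    Sketch only; the patent specifies only a 'predetermined size'
    for the windows, not their traversal order."""
    for r in range(0, height, win_h):
        for c in range(0, width, win_w):
            # edge tiles are clipped so coverage is exact
            yield r, c, min(win_h, height - r), min(win_w, width - c)

tiles = list(iter_windows(5, 7, 3, 4))   # a 5x7 buffer in 3x4 windows
```

Each window would be fused with the Poisson solver, then the fused windows stitched back together, as the text describes.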
In some optional implementation manners of this embodiment, based on the energy function and the constraint equation, the extended region is fused by using a poisson fusion algorithm, and the obtained uniform-color image includes any one of the following items: fusing the extended region by adopting a Poisson fusion algorithm based on the gradient and a constraint equation; fusing the extended area by adopting a Poisson fusion algorithm based on the divergence and constraint equation; fusing the extended area by using a Poisson fusion algorithm based on the angular point measurement and the constraint equation; and fusing the extended region by adopting a Poisson fusion algorithm based on the gradient histogram and the constraint equation.
In this implementation, the gradient refers to the gradient of a scalar field, which in vector calculus is a vector field. The gradient at a point in the scalar field points in the direction in which the scalar field increases most rapidly, and the length of the gradient is the maximum rate of change corresponding to the direction in which the scalar field increases most rapidly. The extended region is fused by adopting a Poisson fusion algorithm based on the gradient and the constraint equation, the process of selecting the fused region is simple and convenient, and seamless color homogenizing processing can be realized, so that the color of the image splicing boundary region is uniformly transited.
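By way of non-limiting illustration, a gradient-magnitude energy map for a single-channel image might be computed as follows (a sketch using central differences):

```python
import numpy as np

def gradient_energy(img):
    """Gradient-magnitude energy of a single-channel image, one of the
    energy functions the text allows. np.gradient returns the row- and
    column-wise partial derivatives; the length of the gradient vector
    is the local rate of change. Sketch only."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

img = np.tile(np.arange(5.0), (4, 1))   # brightness ramps left to right
energy = gradient_energy(img)           # constant slope -> constant energy
```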
Divergence can be used to represent the degree to which the vector field diverges at each point in space; physically, divergence expresses the activity of the field. When div F > 0, there is a positive source (divergent source) of emitted flux at point F; when div F < 0, there is a negative source (hole or sink) of absorbed flux at point F; when div F = 0, the field at point F is source-free. Based on divergence and the constraint equation, the extension region is fused by a Poisson fusion algorithm; the process of selecting the fusion region is simple and convenient, and seamless color homogenization can be achieved, so that the color transitions uniformly across the image stitching boundary region.
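By way of non-limiting illustration, the divergence of a discrete 2-D vector field is the sum of the partial derivatives of its components; applied to an image gradient this yields the Laplacian that the Poisson step reproduces:

```python
import numpy as np

def divergence(field_y, field_x):
    """Divergence of a 2-D vector field: d(Fy)/dy + d(Fx)/dx,
    approximated with finite differences. Sketch only."""
    return np.gradient(field_y, axis=0) + np.gradient(field_x, axis=1)

# a radially expanding field has positive divergence everywhere
# (every point acts as a 'positive source' in the sense above)
y, x = np.mgrid[-2:3, -2:3].astype(float)
div = divergence(y, x)
```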
The corner point measure quantifies a corner point, i.e., a point where the brightness of the two-dimensional image changes drastically or where the curvature of an image edge curve is maximal. Fusing the extension region with a Poisson fusion algorithm based on the corner point measure and the constraint equation preserves the important features of the image while effectively reducing the data volume; the information content remains high, the computation speed is effectively improved, and reliable image matching is facilitated.
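By way of non-limiting illustration, a corner point measure in the spirit of the Harris detector can be sketched as below; the 3x3 box window (in place of the usual Gaussian weighting) and the constant k = 0.04 are simplifying assumptions:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris-style corner measure: positive at corners (brightness
    changes sharply in two directions), negative along edges, zero on
    flat areas. A 3x3 box sum stands in for the usual Gaussian
    window -- a simplification for illustration."""
    gy, gx = np.gradient(img.astype(float))

    def box3(a):
        # sum over each pixel's 3x3 neighborhood (zero-padded)
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2

img = np.zeros((12, 12))
img[3:9, 3:9] = 1.0            # bright square on a dark background
R = harris_response(img)       # peaks near the square's corners
```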
A gradient histogram represents the structural features of edges (gradients) and can form a rich feature set describing local shape information. Based on the gradient histogram and the constraint equation, the extension region is fused by a Poisson fusion algorithm; quantization over position space and orientation space suppresses the influence of translation and rotation, and normalizing the histogram over a local region partially cancels the influence of illumination changes.
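By way of non-limiting illustration, a single-cell gradient orientation histogram, magnitude-weighted and L2-normalized (the normalization is what partially cancels illumination changes, as noted above), might be sketched as:

```python
import numpy as np

def gradient_histogram(img, bins=8):
    """Histogram of gradient orientations, weighted by gradient
    magnitude and L2-normalized. A single-cell sketch of the idea
    behind HOG-style features; names and bin count are assumptions."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

img = np.tile(np.arange(8.0), (8, 1))   # horizontal ramp: all gradients along +x
h = gradient_histogram(img)             # all mass falls in the first bin
```

Because of the normalization, scaling the image brightness (e.g. `2 * img`) leaves the descriptor unchanged.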
Compared with feathering the stitching region in the prior art, performing color homogenization on the remote sensing image to be color-homogenized according to its overlap region and the overlap region of the reference remote sensing image achieves a smooth color transition while preserving the contrast and detail of the data.
For example, as shown in fig. 7, an extension region for color-homogenization boundary processing may be established based on the boundary of the overlap region, where the extension region is a buffer region 710 extended inward along the boundary of the overlap region. As indicated by arrow 711 in buffer 710, a local area of the buffer is processed at a time until the entire buffer area has been traversed, achieving seamless color homogenization.
For example, fig. 8a shows a remote sensing image to be color-homogenized before applying the embodiment of the present application; the image includes a highlighted cloud 810. Fig. 8b shows the image after applying the remote sensing image processing method of the embodiment of the present application to fig. 8a. It can be seen that the color of the homogenized image in fig. 8b is not distorted in the cloud-covered area, and the homogenized image retains detail information.
For example, fig. 9a shows a remote sensing image to be color-homogenized before applying the embodiment of the present application; the image includes a shadow 910 cast by an obstruction. Fig. 9b shows the image after applying the remote sensing image processing method of the embodiment of the present application to fig. 9a. It can be seen that the color of the homogenized image in fig. 9b is not distorted in the shadow area, and the homogenized image retains detail information.
With further reference to fig. 10, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of a remote sensing image processing apparatus, which corresponds to the embodiment of the method shown in fig. 1, and which is specifically applicable to various electronic devices.
As shown in fig. 10, the remote sensing image processing apparatus 1000 of the present embodiment may include:
and the feature pair matching unit 1010 is used for matching the features of the remote sensing image to be color-homogenized with the features of the reference remote sensing image to obtain a plurality of feature pairs.
A mapping relation determining unit 1020 for determining a feature mapping relation between the images based on the plurality of feature pairs.
And an overlapping area determining unit 1030, configured to determine an overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the inter-image feature mapping relationship.
And the color homogenizing processing unit 1040 is configured to perform color homogenizing processing on the remote sensing image to be color homogenized according to the overlapping area of the remote sensing image to be color homogenized and the overlapping area of the reference remote sensing image, so as to obtain a color homogenized remote sensing image.
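By way of non-limiting illustration, one concrete form the inter-image feature mapping relationship determined by unit 1020 could take is an affine transform fit to the feature pairs by least squares; the affine model (rather than, e.g., a homography) and the function name are assumptions for the example:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform from matched feature pairs.

    One possible realization of the 'inter-image feature mapping
    relationship'; the patent does not fix the model, and a
    homography is another common choice. Sketch only."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    # solve [x y 1] @ P = [x' y'] for the 3x2 parameter matrix P
    ones = np.ones((len(src), 1))
    P, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)
    return P.T   # 2x3 affine matrix

src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 3), (3, 3), (2, 4), (3, 4)]   # pure translation by (2, 3)
M = estimate_affine(src, dst)
```

The recovered matrix can then be used by unit 1030 to project one image's footprint into the other and determine the overlap region.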
In some optional implementations of this embodiment (not shown in the figures), the overlapping area determining unit includes: the initial region determining subunit is used for determining an initial overlapping region between the remote sensing image to be color-leveled and the reference remote sensing image according to the plurality of feature pairs; the mask region detection subunit is used for carrying out cloud detection on an initial overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image to obtain a cloud mask region; and the mask region removing subunit is used for removing the cloud mask region in the initial overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image to obtain the overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image.
In some optional implementations of this embodiment (not shown in the drawings), the mask region detecting subunit is further configured to: and carrying out cloud detection on an initial overlapping area between the to-be-homogenized remote sensing image and the reference remote sensing image based on the deep neural network to obtain a cloud mask area.
In some optional implementations of this embodiment (not shown in the figures), the color homogenizing processing unit comprises: the mapping relation determining subunit is used for determining the mapping relation between the overlapping area of the remote sensing images to be color-leveled and the overlapping area of the reference remote sensing image according to the overlapping area of the remote sensing images to be color-leveled and the overlapping area of the reference remote sensing image; the overlapping area mapping subunit is used for mapping the overlapping area of the reference remote sensing image to the overlapping area of the remote sensing image to be color-leveled according to the mapping relation between the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image to obtain the overlapping area after the remote sensing image to be color-leveled is mapped; and the color homogenizing processing subunit is used for performing color homogenizing processing on the overlapped area after mapping the remote sensing image to be homogenized to obtain the homogenized remote sensing image.
In some optional implementations of this embodiment (not shown in the figure), the mapping determining subunit is further configured to: the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image respectively comprise at least one mapping area which corresponds to each other; and determining the mapping relation between the mapping areas corresponding to each other in the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image.
In some optional implementations of this embodiment (not shown in the figure), the overlap area mapping subunit is further configured to: and mapping the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-leveled according to the mapping relation between the overlapping region of the remote sensing image to be color-leveled and the corresponding mapping regions in the overlapping region of the reference remote sensing image to obtain the mapped overlapping region of the remote sensing image to be color-leveled.
In some optional implementations of this embodiment (not shown in the figures), the mapping determining subunit is further configured to any one of: determining a mapping relation between the mapping regions corresponding to each other based on the color difference values of the feature pairs in the mapping regions corresponding to each other; performing machine learning on the colors of the feature pairs in the corresponding mapping regions, and determining the mapping relation between the corresponding mapping regions; and determining the mapping relation between the mapping regions corresponding to each other based on the histogram matching between the mapping regions corresponding to each other.
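By way of non-limiting illustration, the third option, histogram matching between corresponding mapping regions, can be sketched with classic quantile-based matching applied per channel; the function name and the tiny example arrays are assumptions:

```python
import numpy as np

def match_histogram(source, reference):
    """Map the source region's pixel values so their distribution
    matches the reference region's -- one of the three ways the text
    lists for deriving the mapping between corresponding regions.
    Classic quantile-based matching, single channel. Sketch only."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # for each source quantile, take the reference value at that quantile
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return np.interp(source.ravel(), s_vals, mapped).reshape(source.shape)

dark = np.array([[10.0, 20.0], [20.0, 30.0]])      # region to be color-homogenized
bright = np.array([[110.0, 120.0], [120.0, 130.0]])  # reference region
out = match_histogram(dark, bright)
```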
In some optional implementations of this embodiment (not shown in the figures), the color homogenizing processing subunit is further configured to: selecting at least one expansion area with a preset size along the boundary of the overlapped area after mapping the remote sensing image to be color-leveled; and respectively fusing the extension areas to obtain the uniform-color remote sensing image.
In some optional implementations of this embodiment (not shown in the figures), the color homogenizing processing subunit is further configured to: for each extended area, obtaining a homogenized image of the extended area according to any one of the following items: fusing the extended area by adopting a Poisson fusion algorithm to obtain a uniform-color image of the extended area; performing linear weighted fusion on the expansion region according to the distance between the pixel in the expansion region and the boundary to obtain a uniform color image of the expansion region; and splicing the minimum splicing path determined in the expanded area based on a dynamic programming algorithm to obtain a uniform-color image of the expanded area.
In some optional implementations of this embodiment (not shown in the figures), the color homogenizing processing subunit is further configured to: calculating an energy function of the expansion area; establishing a constraint equation based on the boundary of the extension area; and based on an energy function and a constraint equation, fusing the extended area by adopting a Poisson fusion algorithm to obtain the image after color homogenization.
In some optional implementations of this embodiment (not shown in the figure), the extended area in the color homogenizing processing subunit includes one or more local windows of a predetermined size, and each local window includes at least one pixel.
In some optional implementations of this embodiment (not shown in the figures), the color homogenizing processing subunit is further configured to: sequentially fusing each window by adopting a Poisson fusion algorithm based on an energy function and a constraint equation to obtain a window image after color homogenization; and splicing all the window images after color homogenizing to obtain images after color homogenizing.
In some optional implementations of this embodiment (not shown in the figures), the color homogenizing processing subunit is further configured to any one of: fusing the extended region by adopting a Poisson fusion algorithm based on the gradient and a constraint equation; fusing the extended area by adopting a Poisson fusion algorithm based on the divergence and constraint equation; fusing the extended area by using a Poisson fusion algorithm based on the angular point measurement and the constraint equation; and fusing the extended region by adopting a Poisson fusion algorithm based on the gradient histogram and the constraint equation.
In some optional implementations of this embodiment (not shown in the figure), the mapping determining subunit is further configured to: and determining a mapping relation between the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image according to at least one channel of the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image.
It should be understood that the units recited in the apparatus 1000 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations and features described above for the remote sensing image processing method are equally applicable to the apparatus 1000 and the units contained therein, and are not described in detail here. Corresponding elements in the apparatus 1000 may cooperate with elements in the terminal device and/or the server to implement aspects of embodiments of the present application.
Those skilled in the art will appreciate that the above-described remote sensing image processing apparatus 1000 may also include some other well-known structures, such as a processor and memory, which are not shown in fig. 10 in order not to unnecessarily obscure embodiments of the present disclosure.
The present application further provides an electronic device, including: a memory storing executable instructions; one or more processors in communication with the memory to execute the executable instructions to: matching the characteristics of the remote sensing image to be color-homogenized with the characteristics of the reference remote sensing image to obtain a plurality of characteristic pairs; determining a feature mapping relationship between the images based on the plurality of feature pairs; determining an overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the characteristic mapping relation between the images; and carrying out color homogenizing treatment on the remote sensing image to be color homogenized according to the overlapping area of the remote sensing image to be color homogenized and the overlapping area of the reference remote sensing image to obtain the remote sensing image after color homogenizing.
The embodiment of the invention also provides an electronic device, which may be a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Referring now to fig. 11, shown is a schematic diagram of an electronic device 1100 suitable for implementing a terminal device or server of an embodiment of the present application. As shown in fig. 11, the computer system 1100 includes one or more processors and a communication section, for example: one or more central processing units (CPU) 1101 and/or one or more graphics processing units (GPU) 1113, which may perform various suitable actions and processes according to executable instructions stored in a read-only memory (ROM) 1102 or loaded from a storage section 1108 into a random access memory (RAM) 1103. Communication section 1112 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card.
The processor may communicate with the read-only memory 1102 and/or the random access memory 1103 to execute the executable instructions, and is connected to the communication unit 1112 through the bus 1104 and communicates with other target devices through the communication unit 1112, so as to complete operations corresponding to any method provided by the embodiment of the present application, for example, obtain pre-labeled hyperspectral image data, where the pre-labeled hyperspectral image data includes labeling information of at least a part of image features in the hyperspectral image; randomly selecting a part of the pre-labeled hyperspectral image data as first image data; and training a preset hyperspectral image interpretation model by using the first image data as training data.
In addition, the RAM 1103 may also store various programs and data necessary for the operation of the apparatus. The CPU 1101, ROM 1102, and RAM 1103 are connected to one another by a bus 1104. Where RAM 1103 is present, ROM 1102 is an optional module. The RAM 1103 stores executable instructions, or writes executable instructions into the ROM 1102 at runtime, and the executable instructions cause the processor 1101 to perform the operations corresponding to the above-described methods. An input/output (I/O) interface 1105 is also connected to bus 1104. The communication section 1112 may be integrated, or may be provided with a plurality of sub-modules (e.g., a plurality of IB network cards) connected to the bus link.
The following components are connected to the I/O interface 1105: an input portion 1106 including a keyboard, mouse, and the like; an output portion 1107 including a signal output unit such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN card, a modem, or the like. The communication section 1109 performs communication processing via a network such as the internet. A driver 1110 is also connected to the I/O interface 1105 as necessary. A removable medium 1111 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1110 as necessary, so that a computer program read out therefrom is mounted into the storage section 1108 as necessary.
It should be noted that the architecture shown in fig. 11 is only an optional implementation manner, and in a specific practical process, the number and types of the components in fig. 11 may be selected, deleted, added, or replaced according to actual needs; in different functional component settings, separate settings or integrated settings may also be used, for example, the GPU and the CPU may be separately set or the GPU may be integrated on the CPU, the communication part may be separately set or integrated on the CPU or the GPU, and so on. These alternative embodiments are all within the scope of the present disclosure.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium; the computer program comprises program code for performing the method illustrated in the flowchart, and the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, e.g., matching features of a panchromatic image with features of a multispectral image to obtain a plurality of feature pairs; determining an inter-image mapping matrix based on the feature pairs; determining an overlapping area of the panchromatic image and the multispectral image according to the inter-image mapping matrix; and fusing the overlapping region of the panchromatic image and the overlapping region of the multispectral image to obtain a fused remote sensing image. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 1109 and/or installed from the removable medium 1111. The above-described functions defined in the method of the present application are performed when the computer program is executed by the central processing unit (CPU) 1101.
The method, apparatus and electronic device of the present invention may be implemented in a number of ways. For example, the method, apparatus and electronic device of the present invention may be implemented by software, hardware, firmware or any combination of software, hardware and firmware. The above-described order for the steps of the method is for illustrative purposes only, and the steps of the method of the present invention are not limited to the order specifically described above unless specifically indicated otherwise. Furthermore, in some embodiments, the present invention may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present invention. Thus, the present invention also covers a recording medium storing a program for executing the method according to the present invention.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the invention to the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, and to enable others of ordinary skill in the art to understand the invention and its various embodiments with the various modifications suited to the particular use contemplated.

Claims (29)

1. A method for processing remote sensing images, the method comprising:
matching the characteristics of the remote sensing image to be color-homogenized with the characteristics of the reference remote sensing image to obtain a plurality of characteristic pairs; the reference remote sensing image is an artificially selected historical remote sensing image which has similarity greater than or equal to a threshold value with the remote sensing image to be color-leveled and has image quality meeting quality screening conditions;
determining a feature mapping relationship between the images based on the plurality of feature pairs;
determining an overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the characteristic mapping relation between the images;
according to the mapping relation between the overlapping area of the remote sensing image to be color-leveled and the overlapping area of the reference remote sensing image, mapping the overlapping area of the reference remote sensing image to the overlapping area of the remote sensing image to be color-leveled to obtain the mapped overlapping area of the remote sensing image to be color-leveled;
and carrying out color homogenizing treatment on the overlapped area after the mapping of the remote sensing image to be subjected to color homogenizing treatment to obtain the remote sensing image subjected to color homogenizing.
2. The remote sensing image processing method according to claim 1, wherein said determining an overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the inter-image feature mapping relationship comprises:
determining an initial overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image according to the plurality of feature pairs;
carrying out cloud detection on an initial overlapping area between the remote sensing image to be color-homogenized and the reference remote sensing image to obtain a cloud mask area;
and removing the cloud mask area in the initial overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image to obtain the overlapping area between the remote sensing image to be color-leveled and the reference remote sensing image.
3. The remote sensing image processing method according to claim 2, wherein the performing cloud detection on the initial overlapping region between the remote sensing image to be color-leveled and the reference remote sensing image to obtain a cloud mask region comprises:
and carrying out cloud detection on the initial overlapping area between the remote sensing image to be homogenized and the reference remote sensing image based on a deep neural network to obtain a cloud mask area.
4. A method of processing remote sensing images according to any of claims 1-3, further comprising:
and determining a mapping relation between the overlapping area of the remote sensing images to be color-leveled and the overlapping area of the reference remote sensing image according to the overlapping area of the remote sensing images to be color-leveled and the overlapping area of the reference remote sensing image.
5. The remote sensing image processing method according to claim 4, wherein the determining of a mapping relationship between the overlapping region of the remote sensing image to be color-homogenized and the overlapping region of the reference remote sensing image according to the two overlapping regions comprises:
the overlapping region of the remote sensing image to be color-homogenized and the overlapping region of the reference remote sensing image respectively comprising at least one mutually corresponding mapping region; and
determining the mapping relationship between the corresponding mapping regions in the overlapping region of the remote sensing image to be color-homogenized and the overlapping region of the reference remote sensing image.
6. The remote sensing image processing method according to claim 5, wherein the mapping of the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-homogenized according to the mapping relationship between the two overlapping regions, to obtain the mapped overlapping region of the remote sensing image to be color-homogenized, comprises:
mapping the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-homogenized according to the mapping relationship between the corresponding mapping regions in the two overlapping regions, to obtain the mapped overlapping region of the remote sensing image to be color-homogenized.
7. The remote sensing image processing method according to claim 5, wherein the determining of the mapping relationship between the corresponding mapping regions comprises any one of:
determining the mapping relationship between the mutually corresponding mapping regions based on color difference values of the feature pairs in those mapping regions;
determining the mapping relationship between the mutually corresponding mapping regions by machine learning on the colors of the feature pairs in those mapping regions; and
determining the mapping relationship between the mutually corresponding mapping regions based on histogram matching between those mapping regions.
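The histogram-matching option above can be illustrated with a minimal single-channel sketch. This is an assumption-laden NumPy illustration, not the patented implementation; the function name and the CDF-interpolation approach are my own choices:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source pixel values so their histogram matches the reference's.

    source, reference: 2-D uint8 arrays (one channel of an overlap region).
    Returns the remapped source as uint8.
    """
    src_vals, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Normalized cumulative distributions of both regions.
    src_cdf = np.cumsum(src_counts).astype(np.float64) / source.size
    ref_cdf = np.cumsum(ref_counts).astype(np.float64) / reference.size

    # For each source gray level, find the reference level whose
    # cumulative probability is closest (linear interpolation on the CDF).
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    lut = np.zeros(256)
    lut[src_vals] = mapped
    return lut[source].astype(np.uint8)
```

Applied to each channel of the reference overlap (or of the image to be color-homogenized, depending on mapping direction), this realizes one of the three claimed alternatives.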
8. The remote sensing image processing method according to claim 4, wherein the performing of color homogenization on the mapped overlapping region of the remote sensing image to be color-homogenized, to obtain the color-homogenized remote sensing image, comprises:
selecting at least one extension region of a predetermined size along the boundary of the mapped overlapping region of the remote sensing image to be color-homogenized; and
fusing each extension region respectively to obtain the color-homogenized remote sensing image.
9. The remote sensing image processing method according to claim 8, wherein the fusing of each extension region comprises, for each extension region, obtaining a color-homogenized image of the extension region by any one of:
fusing the extension region with a Poisson fusion algorithm;
performing linear weighted fusion of the extension region according to the distance between each pixel in the extension region and the boundary; and
stitching along a minimum stitching path determined in the extension region by a dynamic programming algorithm.
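The distance-weighted linear fusion option in claim 9 amounts to simple feathering across the boundary. A minimal sketch, under the simplifying assumption that the two strips are aligned and the boundary runs along the first column (the function name is mine):

```python
import numpy as np

def feather_blend(strip_a, strip_b):
    """Blend two aligned image strips with weights that vary linearly
    with the distance from the strip boundary (simple feathering).

    strip_a, strip_b: float arrays of shape (H, W); strip_a is the image
    inside the mapped overlap, strip_b the image outside it, and the
    shared boundary is assumed to run along column 0.
    """
    h, w = strip_a.shape
    # Weight of strip_a falls from 1 at the boundary (col 0) to 0 at col w-1.
    alpha = np.linspace(1.0, 0.0, w)[None, :]
    return alpha * strip_a + (1.0 - alpha) * strip_b
```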
10. The remote sensing image processing method according to claim 9, wherein the fusing of the extension region with a Poisson fusion algorithm to obtain the color-homogenized image of the extension region comprises:
calculating an energy function of the extension region;
establishing a constraint equation based on the boundary of the extension region; and
fusing the extension region with a Poisson fusion algorithm based on the energy function and the constraint equation, to obtain the color-homogenized image.
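The energy function with a boundary constraint in claim 10 is the classic discrete Poisson problem: minimize the squared difference between the result's gradient and the source gradient, subject to matching the target on the boundary. A deliberately simple Jacobi-iteration sketch — illustrative only, since the patent does not prescribe this discretization, and practical implementations use sparse direct or multigrid solvers:

```python
import numpy as np

def poisson_blend(src, dst, iters=2000):
    """Minimize the energy integral of |grad(u) - grad(src)|^2 over the
    interior, subject to the Dirichlet constraint u = dst on the boundary,
    by Jacobi iteration on the discrete Poisson equation.

    src, dst: float arrays of the same (H, W) shape.
    """
    u = dst.astype(np.float64).copy()
    lap = np.zeros_like(u)
    # Divergence of the source gradient field = discrete Laplacian of src.
    lap[1:-1, 1:-1] = (src[2:, 1:-1] + src[:-2, 1:-1] +
                       src[1:-1, 2:] + src[1:-1, :-2] - 4.0 * src[1:-1, 1:-1])
    for _ in range(iters):
        # Pure Jacobi update: the boundary rows/columns are never touched,
        # which enforces the constraint equation u = dst on the boundary.
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] - lap[1:-1, 1:-1])
    return u
```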
11. The remote sensing image processing method according to claim 8, wherein the extension region comprises more than one local window corresponding to the predetermined size, each local window comprising at least one pixel.
12. The remote sensing image processing method according to claim 10, wherein the fusing of the extension region with a Poisson fusion algorithm based on the energy function and the constraint equation, to obtain the color-homogenized image, comprises:
fusing each window in turn with a Poisson fusion algorithm based on the energy function and the constraint equation, to obtain color-homogenized window images; and
stitching all the color-homogenized window images to obtain the color-homogenized image.
13. The remote sensing image processing method according to claim 10, wherein the fusing of the extension region with a Poisson fusion algorithm based on the energy function and the constraint equation comprises any one of:
fusing the extension region with a Poisson fusion algorithm based on gradients and the constraint equation;
fusing the extension region with a Poisson fusion algorithm based on divergence and the constraint equation;
fusing the extension region with a Poisson fusion algorithm based on a corner measure and the constraint equation; and
fusing the extension region with a Poisson fusion algorithm based on a gradient histogram and the constraint equation.
14. The remote sensing image processing method according to claim 4, wherein the determining of the mapping relationship between the overlapping region of the remote sensing image to be color-homogenized and the overlapping region of the reference remote sensing image comprises:
determining the mapping relationship according to at least one channel of the overlapping region of the remote sensing image to be color-homogenized and of the overlapping region of the reference remote sensing image.
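One simple way to realize the per-channel mapping of claim 14 is a least-squares gain/offset fit over the co-registered overlap pixels of a single channel. This linear form is an illustrative choice of mine — the claim does not restrict the mapping to be linear:

```python
import numpy as np

def fit_channel_mapping(overlap_src, overlap_ref):
    """Fit, for one channel, a linear color mapping ref ~ a*src + b
    from the co-registered overlap pixels, by least squares.

    overlap_src, overlap_ref: arrays of the same shape (one channel each).
    Returns (a, b); applying a*x + b to the full channel of the image to
    be color-homogenized transfers the reference's tone to that channel.
    """
    x = overlap_src.ravel().astype(np.float64)
    y = overlap_ref.ravel().astype(np.float64)
    # Design matrix [x, 1] for the affine fit y = a*x + b.
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b
```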
15. A remote sensing image processing apparatus, characterized in that the apparatus comprises:
a feature pair matching unit, configured to match features of a remote sensing image to be color-homogenized with features of a reference remote sensing image to obtain a plurality of feature pairs, the reference remote sensing image being a manually selected historical remote sensing image whose similarity to the remote sensing image to be color-homogenized is greater than or equal to a threshold and whose image quality meets a quality screening condition;
a mapping relationship determining unit, configured to determine a feature mapping relationship between the images based on the plurality of feature pairs;
an overlapping region determining unit, configured to determine the overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image according to the feature mapping relationship between the images;
an overlapping region mapping subunit, configured to map the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-homogenized according to the mapping relationship between the two overlapping regions, to obtain the mapped overlapping region of the remote sensing image to be color-homogenized; and
a color homogenizing processing subunit, configured to perform color homogenization on the mapped overlapping region of the remote sensing image to be color-homogenized, to obtain the color-homogenized remote sensing image.
16. The remote sensing image processing apparatus according to claim 15, wherein the overlapping region determining unit comprises:
an initial region determining subunit, configured to determine an initial overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image according to the plurality of feature pairs;
a mask region detection subunit, configured to perform cloud detection on the initial overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image, to obtain a cloud mask region; and
a mask region removing subunit, configured to remove the cloud mask region from the initial overlapping region, to obtain the overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image.
17. The remote sensing image processing apparatus according to claim 16, wherein the mask region detection subunit is further configured to:
perform cloud detection on the initial overlapping region between the remote sensing image to be color-homogenized and the reference remote sensing image based on a deep neural network, to obtain the cloud mask region.
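Claim 17 specifies a deep neural network for the cloud detection step. As a crude stand-in illustration only — emphatically not the claimed network — a brightness/grayness threshold captures the intuition that thick cloud is bright and nearly neutral in color (the function name and both thresholds are mine):

```python
import numpy as np

def naive_cloud_mask(region, brightness_thresh=200, spread_thresh=20):
    """Crude stand-in for a learned cloud detector: flag pixels that are
    both very bright and nearly gray (small spread across channels),
    which is typical of thick cloud in optical imagery.

    region: (H, W, 3) uint8 array. Returns a boolean (H, W) mask.
    """
    r = region.astype(np.float64)
    brightness = r.mean(axis=2)            # average over channels
    spread = r.max(axis=2) - r.min(axis=2)  # how far from neutral gray
    return (brightness > brightness_thresh) & (spread < spread_thresh)
```

Masked pixels would then be removed from the initial overlapping region before the color mapping is estimated, as claim 16 describes.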
18. The remote sensing image processing apparatus according to any one of claims 15-17, further comprising:
a mapping relationship determining subunit, configured to determine a mapping relationship between the overlapping region of the remote sensing image to be color-homogenized and the overlapping region of the reference remote sensing image according to the two overlapping regions.
19. The remote sensing image processing apparatus according to claim 18, wherein the mapping relationship determining subunit is further configured such that: the overlapping region of the remote sensing image to be color-homogenized and the overlapping region of the reference remote sensing image respectively comprise at least one mutually corresponding mapping region; and the mapping relationship between the corresponding mapping regions in the two overlapping regions is determined.
20. The remote sensing image processing apparatus according to claim 19, wherein the overlapping region mapping subunit is further configured to:
map the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-homogenized according to the mapping relationship between the corresponding mapping regions in the two overlapping regions, to obtain the mapped overlapping region of the remote sensing image to be color-homogenized.
21. The remote sensing image processing apparatus according to claim 19, wherein the mapping relationship determining subunit is further configured to perform any one of:
determining the mapping relationship between the mutually corresponding mapping regions based on color difference values of the feature pairs in those mapping regions;
determining the mapping relationship between the mutually corresponding mapping regions by machine learning on the colors of the feature pairs in those mapping regions; and
determining the mapping relationship between the mutually corresponding mapping regions based on histogram matching between those mapping regions.
22. The remote sensing image processing apparatus according to claim 18, wherein the color homogenizing processing subunit is further configured to:
select at least one extension region of a predetermined size along the boundary of the mapped overlapping region of the remote sensing image to be color-homogenized; and
fuse each extension region respectively to obtain the color-homogenized remote sensing image.
23. The remote sensing image processing apparatus according to claim 22, wherein the color homogenizing processing subunit is further configured to obtain, for each extension region, a color-homogenized image of the extension region by any one of:
fusing the extension region with a Poisson fusion algorithm;
performing linear weighted fusion of the extension region according to the distance between each pixel in the extension region and the boundary; and
stitching along a minimum stitching path determined in the extension region by a dynamic programming algorithm.
24. The remote sensing image processing apparatus according to claim 23, wherein the color homogenizing processing subunit is further configured to:
calculate an energy function of the extension region;
establish a constraint equation based on the boundary of the extension region; and
fuse the extension region with a Poisson fusion algorithm based on the energy function and the constraint equation, to obtain the color-homogenized image.
25. The remote sensing image processing apparatus according to claim 22, wherein the extension region in the color homogenizing processing subunit comprises more than one local window corresponding to the predetermined size, each local window comprising at least one pixel.
26. The remote sensing image processing apparatus according to claim 24, wherein the color homogenizing processing subunit is further configured to:
fuse each window in turn with a Poisson fusion algorithm based on the energy function and the constraint equation, to obtain color-homogenized window images; and
stitch all the color-homogenized window images to obtain the color-homogenized image.
27. The remote sensing image processing apparatus according to claim 24, wherein the color homogenizing processing subunit is further configured to perform any one of:
fusing the extension region with a Poisson fusion algorithm based on gradients and the constraint equation;
fusing the extension region with a Poisson fusion algorithm based on divergence and the constraint equation;
fusing the extension region with a Poisson fusion algorithm based on a corner measure and the constraint equation; and
fusing the extension region with a Poisson fusion algorithm based on a gradient histogram and the constraint equation.
28. The remote sensing image processing apparatus according to claim 18, wherein the mapping relationship determining subunit is further configured to:
determine the mapping relationship according to at least one channel of the overlapping region of the remote sensing image to be color-homogenized and of the overlapping region of the reference remote sensing image.
29. An electronic device, comprising:
a memory storing executable instructions; and
one or more processors in communication with the memory to execute the executable instructions so as to:
match features of a remote sensing image to be color-homogenized with features of a reference remote sensing image to obtain a plurality of feature pairs, the reference remote sensing image being a manually selected historical remote sensing image whose similarity to the remote sensing image to be color-homogenized is greater than or equal to a threshold and whose image quality meets a quality screening condition;
determine a feature mapping relationship between the images based on the plurality of feature pairs;
map the overlapping region of the reference remote sensing image to the overlapping region of the remote sensing image to be color-homogenized according to the mapping relationship between the two overlapping regions, to obtain the mapped overlapping region of the remote sensing image to be color-homogenized; and
perform color homogenization on the mapped overlapping region of the remote sensing image to be color-homogenized, to obtain the color-homogenized remote sensing image.
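The feature mapping relationship that the claims determine from matched feature pairs is typically a 2-D geometric transform between the two images. As one minimal sketch — illustrative only, and omitting the outlier rejection (e.g. RANSAC) a production system would add — an affine transform can be fitted to the feature pairs by least squares:

```python
import numpy as np

def estimate_affine(pts_src, pts_ref):
    """Estimate the 2x3 affine transform mapping feature points of the
    image to be color-homogenized onto their matches in the reference,
    by least squares over the feature pairs.

    pts_src, pts_ref: (N, 2) arrays of matched (x, y) coordinates, N >= 3
    and not all collinear.
    """
    n = pts_src.shape[0]
    # Each point pair contributes two linear equations in the six
    # affine parameters (a11, a12, tx, a21, a22, ty).
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = pts_src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = pts_src
    A[1::2, 5] = 1.0
    b = pts_ref.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)
```

The resulting transform locates the overlapping region between the two images, after which the color mapping and fusion steps of the claims apply.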
CN201611264368.4A 2016-12-30 2016-12-30 Remote sensing image processing method and device and electronic equipment Active CN108230376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611264368.4A CN108230376B (en) 2016-12-30 2016-12-30 Remote sensing image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN108230376A CN108230376A (en) 2018-06-29
CN108230376B true CN108230376B (en) 2021-03-26

Family

ID=62657356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611264368.4A Active CN108230376B (en) 2016-12-30 2016-12-30 Remote sensing image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN108230376B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555818B (en) * 2019-09-09 2022-02-18 中国科学院遥感与数字地球研究所 Method and device for repairing cloud region of satellite image sequence
CN111563867A (en) * 2020-07-14 2020-08-21 成都中轨轨道设备有限公司 Image fusion method for improving image definition
CN112634169B (en) * 2020-12-30 2024-02-27 成都星时代宇航科技有限公司 Remote sensing image color homogenizing method and device
CN112884675B (en) * 2021-03-18 2023-04-18 国家海洋信息中心 Batch remote sensing image color matching engineering realization method
CN113096043B (en) * 2021-04-09 2023-02-17 杭州睿胜软件有限公司 Image processing method and device, electronic device and storage medium
CN113281270B (en) * 2021-04-26 2023-06-23 中国自然资源航空物探遥感中心 Hyperspectral band selection method, hyperspectral band selection device, hyperspectral band selection equipment and storage medium
CN113610940B (en) * 2021-08-10 2022-06-07 江苏天汇空间信息研究院有限公司 Ocean vector file and image channel threshold based coastal area color homogenizing method
CN114113137A (en) * 2021-11-10 2022-03-01 佛山科学技术学院 Defect detection system and method for thin film material
CN117456192A (en) * 2023-12-21 2024-01-26 广东省海洋发展规划研究中心 Remote sensing image color correction method, device, equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102226907A (en) * 2011-05-24 2011-10-26 武汉嘉业恒科技有限公司 License plate positioning method and apparatus based on multiple characteristics
CN102436666A (en) * 2011-08-31 2012-05-02 上海大学 Object and scene fusion method based on IHS (Intensity, Hue, Saturation) transform
CN102800058A (en) * 2012-07-06 2012-11-28 哈尔滨工程大学 Remote sensing image cloud removing method based on sparse representation
CN102937454A (en) * 2012-11-13 2013-02-20 航天恒星科技有限公司 Energy compensation and chromatic aberration removal method for total-reflection optical splicing cameras
CN104182949A (en) * 2014-08-18 2014-12-03 武汉大学 Image inking and fusing method and system based on histogram feature point registration
CN104881841A (en) * 2015-05-20 2015-09-02 南方电网科学研究院有限责任公司 Aerial high voltage electric tower image splicing method based on edge characteristics and point characteristics
CN105427244A (en) * 2015-11-03 2016-03-23 中南大学 Remote sensing image splicing method and device
CN105844228A (en) * 2016-03-21 2016-08-10 北京航空航天大学 Remote sensing image cloud detection method based on convolution nerve network
CN106127690A (en) * 2016-07-06 2016-11-16 李长春 A kind of quick joining method of unmanned aerial vehicle remote sensing image
CN106127683A (en) * 2016-06-08 2016-11-16 中国电子科技集团公司第三十八研究所 A kind of real-time joining method of unmanned aerial vehicle SAR image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on Image Stitching Technology Based on Graph Cut; Fang Xianyong et al.; Journal of Image and Graphics; Dec. 15, 2007; Vol. 12, No. 12; Section 3 of the text *
Research on Multi-view Image Scene Synthesis Methods; Wang Wenhui; China Master's Theses Full-text Database, Information Science and Technology; Feb. 15, 2015; No. 2; abstract, pp. 7-8, 13-19, 42-43 *
Wang Wenhui. Research on Multi-view Image Scene Synthesis Methods. China Master's Theses Full-text Database, Information Science and Technology. 2015, No. 2: abstract, pp. 7-8, 13-19, 42-43. *

Also Published As

Publication number Publication date
CN108230376A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN108230376B (en) Remote sensing image processing method and device and electronic equipment
US10885399B2 (en) Deep image-to-image network learning for medical image analysis
CN109615611B (en) Inspection image-based insulator self-explosion defect detection method
RU2680765C1 (en) Automated determination and cutting of non-singular contour of a picture on an image
CN109255776B (en) Automatic identification method for cotter pin defect of power transmission line
JP6660313B2 (en) Detection of nuclear edges using image analysis
CN104200461B (en) The remote sensing image registration method of block and sift features is selected based on mutual information image
US20150003725A1 (en) Depth constrained superpixel-based depth map refinement
CN107545207A (en) DM two-dimensional code identification methods and device based on image procossing
CN108230281B (en) Remote sensing image processing method and device and electronic equipment
US9911210B1 (en) Raster log digitization system and method
CN110569878A (en) Photograph background similarity clustering method based on convolutional neural network and computer
Hormese et al. Automated road extraction from high resolution satellite images
JP2012515399A (en) Method and system for representing image patches
EP1789920A1 (en) Feature weighted medical object contouring using distance coordinates
KR102212075B1 (en) Method and program for modeling 3-dimensional drawing of walls in 2-dimensional architechural drawing
CN109241867B (en) Method and device for recognizing digital rock core image by adopting artificial intelligence algorithm
CA3136674C (en) Methods and systems for crack detection using a fully convolutional network
JP2006053929A (en) Approach based on perception for planar shape morphing
Lo et al. Depth map super-resolution via Markov random fields without texture-copying artifacts
US8340378B2 (en) Ribcage segmentation
CN113609984A (en) Pointer instrument reading identification method and device and electronic equipment
Lee et al. Neural geometric parser for single image camera calibration
Väänänen et al. Inpainting occlusion holes in 3D built environment point clouds
CN114114457B (en) Fracture characterization method, device and equipment based on multi-modal logging data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant