CN116342449B - Image enhancement method, device and storage medium


Info

Publication number
CN116342449B
CN116342449B
Authority
CN
China
Prior art keywords
image
remote sensing
area
quality
sensing image
Prior art date
Legal status
Active
Application number
CN202310321440.6A
Other languages
Chinese (zh)
Other versions
CN116342449A (en)
Inventor
张鹏
杨沐
高千峰
王伟
屈泉酉
Current Assignee
Galaxy Aerospace Beijing Network Technology Co ltd
Original Assignee
Galaxy Aerospace Beijing Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Galaxy Aerospace Beijing Network Technology Co ltd
Priority to CN202310321440.6A
Publication of CN116342449A
Application granted
Publication of CN116342449B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image enhancement method, an image enhancement device and a storage medium. The image enhancement method comprises the following steps: acquiring a plurality of remote sensing images, wherein the remote sensing images all correspond to the same ground area; dividing each remote sensing image into a plurality of remote sensing image areas, wherein remote sensing image areas located at the same area position in different remote sensing images correspond to each other; determining, for each area position, quality weights for the corresponding remote sensing image areas, wherein each quality weight corresponds to the image quality of the respective remote sensing image area; fusing the remote sensing image areas at the same area position according to the quality weights to generate a fused image area corresponding to that area position; and stitching the fused image areas corresponding to the respective area positions to generate an enhanced fused image. In this way, the information of the higher-quality remote sensing image areas in each remote sensing image can be used effectively for enhancement, improving the quality of image enhancement.

Description

Image enhancement method, device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image enhancement method, an image enhancement device, and a storage medium.
Background
Currently, image enhancement techniques are widely used to improve various aspects of image quality (e.g., visual effects such as image sharpness). A common image enhancement method comprises the following steps: acquiring a source image as the enhancement object; acquiring an enhanced image corresponding to the source image; and fusing the pixel values of corresponding pixels in the source image and the enhanced image by weighted fusion, thereby performing enhancement processing on the source image.
In addition, a satellite may capture different remote sensing images of the same ground area, for example through a plurality of image sensors, or through a single image sensor at different moments within a short period of time. Accordingly, schemes have been proposed that use different remote sensing images of the same ground area for image enhancement in order to generate a higher-quality fused image.
However, with remote sensing images of the same ground area, one image often has some image areas of higher quality than the corresponding areas of another image, and other image areas of lower quality. That is, because image quality is not uniformly distributed across the areas of a remote sensing image, it is difficult to fuse the remote sensing images directly by weighted fusion.
In this case, it is in practice difficult to determine which remote sensing image should serve as the source image and which as the enhanced image. Consequently, the non-uniform quality distribution of remote sensing images makes it difficult to perform image enhancement using a plurality of remote sensing images and to generate a high-quality fused image.
Publication CN115690597A, entitled "Remote sensing image urban ground object change detection method based on depth background difference", comprises the following steps: acquiring multi-temporal remote sensing images of an area to be detected; preprocessing the multi-temporal remote sensing images, wherein the preprocessing comprises image registration and image enhancement; acquiring two-phase images from the preprocessed multi-temporal remote sensing images; performing channel merging on the acquired two-phase images; and taking the merged result as the input of a detection model based on a deep neural network, which outputs the ground object change detection result.
Publication CN115022253A, entitled "Image transmission method based on Beidou third-generation satellite short messages and artificial intelligence", comprises the following steps: 1) the sending end selects an image to be sent and compresses it to form an original compressed image; 2) the sending end splits the original compressed image according to the Beidou third-generation satellite short-message length limit and short-message protocol, forming a plurality of split subsets of the original compressed image; 3) the sending end packs the split subsets into a plurality of image data packets and sends them in sequence; 4) the receiving end receives the image data packets one by one under a synchronous acknowledgement mechanism and restores them in sequence to form a restored compressed image; and 5) the receiving end performs image enhancement processing on the restored compressed image using an artificial-intelligence image enhancement technique.
For the technical problem in the prior art that the non-uniform quality distribution of remote sensing images makes it difficult to perform image enhancement using a plurality of remote sensing images and to generate a high-quality fused image, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the present disclosure provide an image enhancement method, an image enhancement device and a storage medium, so as to at least solve the technical problem in the prior art that the non-uniform quality distribution of remote sensing images makes it difficult to perform image enhancement using a plurality of remote sensing images and to generate a high-quality fused image.
According to one aspect of the embodiments of the present disclosure, there is provided an image enhancement method comprising: acquiring a plurality of remote sensing images, wherein the remote sensing images all correspond to the same ground area; dividing each remote sensing image into a plurality of remote sensing image areas, wherein remote sensing image areas located at the same area position in different remote sensing images correspond to each other; determining, for each area position, quality weights for the corresponding remote sensing image areas, wherein each quality weight corresponds to the image quality of the respective remote sensing image area; fusing the corresponding remote sensing image areas at the same area position according to the quality weights to generate a fused image area corresponding to that area position; and stitching the fused image areas corresponding to the respective area positions, thereby generating an enhanced fused image.
According to another aspect of the embodiments of the present disclosure, there is also provided a storage medium including a stored program, wherein the method described above is performed by a processor when the program is run.
According to another aspect of the embodiments of the present disclosure, there is also provided an image enhancement apparatus comprising: an image acquisition module for acquiring a plurality of remote sensing images, wherein the remote sensing images all correspond to the same ground area; an image dividing module for dividing each remote sensing image into a plurality of remote sensing image areas, wherein remote sensing image areas located at the same area position in different remote sensing images correspond to each other; a quality weight determining module for determining, for each area position, the quality weights of the corresponding remote sensing image areas, wherein each quality weight corresponds to the image quality of the respective remote sensing image area; an image fusion module for fusing the corresponding remote sensing image areas at the same area position according to the quality weights to generate a fused image area corresponding to that area position; and an image stitching module for stitching the fused image areas corresponding to the respective area positions, thereby generating an enhanced fused image.
According to another aspect of the embodiments of the present disclosure, there is also provided an image enhancement apparatus comprising: a processor; and a memory coupled to the processor and configured to provide the processor with instructions for processing the following steps: acquiring a plurality of remote sensing images, wherein the remote sensing images all correspond to the same ground area; dividing each remote sensing image into a plurality of remote sensing image areas, wherein remote sensing image areas located at the same area position in different remote sensing images correspond to each other; determining, for each area position, quality weights for the corresponding remote sensing image areas, wherein each quality weight corresponds to the image quality of the respective remote sensing image area; fusing the corresponding remote sensing image areas at the same area position according to the quality weights to generate a fused image area corresponding to that area position; and stitching the fused image areas corresponding to the respective area positions, thereby generating an enhanced fused image.
In the embodiments of the present disclosure, even when the quality distribution across a whole remote sensing image is not uniform, the weights for fusing the remote sensing image areas can be determined dynamically according to the actual image quality of the areas corresponding to each area position in the different remote sensing images. The information of the higher-quality remote sensing image areas in each remote sensing image can thus be used effectively for enhancement, improving the quality of image enhancement and solving the technical problem in the prior art that the non-uniform quality distribution of remote sensing images makes it difficult to perform image enhancement using a plurality of remote sensing images and to generate a high-quality fused image.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this application, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and do not constitute an undue limitation on the disclosure. In the drawings:
FIG. 1 is a block diagram of a hardware architecture of a computing device for implementing a method according to embodiment 1 of the present disclosure;
FIG. 2 is a flow chart of an image enhancement method according to a first aspect of embodiment 1 of the present disclosure;
FIG. 3 is a schematic diagram of acquiring a plurality of remote sensing images corresponding to the same ground area in a method according to a first aspect of embodiment 1 of the present disclosure;
FIG. 4 is a schematic diagram of quality contrast of different areas of two remote sensing images according to a first aspect of embodiment 1 of the present disclosure;
FIG. 5 is a schematic diagram of a method of dividing a remote sensing image into a plurality of remote sensing image areas according to a first aspect of embodiment 1 of the present disclosure;
FIG. 6 is a schematic diagram of a method according to a first aspect of embodiment 1 of the present disclosure, in which fused image regions corresponding to respective region positions are stitched together, thereby generating an enhanced fused image;
FIG. 7 is a schematic diagram of dividing a remote sensing image composed of a plurality of image components into a plurality of remote sensing image areas in the method according to the first aspect of embodiment 1 of the present disclosure;
FIG. 8 is a schematic diagram of generating, for a specified area position, fused image components from remote sensing images comprising a plurality of image components, in the method according to the first aspect of embodiment 1 of the present disclosure;
FIG. 9 is a schematic diagram of determining quality weights for respective image component regions using a convolutional neural network classifier-based weight determination model in a method according to a first aspect of embodiment 1 of the present disclosure;
FIG. 10 is a schematic diagram of the architecture of a weight determination model in a method according to the first aspect of embodiment 1 of the present disclosure;
FIG. 11 is a detailed flow chart of a method according to the first aspect of embodiment 1 of the present disclosure;
fig. 12 is a schematic view of an image enhancement device according to embodiment 2 of the present disclosure; and
fig. 13 is a schematic view of an image enhancement device according to embodiment 3 of the present disclosure.
Detailed Description
In order to provide a better understanding of the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the drawings of the embodiments. It is apparent that the described embodiments are merely some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without inventive effort shall fall within the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to the present embodiment, there is provided a method embodiment of an image enhancement method. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, the steps shown or described may in some cases be performed in an order different from that described herein.
The method embodiment provided by the present embodiment may be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. FIG. 1 shows a block diagram of a hardware architecture of a computing device for implementing the image enhancement method. As shown in FIG. 1, the computing device may include one or more processors (which may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA)), memory for storing data, a transmission device for communication functions, and input/output interfaces. The memory, the transmission device and the input/output interfaces are connected to the processor through a bus. The computing device may further include: a display connected to the input/output interface, a keyboard, and a cursor control device. Those of ordinary skill in the art will appreciate that the configuration shown in FIG. 1 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computing device may also include more or fewer components than shown in FIG. 1, or have a different configuration from that shown in FIG. 1.
It should be noted that the one or more processors and/or other data processing circuits described above may be referred to herein generally as "data processing circuits". A data processing circuit may be embodied in whole or in part as software, hardware, firmware, or any other combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or may be incorporated in whole or in part into any of the other elements in the computing device. As referred to in the embodiments of the present disclosure, the data processing circuit acts as a kind of processor control (e.g., selection of a variable resistance termination path to interface with).
The memory may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the image enhancement method in the embodiments of the present disclosure, and the processor executes the software programs and modules stored in the memory, thereby performing various functional applications and data processing, that is, implementing the image enhancement method of the application program. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory may further include memory remotely located with respect to the processor, which may be connected to the computing device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communications provider of the computing device. In one example, the transmission means comprises a network adapter (Network Interface Controller, NIC) connectable to other network devices via the base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computing device.
It should be noted herein that, in some alternative embodiments, the computing device shown in FIG. 1 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both. It should be noted that FIG. 1 is only one specific example and is intended to illustrate the types of components that may be present in the computing device described above.
In the above-described operating environment, according to a first aspect of the present embodiment, there is provided an image enhancement method implemented by the computing device shown in fig. 1. Fig. 2 shows a schematic flow chart of the method, and referring to fig. 2, the method includes:
S202: acquiring a plurality of remote sensing images, wherein the remote sensing images all correspond to the same ground area;
S204: dividing each remote sensing image into a plurality of remote sensing image areas, wherein remote sensing image areas located at the same area position in different remote sensing images correspond to each other;
S206: determining, for each area position, quality weights for the corresponding remote sensing image areas, wherein each quality weight corresponds to the image quality of the respective remote sensing image area;
S208: fusing the corresponding remote sensing image areas at the same area position according to the quality weights to generate a fused image area corresponding to that area position;
S210: stitching the fused image areas corresponding to the respective area positions, thereby generating an enhanced fused image.
In particular, referring to FIG. 3, a computing device may acquire a plurality of remote sensing images P_1~P_m, where m is an integer greater than 1. The remote sensing images P_1~P_m all correspond to the same ground area. For example, P_1~P_m may be different remote sensing images captured for the same ground area through a plurality of image sensors, or through a single image sensor at different moments within a short period of time. The computing device can thus use the different remote sensing images P_1~P_m of the same ground area to perform image enhancement and generate a higher-quality fused image corresponding to that ground area (S202).
In addition, since the image quality of different areas of the same remote sensing image is sometimes non-uniform, part of one remote sensing image may be of higher quality than the corresponding area of another remote sensing image while another part is of lower quality than the corresponding area. For example, referring to FIG. 4, the image quality of area Z_1 in remote sensing image P_1 is better than that of the corresponding area Z_1' in remote sensing image P_2, whereas the image quality of area Z_2 in P_1 is worse than that of the corresponding area Z_2' in P_2. That is, not every area of P_1 has higher image quality than the corresponding area of P_2: some areas are better, while others are worse. Although the example above uses P_1 and P_2, it should be understood that the same situation may arise between any of the images P_1~P_m, which is not repeated here.
The computing device then divides each of the remote sensing images P_1~P_m into a plurality of remote sensing image areas (S204). Referring to FIG. 5, the computing device divides each of P_1~P_m into n remote sensing image areas corresponding to n area positions, where n is an integer greater than 1 and may be set according to the size of the remote sensing images. In FIG. 5 each remote sensing image area is denoted Z, with Z_{i,j} denoting the j-th remote sensing image area of the i-th remote sensing image, where 1 ≤ i ≤ m and 1 ≤ j ≤ n.
Across the different remote sensing images P_1~P_m, the remote sensing image areas located at the same area position correspond to each other. For example, referring to FIG. 5, the areas Z_{1,1}~Z_{m,1} at area position 1 correspond to each other and show the same ground sub-area; the areas Z_{1,j}~Z_{m,j} at area position j correspond to each other and show the same ground sub-area; and so on, up to the areas Z_{1,n}~Z_{m,n} at area position n.
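As an illustration, the tiling step can be sketched as follows. This is a minimal sketch, assuming co-registered images of equal size and a simple regular grid; the function and variable names are hypothetical, not from the patent:

```python
# Divide m co-registered images of the same ground area into a rows x cols grid
# of tiles, so that tiles from different images at index j cover the same area
# position j. Remainder pixels are dropped for simplicity in this sketch.
import numpy as np

def divide_into_areas(image: np.ndarray, rows: int, cols: int) -> list[np.ndarray]:
    """Split an H x W (x C) image into rows*cols tiles, ordered row-major by area position."""
    h_step, w_step = image.shape[0] // rows, image.shape[1] // cols
    return [
        image[r * h_step:(r + 1) * h_step, c * w_step:(c + 1) * w_step]
        for r in range(rows)
        for c in range(cols)
    ]

# Z[i][j] plays the role of Z_{i+1,j+1}: the j-th area of the i-th remote sensing image.
images = [np.random.rand(512, 512, 3) for _ in range(4)]   # stand-in for P_1~P_m
Z = [divide_into_areas(p, rows=4, cols=4) for p in images]  # n = 16 area positions
```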
Then, for each area position, the computing device determines quality weights w_{i,j} for the corresponding remote sensing image areas, where each quality weight w_{i,j} corresponds to the image quality of the respective remote sensing image area (S206).
For example, referring to FIG. 5, for area position 1 the computing device determines the quality weights w_{1,1}~w_{m,1} of the areas Z_{1,1}~Z_{m,1}; for area position j it determines the quality weights w_{1,j}~w_{m,j} of the areas Z_{1,j}~Z_{m,j}; and so on, for area position n it determines the quality weights w_{1,n}~w_{m,n} of the areas Z_{1,n}~Z_{m,n}.
Each quality weight w_{i,j} corresponds to the image quality of the respective remote sensing image area Z_{i,j}: the higher the image quality of Z_{i,j}, the larger the corresponding quality weight w_{i,j}.
For example, for area position 1 the computing device determines the quality weight w_{i,1} of area Z_{i,1} of the i-th remote sensing image P_i relative to the image quality of the areas Z_{1,1}~Z_{m,1}; for area position j it determines the quality weight w_{i,j} of area Z_{i,j} relative to the image quality of the areas Z_{1,j}~Z_{m,j}; and for area position n it determines the quality weight w_{i,n} of area Z_{i,n} relative to the image quality of the areas Z_{1,n}~Z_{m,n}. The method for determining the quality weight w_{i,j} of an area Z_{i,j} is described in detail below and is not repeated here.
The computing device then fuses the remote sensing image areas corresponding to the same area position according to the quality weights w_{i,j}, generating a fused image area corresponding to that area position (S208).
For example, referring to FIG. 6, for area position 1 the computing device fuses the areas Z_{1,1}~Z_{m,1} according to their quality weights w_{1,1}~w_{m,1} to generate the fused image area Zf_1 corresponding to area position 1; for area position j it fuses the areas Z_{1,j}~Z_{m,j} according to their quality weights w_{1,j}~w_{m,j} to generate the fused image area Zf_j; and so on, for area position n it fuses the areas Z_{1,n}~Z_{m,n} according to their quality weights w_{1,n}~w_{m,n} to generate the fused image area Zf_n.
Then, referring to FIG. 6, the computing device stitches the fused image areas corresponding to the respective area positions, thereby generating the enhanced fused image Pf (S210).
As described in the background, with remote sensing images of the same ground area, one image often has some image areas of higher quality than another image and other image areas of lower quality. That is, because image quality is not uniformly distributed across the areas of a remote sensing image, it is difficult to fuse the remote sensing images directly by weighted fusion, and it is in practice difficult to determine which remote sensing image should serve as the source image and which as the enhanced image. Consequently, the non-uniform quality distribution of remote sensing images makes it difficult to perform image enhancement using a plurality of remote sensing images and to generate a high-quality fused image.
In view of this, in the image enhancement process the technical solution of the present disclosure does not fuse the remote sensing images directly as whole images. Instead, each remote sensing image is first divided into a plurality of remote sensing image areas according to the different area positions. Then, for the remote sensing image areas corresponding to each area position, a quality weight is determined for each area according to its image quality, and the areas are weighted-fused according to these quality weights to generate the fused image area corresponding to that area position. Finally, the generated fused image areas corresponding to the respective area positions are stitched together, thereby generating the enhanced fused image.
Therefore, according to the technical solution of the present disclosure, even when the quality distribution across a whole remote sensing image is not uniform, the weights for fusing the remote sensing image areas can be determined dynamically according to the actual image quality of the areas corresponding to each area position in the different remote sensing images. The information of the higher-quality remote sensing image areas in each remote sensing image can thus be used effectively for enhancement, improving the quality of image enhancement and solving the technical problem in the prior art that the non-uniform quality distribution of remote sensing images makes it difficult to perform image enhancement using a plurality of remote sensing images and to generate a high-quality fused image.
Further, the computing device may perform the weighted fusion of the remote sensing image areas corresponding to a specified area position by weighted summation.
Specifically, referring to FIG. 6, for area position 1 the computing device fuses the areas Z_{1,1}~Z_{m,1} by matrix operation according to their quality weights w_{1,1}~w_{m,1}, generating the fused image area Zf_1 corresponding to area position 1:
Zf_1 = w_{1,1}·Z_{1,1} + w_{2,1}·Z_{2,1} + … + w_{m,1}·Z_{m,1}
For area position j, the computing device fuses the areas Z_{1,j}~Z_{m,j} by matrix operation according to their quality weights w_{1,j}~w_{m,j}, generating the fused image area Zf_j corresponding to area position j:
Zf_j = w_{1,j}·Z_{1,j} + w_{2,j}·Z_{2,j} + … + w_{m,j}·Z_{m,j}
And so on, for area position n the computing device fuses the areas Z_{1,n}~Z_{m,n} by matrix operation according to their quality weights w_{1,n}~w_{m,n}, generating the fused image area Zf_n corresponding to the n-th area position:
Zf_n = w_{1,n}·Z_{1,n} + w_{2,n}·Z_{2,n} + … + w_{m,n}·Z_{m,n}
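A minimal sketch of this weighted summation, assuming equal-sized areas and quality weights that sum to 1 at each area position:

```python
# Weighted-summation fusion of one area position: Zf_j = sum_i w_{i,j} * Z_{i,j}.
import numpy as np

def fuse_area(areas: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Fuse m corresponding areas Z_{1,j}..Z_{m,j} into one fused area Zf_j."""
    assert abs(sum(weights) - 1.0) < 1e-6, "quality weights should sum to 1"
    return sum(w * z for w, z in zip(weights, areas))

# e.g. three images' tiles at the same area position j
areas_j = [np.random.rand(128, 128, 3) for _ in range(3)]
Zf_j = fuse_area(areas_j, weights=[0.5, 0.3, 0.2])
```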
Optionally, each remote sensing image includes a plurality of image components, and determining the quality weights of the corresponding remote sensing image areas for each area position comprises: determining a target image component from the plurality of image components; and determining, for each area position, the quality weight of the target image component of each corresponding remote sensing image area. Further, fusing the corresponding remote sensing image areas at the same area position according to the quality weights to generate a fused image area corresponding to that area position comprises: fusing the target image components of the corresponding remote sensing image areas at the same area position according to the quality weights, generating a fused image component corresponding to the target image component and the area position.
Specifically, referring to FIG. 7, in the technical solution of the present disclosure a remote sensing image is composed of, for example, r image components, where r is an integer greater than 1. For example, a remote sensing image may be composed of the three RGB color components, and may further include components of other wavelength bands such as infrared. For ease of description, the present disclosure takes r = 3, i.e., a remote sensing image P_i includes, for example, 3 different image components P_{i,1}~P_{i,3}. The image components P_{i,1}~P_{i,3} may, for example, correspond to the three RGB color components, or to color components of other wavelength bands; those skilled in the art may also use 4 or more image components according to the actual situation, which is not repeated here.
Thus, referring to FIG. 7, when the computing device divides the remote sensing image P_i into a plurality of image areas, the image components P_{i,1}~P_{i,r} (r = 3) are each divided into a plurality of image component areas, where Z_{i,j,k} denotes the j-th image component area of the k-th image component (in the disclosed embodiment 1 ≤ k ≤ 3) of the i-th remote sensing image, corresponding to area position j.
Therefore, in the present disclosure any image area Z_{i,j} of any remote sensing image may be further represented as being composed of a plurality of image component areas. For example, image component P_{i,1} of remote sensing image P_i may be divided into image component areas Z_{i,1,1}~Z_{i,n,1}, image component P_{i,2} into Z_{i,1,2}~Z_{i,n,2}, and image component P_{i,3} into Z_{i,1,3}~Z_{i,n,3}.
Thus, referring to FIG. 8, according to the technical solution of the present disclosure, for any area position j (1 ≤ j ≤ n) the computing device first determines, from the first image component P_{i,1} of each remote sensing image, the image component areas corresponding to that area position. For example, from the first image component P_{1,1} of remote sensing image P_1 it determines the image component area Z_{1,j,1} corresponding to area position j; from the first image component P_{i,1} of remote sensing image P_i it determines the image component area Z_{i,j,1}; and so on, from the first image component P_{m,1} of remote sensing image P_m it determines the image component area Z_{m,j,1}.
Then, referring to FIG. 8, for area position j the computing device determines the quality weights w_{1,j,1}~w_{m,j,1} corresponding to the image component areas Z_{1,j,1}~Z_{m,j,1}, where each quality weight corresponds to one image component area, and a higher quality weight indicates higher image quality of the corresponding image component area.
Then, referring to FIG. 8, the computing device performs weighted fusion of the image component areas Z_{1,j,1}~Z_{m,j,1} according to the quality weights w_{1,j,1}~w_{m,j,1}, obtaining the fused image component Zf_{j,1} of the first image component corresponding to area position j.
With further reference to FIG. 8, the computing device generates in the manner described above the fused image components Zf_{j,k} at area position j for the k-th image component (k = 1~3 in this solution). The computing device then generates the corresponding fused image area Zf_j (j = 1~n) from the generated fused image components Zf_{j,k} (k = 1~3) corresponding to area position j.
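Continuing the hypothetical helpers above, and assuming the image components are simply the channels of each tile, per-component fusion at one area position might be sketched as follows:

```python
# Fuse each component (channel) separately with its own weights, then stack the
# fused components Zf_{j,k} into the fused image area Zf_j.
import numpy as np

def fuse_area_by_components(
    component_areas: list[list[np.ndarray]],  # [k][i] -> Z_{i,j,k}, shape (H, W)
    component_weights: list[list[float]],     # [k][i] -> w_{i,j,k}
) -> np.ndarray:
    fused_components = [
        sum(w * z for w, z in zip(weights_k, areas_k))
        for areas_k, weights_k in zip(component_areas, component_weights)
    ]
    return np.stack(fused_components, axis=-1)  # (H, W, r) fused area Zf_j
```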
Then, after generating in the above manner the fused image areas Zf_j corresponding to area positions 1~n, the computing device stitches the fused image areas Zf_j to generate the fused image Pf.
Therefore, the technical solution of the present disclosure determines quality weights not only for different image areas but also, even within the same image area, for different image components, and fuses according to these quality weights to generate the corresponding fused image components. Image fusion can thus be performed at a finer granularity, further improving the image enhancement effect.
Optionally, determining, for each area position, the quality weight of the target image component of each corresponding remote sensing image area comprises: inputting the target image components of the corresponding remote sensing image areas into a preset weight determination model based on a convolutional neural network classifier; and determining the quality weights of the target image components of the corresponding remote sensing image areas according to the quality weight vector output by the weight determination model, wherein the elements of the quality weight vector respectively indicate the quality weights of the target image components of the corresponding remote sensing image areas.
Specifically, referring to FIG. 9, the computing device sets a separate weight determination model for each image component. For example, for the first image component P_{i,1} of each remote sensing image, the computing device uses weight determination model 1 to determine the quality weights w_{i,j,1} (1 ≤ i ≤ m) of the image component areas Z_{i,j,1} corresponding to area position j; for the second image component P_{i,2} it uses weight determination model 2 to determine the quality weights w_{i,j,2} of the areas Z_{i,j,2}; and for the third image component P_{i,3} it uses weight determination model 3 to determine the quality weights w_{i,j,3} of the areas Z_{i,j,3}. Weight determination models 1~3 are weight determination models based on convolutional neural network classifiers. As shown in FIG. 9, they output the quality weights in the form of quality weight vectors: the elements of the vector output by model 1 are w_{1,j,1}~w_{m,j,1}; the elements of the vector output by model 2 are w_{1,j,2}~w_{m,j,2}; and the elements of the vector output by model 3 are w_{1,j,3}~w_{m,j,3}. By using weight determination models based on deep learning, the quality weight vectors of the image component areas can be determined accurately.
For example, for area position 1 the computing device obtains the image component areas Z_{1,1,1}~Z_{m,1,1} of the first color components P_{1,1}~P_{m,1} of the remote sensing images. It then inputs Z_{1,1,1}~Z_{m,1,1} into weight determination model 1 to determine the corresponding quality weights w_{1,1,1}~w_{m,1,1}, where w_{1,1,1} corresponds to area Z_{1,1,1}, w_{i,1,1} to area Z_{i,1,1}, and w_{m,1,1} to area Z_{m,1,1}.
The computing device then obtains the image component areas Z_{1,1,2}~Z_{m,1,2} of the second color components P_{1,2}~P_{m,2} corresponding to area position 1, and inputs them into weight determination model 2 to determine the corresponding quality weights w_{1,1,2}~w_{m,1,2}, where w_{1,1,2} corresponds to area Z_{1,1,2}, w_{i,1,2} to area Z_{i,1,2}, and w_{m,1,2} to area Z_{m,1,2}.
The computing device then obtains the image component areas Z_{1,1,3}~Z_{m,1,3} of the third color components P_{1,3}~P_{m,3} corresponding to area position 1, and inputs them into weight determination model 3 to determine the corresponding quality weights w_{1,1,3}~w_{m,1,3}, where w_{1,1,3} corresponds to area Z_{1,1,3}, w_{i,1,3} to area Z_{i,1,3}, and w_{m,1,3} to area Z_{m,1,3}.
In this way, the computing device can determine, via weight determination models 1~3 based on convolutional neural network classifiers, the quality weights of the image component areas of the remote sensing images P_1~P_m corresponding to area position 1.
Similarly, for any area position j (1 ≤ j ≤ n), the computing device can determine, via weight determination models 1~3, the quality weights of the image component areas of the different image components of P_1~P_m corresponding to area position j.
Since the remote sensing images in the present solution include 3 image components (i.e., r = 3), the quality weights corresponding to the different image components are determined by 3 weight determination models. When a remote sensing image includes more image components, the number of weight determination models corresponds to the number of image components, which is not repeated here.
Further, the structure of weight determination models 1~3 is described below, taking weight determination model 1 as an example.
Specifically, FIG. 10 shows an architectural diagram of weight determination model 1. Weight determination models 1~3 may all adopt the architecture shown in FIG. 10; they are merely trained with different samples and therefore have different parameters.
Specifically, the weight determination model shown in FIG. 10 includes a backbone network composed of a plurality of convolution layers and pooling layers, a softmax classifier, and a fully connected layer between the backbone network and the softmax classifier.
After the image component areas Z_{1,j,1}~Z_{m,j,1} of the first image components P_{i,1} of the remote sensing images P_i at area position j are input into the weight determination model, multi-channel feature maps are generated through the convolution and pooling layers of the backbone network. The feature maps generated by the backbone network are input into the fully connected layer, which generates integrated values corresponding to Z_{1,j,1}~Z_{m,j,1} respectively. The softmax classifier then outputs a quality weight vector from the integrated values of the fully connected layer, where the elements of the quality weight vector are the quality weights w_{1,j,1}~w_{m,j,1} corresponding to the image component areas Z_{1,j,1}~Z_{m,j,1}, and the sum of w_{1,j,1}~w_{m,j,1} equals 1.
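The patent gives no reference code, but a minimal PyTorch sketch of such a model might look as follows: the m corresponding component areas stacked as input channels, a convolution/pooling backbone, a fully connected layer, and a softmax producing m weights that sum to 1 (layer sizes and tile size are assumptions):

```python
import torch
import torch.nn as nn

class WeightModel(nn.Module):
    def __init__(self, m: int, tile: int = 128):
        super().__init__()
        # The m corresponding component areas are stacked as m input channels.
        self.backbone = nn.Sequential(
            nn.Conv2d(m, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected layer: one integrated value per input area.
        self.fc = nn.Linear(32 * (tile // 4) ** 2, m)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, m, tile, tile) -> (batch, m) quality weight vector
        feats = self.backbone(x).flatten(1)
        return torch.softmax(self.fc(feats), dim=1)

model = WeightModel(m=4)
w = model(torch.rand(1, 4, 128, 128))  # each row of w sums to 1
```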
Further, the weight determination model 1 shown in fig. 10 can be trained by the following method:
Step 1: construct a training sample set {S1, S2, S3, ...} according to the following method.
Each sample in the training sample set includes m remote sensing images corresponding to the same ground area. Specifically, the composition of each training sample is shown in Table 1 below:
TABLE 1
Sample | Remote sensing images | Sample image component sets                                          | Corresponding quality weights
S1     | PS_{1,1}~PS_{1,m}     | PS_{1,1,1}~PS_{1,m,1}; PS_{1,1,2}~PS_{1,m,2}; PS_{1,1,3}~PS_{1,m,3}  | ws_{1,1,1}~ws_{1,m,1}; ws_{1,1,2}~ws_{1,m,2}; ws_{1,1,3}~ws_{1,m,3}
S2     | PS_{2,1}~PS_{2,m}     | PS_{2,1,1}~PS_{2,m,1}; PS_{2,1,2}~PS_{2,m,2}; PS_{2,1,3}~PS_{2,m,3}  | ws_{2,1,1}~ws_{2,m,1}; ws_{2,1,2}~ws_{2,m,2}; ws_{2,1,3}~ws_{2,m,3}
S3     | PS_{3,1}~PS_{3,m}     | PS_{3,1,1}~PS_{3,m,1}; PS_{3,1,2}~PS_{3,m,2}; PS_{3,1,3}~PS_{3,m,3}  | ws_{3,1,1}~ws_{3,m,1}; ws_{3,1,2}~ws_{3,m,2}; ws_{3,1,3}~ws_{3,m,3}
...    | ...                   | ...                                                                  | ...
Referring to Table 1, the sample set includes a plurality of samples {S1, S2, S3, ...}, each of which includes m remote sensing images corresponding to the same ground area. For example, sample S1 includes m remote sensing images PS_{1,1}~PS_{1,m} corresponding to the same ground area; sample S2 includes m remote sensing images PS_{2,1}~PS_{2,m}; sample S3 includes m remote sensing images PS_{3,1}~PS_{3,m}; and so on.
As described above, each remote sensing image includes 3 image components. The m remote sensing images of sample S1 can therefore be further divided, by image component, into 3 sample image component sets PS_{1,1,1}~PS_{1,m,1}, PS_{1,1,2}~PS_{1,m,2} and PS_{1,1,3}~PS_{1,m,3}. Sample S1 also includes the quality weights ws_{1,1,1}~ws_{1,m,1}, ws_{1,1,2}~ws_{1,m,2} and ws_{1,1,3}~ws_{1,m,3} corresponding to the respective sample component sets. These weights are such that weighted fusion of the sample image component set PS_{1,1,1}~PS_{1,m,1} according to ws_{1,1,1}~ws_{1,m,1} achieves the best enhancement effect; weighted fusion of PS_{1,1,2}~PS_{1,m,2} according to ws_{1,1,2}~ws_{1,m,2} achieves the best enhancement effect; and weighted fusion of PS_{1,1,3}~PS_{1,m,3} according to ws_{1,1,3}~ws_{1,m,3} achieves the best enhancement effect.
Similarly, the other samples of the sample set are constructed in the same manner as described above. Weight determination model 1 may thus be trained by extracting from Table 1 the first image components of the training samples and the corresponding quality weights, in the manner shown in Table 2 below:
TABLE 2
Sample | First image component set | Corresponding quality weights
S1     | PS_{1,1,1}~PS_{1,m,1}     | ws_{1,1,1}~ws_{1,m,1}
S2     | PS_{2,1,1}~PS_{2,m,1}     | ws_{2,1,1}~ws_{2,m,1}
S3     | PS_{3,1,1}~PS_{3,m,1}     | ws_{3,1,1}~ws_{3,m,1}
...    | ...                       | ...
Step 2: train weight determination model 1 of FIG. 10 with the samples listed in Table 2 using a back-propagation algorithm, which is not described in detail here.
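As an illustration of this step, a minimal training sketch for the hypothetical WeightModel above, assuming a KL-divergence loss between the predicted and target quality weight vectors and an Adam optimizer (the patent specifies only back-propagation, not a particular loss or optimizer):

```python
import torch

def train_step(model, optimizer, x, target_w):
    # x: (batch, m, tile, tile) component areas; target_w: (batch, m), rows sum to 1
    optimizer.zero_grad()
    pred_w = model(x)
    # KL divergence between target and predicted weight distributions
    loss = torch.nn.functional.kl_div(pred_w.log(), target_w, reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```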
Step 3: according to the architecture shown in fig. 10, weight determination model 2 was constructed, and then weight determination model 2 was trained by a back propagation algorithm using the samples listed in table 3 below.
TABLE 3 Table 3
Step 4: according to the architecture shown in fig. 10, the weight determination model 3 was constructed, and then the weight determination model 3 was trained by a back propagation algorithm using the samples listed in table 4 below.
TABLE 4 Table 4
According to the above technical solution, building the weight determination models with deep-learning-based convolutional neural network classifiers makes it possible to fully exploit a large number of training samples, determine the weight of each image component area more accurately, and thereby greatly improve the image enhancement effect.
Optionally, fusing the target image components of the corresponding remote sensing image areas at the same area position according to the quality weights to generate the fused image component corresponding to the target image component and the area position comprises: fusing the target image components of the corresponding remote sensing image areas at the same area position by weighted summation, generating the fused image component corresponding to the target image component and the area position.
Specifically, referring to FIG. 8, for area position j the computing device calculates the corresponding fused image components Zf_{j,k} (1 ≤ k ≤ 3) by weighted summation.
Specifically, for the first image components P_{i,1} of the remote sensing images, after determining the image component areas Z_{1,j,1}~Z_{m,j,1} corresponding to area position j and the corresponding quality weights w_{1,j,1}~w_{m,j,1}, the computing device may calculate the corresponding fused image component Zf_{j,1} by matrix operation according to the following formula:
Zf_{j,1} = w_{1,j,1}·Z_{1,j,1} + w_{2,j,1}·Z_{2,j,1} + … + w_{m,j,1}·Z_{m,j,1}
For the second image components P_{i,2}, after determining the image component areas Z_{1,j,2}~Z_{m,j,2} corresponding to area position j and the corresponding quality weights w_{1,j,2}~w_{m,j,2}, the computing device may calculate the corresponding fused image component Zf_{j,2}:
Zf_{j,2} = w_{1,j,2}·Z_{1,j,2} + w_{2,j,2}·Z_{2,j,2} + … + w_{m,j,2}·Z_{m,j,2}
For the third image components P_{i,3}, after determining the image component areas Z_{1,j,3}~Z_{m,j,3} corresponding to area position j and the corresponding quality weights w_{1,j,3}~w_{m,j,3}, the computing device may calculate the corresponding fused image component Zf_{j,3}:
Zf_{j,3} = w_{1,j,3}·Z_{1,j,3} + w_{2,j,3}·Z_{2,j,3} + … + w_{m,j,3}·Z_{m,j,3}
In summary, for the k-th image components P_{i,k} of the remote sensing images, after determining the image component areas Z_{1,j,k}~Z_{m,j,k} corresponding to area position j and the corresponding quality weights w_{1,j,k}~w_{m,j,k}, the computing device may calculate the corresponding fused image component Zf_{j,k} by matrix operation according to the following formula:
Zf_{j,k} = w_{1,j,k}·Z_{1,j,k} + w_{2,j,k}·Z_{2,j,k} + … + w_{m,j,k}·Z_{m,j,k}
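Chaining the hypothetical weight models and the weighted summation for one area position can be sketched as follows (continuing the helpers defined above):

```python
# For one area position j: run each component's weight model on the stacked
# component areas, then fuse with the predicted weights.
import numpy as np
import torch

def fuse_position(component_areas, models):
    # component_areas: [k][i] -> Z_{i,j,k} as (tile, tile) float arrays
    fused = []
    for areas_k, model_k in zip(component_areas, models):
        x = torch.from_numpy(np.stack(areas_k)).float().unsqueeze(0)  # (1, m, t, t)
        w_k = model_k(x).squeeze(0).tolist()                          # m weights
        fused.append(sum(w * z for w, z in zip(w_k, areas_k)))        # Zf_{j,k}
    return np.stack(fused, axis=-1)                                   # Zf_j
```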
In the above manner, the fused image components are generated by weighted summation according to the quality weights of the image component areas of the remote sensing images. Image component areas with better image quality therefore obtain higher weights, ensuring that every image component of the fused image area obtains a better enhancement effect and image quality, and thereby that the stitched fused image obtains a better image enhancement effect.
In addition, fig. 11 shows a detailed flowchart of the image enhancement method according to the technical solution of the present disclosure.
Referring to FIG. 3, the computing device first acquires m remote sensing images P_i corresponding to the same ground area, and, referring to FIG. 7, each remote sensing image P_i includes r image components, where 1 ≤ i ≤ m (S1102).
The computing device then divides each remote sensing image P_i into n remote sensing image areas corresponding to n area positions, where the j-th remote sensing image area of the i-th remote sensing image is denoted Z_{i,j}, and the k-th image component of area Z_{i,j} is denoted image component area Z_{i,j,k}, with 1 ≤ j ≤ n and 1 ≤ k ≤ r (S1104).
Then, starting from area position 1, i.e. j = 1 (S1106), the computing device determines quality weights for the different image component areas of each remote sensing image P_i for each image component (e.g., image components 1~r) (S1108).
For each image component, the computing device performs weighted fusion of the image component areas of the remote sensing images P_i according to the determined quality weights, obtaining the fused image components corresponding to the different image components (S1110).
From the fused image components corresponding to the different image components, the computing device generates the fused image area Zf_1 corresponding to area position 1 (S1112).
The operations of steps S1108~S1112 are repeated for each area position until the fused image area Zf_n corresponding to area position n has been generated (S1114~S1116).
The generated fused image areas Zf_1~Zf_n are stitched to generate the enhanced fused image (S1118).
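Putting the hypothetical helpers above together, the full flow of FIG. 11 (tiling, per-component weighting, fusion, and stitching) can be sketched end to end under the same assumptions:

```python
import numpy as np

def enhance(images, models, rows, cols):
    # images: m co-registered (H, W, r) arrays; models: r weight models
    tiles = [divide_into_areas(p, rows, cols) for p in images]  # [i][j] tiles
    n = rows * cols
    fused_tiles = []
    for j in range(n):
        # [k][i] -> component k of image i's tile at area position j
        comp_areas = [[tiles[i][j][..., k] for i in range(len(images))]
                      for k in range(images[0].shape[-1])]
        fused_tiles.append(fuse_position(comp_areas, models))
    # stitch the fused tiles back into the full enhanced image, row by row
    rows_of_tiles = [np.concatenate(fused_tiles[r * cols:(r + 1) * cols], axis=1)
                     for r in range(rows)]
    return np.concatenate(rows_of_tiles, axis=0)
```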
Further, referring to fig. 1, according to a second aspect of the present embodiment, there is provided a storage medium. The storage medium includes a stored program, wherein the method of any one of the above is performed by a processor when the program is run.
Therefore, according to the present embodiment, even when the quality distribution across a whole remote sensing image is not uniform, the fusion weights of the remote sensing image areas can be determined dynamically according to the actual image quality of the corresponding remote sensing image areas at each area position in the different remote sensing images. The information of the remote sensing image areas with higher image quality in each remote sensing image can thus be used effectively for enhancement, improving the image enhancement quality. This solves the technical problem in the prior art that the non-uniform quality distribution of remote sensing images makes it difficult to perform image enhancement with multiple remote sensing images and to generate a high-quality fused image.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
From the description of the above embodiments, it will be clear to those skilled in the art that the method according to the above embodiments may be implemented by software together with a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing over the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) and comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method according to the embodiments of the present invention.
Example 2
Fig. 12 shows an image enhancement apparatus 1200 according to the present embodiment, the apparatus 1200 corresponding to the method according to the first aspect of embodiment 1. Referring to fig. 12, the apparatus 1200 includes: an image acquisition module 1210 for acquiring a plurality of remote sensing images, wherein the remote sensing images correspond to the same ground area; the image dividing module 1220 is configured to divide the remote sensing image into a plurality of remote sensing image areas, where the remote sensing image areas located at the same area position in different remote sensing images correspond to each other; a quality weight determining module 1230, configured to determine, for each region position, a quality weight of a corresponding remote sensing image region, where the quality weight corresponds to an image quality of each corresponding remote sensing image region; an image fusion module 1240, configured to fuse, according to the quality weight, the corresponding remote sensing image area at the same area position, and generate a fused image area corresponding to the area position; and an image stitching module 1250 for stitching the fused image areas corresponding to the respective area positions, thereby generating an enhanced fused image.
Optionally, the remote sensing image includes a plurality of image components, and the quality weight determination module 1230 includes: an image component determining unit configured to determine a target image component from a plurality of image components; and a quality weight determining unit for determining the quality weight of the target image component of the corresponding remote sensing image area for each area position.
Optionally, the image fusion module 1240 includes: and the image component fusion unit is used for fusing the target image components of the corresponding remote sensing image areas at the same area position according to the quality weight to generate fusion image components corresponding to the target image components and the area position.
Optionally, the quality weight determining unit includes: an input subunit for inputting the target image components of the corresponding remote sensing image areas into a preset quality weight model based on a convolutional neural network classifier; and a quality weight determining subunit for determining the quality weights of the target image components of the corresponding remote sensing image areas according to the quality weight vector output by the weight model, where the elements of the quality weight vector respectively indicate the quality weights of the target image components of the corresponding remote sensing image areas.
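For such a convolutional-neural-network-based quality weight model, one possible shape is sketched below in Python/PyTorch. The patent only specifies a CNN classifier that outputs a quality weight vector, so the architecture, layer sizes, and names here are assumptions for illustration; the softmax merely guarantees that the m weights form a normalized vector.

```python
import torch
import torch.nn as nn

class QualityWeightModel(nn.Module):
    """Scores each of the m stacked target image component areas and
    normalizes the scores into a quality weight vector (one weight per
    remote sensing image area at the given area position)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # (m, 32, 1, 1), works for any area size
        )
        self.score = nn.Linear(32, 1)      # one scalar quality score per area

    def forward(self, areas: torch.Tensor) -> torch.Tensor:
        # areas: (m, 1, h, w) -- target image component of the m remote
        # sensing image areas at one area position.
        f = self.features(areas).flatten(1)    # (m, 32)
        scores = self.score(f).squeeze(1)      # (m,)
        return torch.softmax(scores, dim=0)    # quality weight vector
```

Claim 1 recites training the weight determination model with a back propagation algorithm from sample image components and their labeled quality weights; with an output vector like the one above, that could be done by minimizing, for example, a mean-squared or cross-entropy loss against the labeled weight vectors, though the patent does not fix a particular loss.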
Therefore, according to the present embodiment, even when the quality distribution across a whole remote sensing image is not uniform, the fusion weights of the remote sensing image areas can be determined dynamically according to the actual image quality of the corresponding remote sensing image areas at each area position in the different remote sensing images. The information of the remote sensing image areas with higher image quality in each remote sensing image can thus be used effectively for enhancement, improving the image enhancement quality. This solves the technical problem in the prior art that the non-uniform quality distribution of remote sensing images makes it difficult to perform image enhancement with multiple remote sensing images and to generate a high-quality fused image.
Example 3
Fig. 13 shows an image enhancement apparatus 1300 according to the present embodiment, the apparatus 1300 corresponding to the method according to the first aspect of embodiment 1. Referring to fig. 13, the apparatus 1300 includes: a processor 1310; and a memory 1320 coupled to the processor 1310 for providing instructions to the processor 1310 for processing the steps of: acquiring a plurality of remote sensing images, wherein the remote sensing images correspond to the same ground area; dividing the remote sensing image into a plurality of remote sensing image areas respectively, wherein the remote sensing image areas positioned at the same area position in different remote sensing images correspond to each other; determining a quality weight of the corresponding remote sensing image area according to each area position, wherein the quality weight corresponds to the image quality of each corresponding remote sensing image area; fusing the corresponding remote sensing image areas at the same area position according to the quality weight to generate a fused image area corresponding to the area position; and splicing the fusion image areas corresponding to the positions of the areas, so as to generate the enhanced fusion image.
Optionally, the remote sensing image includes a plurality of image components, and the operation of determining the quality weights of the respective remote sensing image areas for each area location includes: determining a target image component from the plurality of image components; and determining the quality weight of the target image component of the corresponding remote sensing image area aiming at each area position.
Optionally, the operation of fusing the corresponding remote sensing image area with the same area position according to the quality weight to generate a fused image area corresponding to the area position includes: and fusing the target image components of the corresponding remote sensing image area at the same area position according to the quality weight, and generating a fused image component corresponding to the target image component and the area position.
Optionally, for each region position, determining the quality weight of the target image component of the corresponding remote sensing image region comprises: inputting target image components of corresponding remote sensing image areas into a preset quality weight model based on a convolutional neural network classifier; and determining the quality weight of the target image component of the corresponding remote sensing image area according to the quality weight vector output by the weight model, wherein the elements of the quality weight vector are respectively used for indicating the quality weight of the target image component of the corresponding remote sensing image area.
Therefore, according to the present embodiment, even when the quality distribution across a whole remote sensing image is not uniform, the fusion weights of the remote sensing image areas can be determined dynamically according to the actual image quality of the corresponding remote sensing image areas at each area position in the different remote sensing images. The information of the remote sensing image areas with higher image quality in each remote sensing image can thus be used effectively for enhancement, improving the image enhancement quality. This solves the technical problem in the prior art that the non-uniform quality distribution of remote sensing images makes it difficult to perform image enhancement with multiple remote sensing images and to generate a high-quality fused image.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division into units is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing over the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present invention; it should be noted that those skilled in the art may make various modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are also intended to fall within the scope of the present invention.

Claims (6)

1. An image enhancement method, comprising:
acquiring a plurality of remote sensing images, wherein the remote sensing images correspond to the same ground area;
dividing the remote sensing image into a plurality of remote sensing image areas respectively, wherein the remote sensing image areas positioned at the same area position in different remote sensing images correspond to each other;
determining a quality weight of the corresponding remote sensing image area according to each area position, wherein the quality weight corresponds to the image quality of each corresponding remote sensing image area;
fusing the corresponding remote sensing image areas at the same area position according to the quality weight to generate a fused image area corresponding to the area position;
stitching the fused image areas corresponding to the respective area positions to generate an enhanced fused image, wherein
The remote sensing image comprises a plurality of image components, and for each region position, the operation of determining the quality weight of the corresponding remote sensing image region comprises:
Determining a target image component from the plurality of image components;
determining, for each region location, a quality weight of a target image component of the respective remote sensing image region, and wherein
further comprising training the weight determination model as follows:
constructing a training sample set, wherein samples in the training sample set comprise a plurality of remote sensing images corresponding to the same ground area, and each remote sensing image comprises a sample image component set corresponding to a plurality of image components and a quality weight corresponding to the plurality of sample image component sets;
extracting sample image components for training a weight determination model and quality weights corresponding to the sample image components from the training sample set; and
training the weight determination model based on the extracted sample image components and quality weights corresponding to the sample image components using a back propagation algorithm, wherein
the operation of fusing the corresponding remote sensing image areas at the same area position according to the quality weights to generate a fused image area corresponding to the area position comprises:
and fusing the target image components of the corresponding remote sensing image areas at the same area position according to the quality weight to generate fused image components corresponding to the target image components and the area position.
2. The method of claim 1, wherein determining the quality weights of the target image components of the respective remote sensing image areas for each area location comprises:
inputting target image components of the corresponding remote sensing image areas into a preset weight determination model based on a convolutional neural network classifier; and
and determining the quality weight of the target image component of the corresponding remote sensing image area according to the quality weight vector output by the weight determination model, wherein the elements of the quality weight vector are respectively used for indicating the quality weight of the target image component of the corresponding remote sensing image area.
3. A storage medium comprising a stored program, wherein the method of any one of claims 1 to 2 is performed by a processor when the program is run.
4. An image enhancement apparatus, comprising:
the image acquisition module is used for acquiring a plurality of remote sensing images, wherein the remote sensing images correspond to the same ground area;
the image dividing module is used for dividing the remote sensing image into a plurality of remote sensing image areas respectively, wherein the remote sensing image areas positioned at the same area position in different remote sensing images correspond to each other;
The quality weight determining module is used for determining the quality weight of the corresponding remote sensing image area according to the position of each area, wherein the quality weight corresponds to the image quality of each corresponding remote sensing image area;
the image fusion module is used for fusing the corresponding remote sensing image areas at the same area position according to the quality weight to generate a fused image area corresponding to the area position;
an image stitching module for stitching the fused image areas corresponding to the respective area positions to generate an enhanced fused image, wherein
The remote sensing image includes a plurality of image components, and a quality weight determination module includes:
an image component determining unit configured to determine a target image component from the plurality of image components; and
a quality weight determination unit for determining, for each region position, a quality weight of a target image component of the respective remote sensing image region, and wherein
the apparatus further includes the following modules for training the weight determination model:
a sample set construction module for constructing a training sample set, wherein samples in the training sample set comprise a plurality of remote sensing images corresponding to the same ground area, each remote sensing image comprising a sample image component set corresponding to a plurality of image components and a quality weight corresponding to the plurality of sample image component sets;
A sample extraction module for extracting a sample image component for training a weight determination model and a quality weight corresponding to the sample image component from the training sample set; and
a training module for training the weight determination model based on the extracted sample image components and quality weights corresponding to the sample image components by using a back propagation algorithm, wherein
The image fusion module comprises:
and the image component fusion unit is used for fusing the target image components of the corresponding remote sensing image areas at the same area position according to the quality weight to generate fusion image components of the target image components corresponding to the area position.
5. The apparatus according to claim 4, wherein the quality weight determining unit includes:
the input subunit is used for inputting the target image components of the corresponding remote sensing image areas into a preset quality weight model based on a convolutional neural network classifier; and
and the quality weight determining subunit is used for determining the quality weight of the target image component of the corresponding remote sensing image area according to the quality weight vector output by the weight model, wherein the elements of the quality weight vector are respectively used for indicating the quality weight of the target image component of the corresponding remote sensing image area.
6. An image enhancement apparatus, comprising:
a processor; and
a memory, coupled to the processor, for providing instructions to the processor to process the following processing steps:
acquiring a plurality of remote sensing images, wherein the remote sensing images correspond to the same ground area;
dividing the remote sensing image into a plurality of remote sensing image areas respectively, wherein the remote sensing image areas positioned at the same area position in different remote sensing images correspond to each other;
determining a quality weight of the corresponding remote sensing image area according to each area position, wherein the quality weight corresponds to the image quality of each corresponding remote sensing image area;
fusing the corresponding remote sensing image areas at the same area position according to the quality weight to generate a fused image area corresponding to the area position;
stitching the fused image areas corresponding to the respective area positions to generate an enhanced fused image, wherein
The remote sensing image comprises a plurality of image components, and for each region position, the operation of determining the quality weight of the corresponding remote sensing image region comprises:
determining a target image component from the plurality of image components;
Determining, for each region location, a quality weight of a target image component of the respective remote sensing image region, and wherein
further comprising training the weight determination model as follows:
constructing a training sample set, wherein samples in the training sample set comprise a plurality of remote sensing images corresponding to the same ground area, and each remote sensing image comprises a sample image component set corresponding to a plurality of image components and a quality weight corresponding to the plurality of sample image component sets;
extracting sample image components for training a weight determination model and quality weights corresponding to the sample image components from the training sample set; and
training the weight determination model based on the extracted sample image components and quality weights corresponding to the sample image components using a back propagation algorithm, wherein
the operation of fusing the corresponding remote sensing image areas at the same area position according to the quality weights to generate a fused image area corresponding to the area position comprises:
and fusing the target image components of the corresponding remote sensing image areas at the same area position according to the quality weight to generate fused image components corresponding to the target image components and the area position.
CN202310321440.6A 2023-03-29 2023-03-29 Image enhancement method, device and storage medium Active CN116342449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310321440.6A CN116342449B (en) 2023-03-29 2023-03-29 Image enhancement method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310321440.6A CN116342449B (en) 2023-03-29 2023-03-29 Image enhancement method, device and storage medium

Publications (2)

Publication Number Publication Date
CN116342449A 2023-06-27
CN116342449B 2024-01-16

Family

ID=86875901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310321440.6A Active CN116342449B (en) 2023-03-29 2023-03-29 Image enhancement method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116342449B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113902654A (en) * 2020-07-06 2022-01-07 阿里巴巴集团控股有限公司 Image processing method and device, electronic equipment and storage medium

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108122218A (en) * 2016-11-29 2018-06-05 联芯科技有限公司 Image interfusion method and device based on color space
CN107833184A (en) * 2017-10-12 2018-03-23 北京大学深圳研究生院 A kind of image enchancing method for merging framework again based on more exposure generations
CN112641462A (en) * 2019-10-10 2021-04-13 通用电气精准医疗有限责任公司 System and method for reducing anomalies in ultrasound images
CN111242034A (en) * 2020-01-14 2020-06-05 支付宝(杭州)信息技术有限公司 Document image processing method and device, processing equipment and client
CN113962844A (en) * 2020-07-20 2022-01-21 武汉Tcl集团工业研究院有限公司 Image fusion method, storage medium and terminal device
WO2022021999A1 (en) * 2020-07-27 2022-02-03 虹软科技股份有限公司 Image processing method and image processing apparatus
CN112287756A (en) * 2020-09-25 2021-01-29 北京佳格天地科技有限公司 Ground object identification method, device, storage medium and terminal
CN114529489A (en) * 2022-03-01 2022-05-24 中国科学院深圳先进技术研究院 Multi-source remote sensing image fusion method, device, equipment and storage medium
CN114821733A (en) * 2022-05-12 2022-07-29 济南博观智能科技有限公司 Method, device and medium for compensating robustness of mode recognition model of unconstrained scene
CN114942031A (en) * 2022-05-27 2022-08-26 驭势科技(北京)有限公司 Visual positioning method, visual positioning and mapping method, device, equipment and medium
CN115293999A (en) * 2022-07-20 2022-11-04 河南城建学院 Remote sensing image cloud removing method integrating multi-temporal information and sub-channel dense convolution
CN115424221A (en) * 2022-07-21 2022-12-02 深圳元戎启行科技有限公司 Point cloud and image fusion method, related detection method, device and storage medium
CN115187867A (en) * 2022-07-26 2022-10-14 郑州航空工业管理学院 Multi-source remote sensing image fusion method and system based on deep learning
CN115829915A (en) * 2022-08-19 2023-03-21 北京旷视科技有限公司 Image quality detection method, electronic device, storage medium, and program product

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography; T. Mertens et al.; Computer Graphics Forum; vol. 28; 161-171 *
Research on Low-Illumination Image Enhancement Algorithms Based on Retinex; Yin Chao; China Master's Theses Full-text Database, Information Science and Technology; vol. 2020, no. 1; I138-2162 *
Research on Texture Detail Enhancement and Recognition Algorithms for Embossed Characters on Metal Surfaces Based on Deep Learning; Wu Huaxiong; China Master's Theses Full-text Database, Engineering Science and Technology I; vol. 2023, no. 2; B022-487 *

Also Published As

Publication number Publication date
CN116342449A (en) 2023-06-27

Similar Documents

Publication Publication Date Title
WO2021129642A1 (en) Image processing method, apparatus, computer device, and storage medium
EP3167446B1 (en) Apparatus and method for supplying content aware photo filters
CN106682632B (en) Method and device for processing face image
CN109410131B (en) Face beautifying method and system based on condition generation antagonistic neural network
EP2420955A2 (en) Terminal device and method for augmented reality
CN106462768B (en) Using characteristics of image from image zooming-out form
US8160358B2 (en) Method and apparatus for generating mosaic image
CN110288534B (en) Image processing method, device, electronic equipment and storage medium
CN109086742A (en) scene recognition method, scene recognition device and mobile terminal
US20130321368A1 (en) Apparatus and method for providing image in terminal
CN113194254A (en) Image shooting method and device, electronic equipment and storage medium
US20220067888A1 (en) Image processing method and apparatus, storage medium, and electronic device
CN107332977A (en) The method and augmented reality equipment of augmented reality
CN112950640A (en) Video portrait segmentation method and device, electronic equipment and storage medium
CN111369482A (en) Image processing method and device, electronic equipment and storage medium
CN116342449B (en) Image enhancement method, device and storage medium
CN109615620A (en) The recognition methods of compression of images degree, device, equipment and computer readable storage medium
CN111553865B (en) Image restoration method and device, electronic equipment and storage medium
CN113837980A (en) Resolution adjusting method and device, electronic equipment and storage medium
CN114004750A (en) Image processing method, device and system
CN112800276A (en) Video cover determination method, device, medium and equipment
CN104407838A (en) Methods and equipment for generating random number and random number set
CN106358006B (en) The bearing calibration of video and device
CN111489323A (en) Double-light-field image fusion method, device and equipment and readable storage medium
CN109040584A (en) A kind of method and apparatus of interaction shooting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant