CN112258442A - Image fusion method and device, computer equipment and storage medium


Info

Publication number
CN112258442A
CN112258442A (application number CN202011263595.1A)
Authority
CN
China
Prior art keywords
image
frequency domain
fusion
fused
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011263595.1A
Other languages
Chinese (zh)
Inventor
林枝叶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011263595.1A priority Critical patent/CN112258442A/en
Publication of CN112258442A publication Critical patent/CN112258442A/en
Priority to PCT/CN2021/117562 priority patent/WO2022100250A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration › G06T5/50 by the use of more than one image, e.g. averaging, subtraction
    • G06T5/00 Image enhancement or restoration › G06T5/40 by the use of histogram techniques
    • G06T7/00 Image analysis › G06T7/10 Segmentation; Edge detection › G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/10 Image acquisition modality › G06T2207/10024 Color image
    • G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/20 Special algorithmic details › G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/20 Special algorithmic details › G06T2207/20212 Image combination › G06T2207/20221 Image fusion; Image merging

Abstract

The application relates to an image fusion method and apparatus, a computer device, and a storage medium. The method performs spatial conversion on a visible light image to be processed to obtain a color image, a brightness image, and a first frequency domain image; performs frequency domain spatial conversion on a near-infrared image to be processed to obtain a second frequency domain image; fuses the first and second frequency domain images to obtain a first fused image; fuses the near-infrared image and the brightness image to obtain a second fused image; and obtains a target fused image from the color image, the first fused image, and the second fused image. By fusing the detail information and the brightness information of the visible light and near-infrared images separately, the two fusion processes do not affect each other, so the target fused image retains more detail and brightness information and its quality is improved.

Description

Image fusion method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image fusion method and apparatus, a computer device, and a storage medium.
Background
With the popularity of devices with camera functions, such as cameras, iPads, and mobile phones, users' quality requirements for captured images keep rising.
To improve the quality of captured images, methods that fuse several different types of captured images are now available. For example, the industry has proposed fusing an RGB image captured by a visible light acquisition device with an NIR image captured by a near-infrared acquisition device, making full use of the high brightness of visible light images and the strong penetration of near-infrared images to obtain a fused image of good quality.
However, fusion methods based on visible light and near-infrared images may lose part of the captured images' information during fusion, so the quality of the fused image is still limited.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide an image fusion method, apparatus, computer device, and storage medium capable of improving the quality of fused images.
In a first aspect, a method of image fusion, the method comprising:
performing spatial conversion on a visible light image to be processed to obtain a color image, a brightness image and a first frequency domain image, and performing frequency domain spatial conversion on a near-infrared image to be processed to obtain a second frequency domain image;
fusing the first frequency domain image and the second frequency domain image to obtain a first fused image, and fusing the near-infrared image and the brightness image to obtain a second fused image;
and obtaining a target fusion image according to the color image, the first fusion image and the second fusion image.
In a second aspect, an image fusion apparatus, the apparatus comprising:
the conversion module is used for carrying out spatial conversion on the visible light image to be processed to obtain a color image, a brightness image and a first frequency domain image, and carrying out frequency domain spatial conversion on the near-infrared image to be processed to obtain a second frequency domain image;
the fusion module is used for fusing the first frequency domain image and the second frequency domain image to obtain a first fusion image, and fusing the near-infrared image and the brightness image to obtain a second fusion image;
and the acquisition module is used for acquiring a target fusion image according to the color image, the first fusion image and the second fusion image.
In a third aspect, a computer device comprises a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
performing spatial conversion on a visible light image to be processed to obtain a color image, a brightness image and a first frequency domain image, and performing frequency domain spatial conversion on a near-infrared image to be processed to obtain a second frequency domain image;
fusing the first frequency domain image and the second frequency domain image to obtain a first fused image, and fusing the near-infrared image and the brightness image to obtain a second fused image;
and obtaining a target fusion image according to the color image, the first fusion image and the second fusion image.
In a fourth aspect, a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of:
performing spatial conversion on a visible light image to be processed to obtain a color image, a brightness image and a first frequency domain image, and performing frequency domain spatial conversion on a near-infrared image to be processed to obtain a second frequency domain image;
fusing the first frequency domain image and the second frequency domain image to obtain a first fused image, and fusing the near-infrared image and the brightness image to obtain a second fused image;
and obtaining a target fusion image according to the color image, the first fusion image and the second fusion image.
According to the image fusion method, apparatus, computer device, and storage medium above, the visible light image to be processed is spatially converted to obtain a color image, a brightness image, and a first frequency domain image; the near-infrared image to be processed is frequency-domain converted to obtain a second frequency domain image; the first and second frequency domain images are fused to obtain a first fused image; the near-infrared image and the brightness image are fused to obtain a second fused image; and a target fused image is obtained from the color image, the first fused image, and the second fused image. The detail information and the brightness information of the visible light and near-infrared images are thus fused separately, so that the two fusion processes do not affect each other.
Drawings
FIG. 1 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 2 is a flow diagram illustrating an image fusion method according to an embodiment;
FIG. 3 is a flowchart illustrating an implementation manner of S102 in the embodiment of FIG. 2;
FIG. 4 is a schematic diagram of a frequency domain image partition provided in one embodiment;
FIG. 5 is a flowchart illustrating an implementation manner of S202 in the embodiment of FIG. 2;
FIG. 6 is a flow diagram illustrating a method for image fusion in one embodiment;
FIG. 7 is a schematic illustration of a frequency domain image partition provided in one embodiment;
FIG. 8 is a flowchart illustrating an implementation manner of S502 in the embodiment of FIG. 7;
FIG. 9 is a flowchart illustrating an implementation manner of S503 in the embodiment of FIG. 7;
FIG. 10 is a flowchart illustrating an implementation manner of S103 in the embodiment of FIG. 2;
FIG. 11 is a flowchart illustrating an implementation manner of S101 in the embodiment of FIG. 2;
FIG. 12 is a flowchart illustrating an image fusion method according to an embodiment;
FIG. 13 is a schematic diagram showing the configuration of an image fusion system in one embodiment;
FIG. 14 is a block diagram showing the construction of an image fusion apparatus according to an embodiment;
FIG. 15 is a block diagram showing the construction of an image fusion apparatus according to an embodiment;
FIG. 16 is a block diagram showing the construction of an image fusion apparatus according to an embodiment;
FIG. 17 is a block diagram showing the construction of an image fusion apparatus according to an embodiment;
FIG. 18 is a block diagram showing the construction of an image fusion apparatus according to an embodiment;
FIG. 19 is a block diagram showing the construction of an image fusion apparatus according to an embodiment;
FIG. 20 is a block diagram showing the construction of an image fusion apparatus according to an embodiment;
FIG. 21 is a block diagram showing the structure of an image fusion apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image fusion method provided by the application can be applied to the computer device shown in fig. 1, which may be a server or a terminal; its internal structure diagram may be as shown in fig. 1. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements an image fusion method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a key, trackball, or touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, as shown in fig. 2, an image fusion method is provided, which is described by taking the method as an example applied to the computer device in fig. 1, and includes the following steps:
s101, performing spatial conversion on the visible light image to be processed to obtain a color image, a brightness image and a first frequency domain image, and performing frequency domain spatial conversion on the near-infrared image to be processed to obtain a second frequency domain image.
Here, the visible light image is also called an RGB image, the near-infrared image is also called an NIR (Near-Infrared) image, and the frequency domain image is also called a detail image because it contains most of the high-frequency information of the original image. In this embodiment, the visible light acquisition device and the near-infrared acquisition device may shoot the same target object at the same time and transmit the resulting visible light image and near-infrared image to the computer device. When the computer device obtains the visible light image, it can perform spatial conversion on it: the color information is extracted to obtain a color image, the brightness information is extracted to obtain a brightness image, and the detail information is extracted to obtain a first frequency domain image. When the computer device obtains the near-infrared image, it can perform frequency domain spatial conversion on it and extract the detail information to obtain a second frequency domain image.
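The decomposition in S101 might be sketched as follows. The patent does not fix the exact transforms, so the YUV-style luma weighting and the 3×3 box-blur high-pass used here are illustrative assumptions, not the claimed method:

```python
import numpy as np

def _highpass(gray):
    """Detail (frequency-domain) image: gray minus a 3x3 box blur."""
    gray = gray.astype(np.float64)
    pad = np.pad(gray, 1, mode="edge")
    blur = sum(pad[i:i + gray.shape[0], j:j + gray.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return gray - blur

def decompose_visible(rgb):
    """Split an RGB image into a color image (chrominance), a brightness
    image, and a first frequency-domain (detail) image, per S101."""
    rgb = rgb.astype(np.float64)
    # Brightness image: standard luma weighting (an assumption).
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Color image: RGB with the luminance component removed.
    color = rgb - luma[..., None]
    # First frequency-domain image: high-frequency detail of the luma.
    detail = _highpass(luma)
    return color, luma, detail

def detail_of_nir(nir):
    """Second frequency-domain image: high-pass of the NIR image."""
    return _highpass(nir)
```

On a flat image the detail image is zero everywhere, which matches the intuition that a featureless region carries no high-frequency information.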
S102, fusing the first frequency domain image and the second frequency domain image to obtain a first fused image, and fusing the near-infrared image and the brightness image to obtain a second fused image.
In this embodiment, when the computer device has the first and second frequency domain images, it may fuse them with a suitable image fusion method to obtain a first fused image, so that the first fused image contains the detail information of both the visible light image and the near-infrared image. Correspondingly, when the computer device has the near-infrared image and the brightness image, it may fuse them with a suitable image fusion method to obtain a second fused image, so that the second fused image contains the brightness information of both images. For example, the computer device may fuse the first and second frequency domain images with a Laplacian fusion method; optionally, it may also fuse them based on a machine learning algorithm. This embodiment does not limit the specific image fusion method adopted.
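The Laplacian fusion named above is only mentioned generically; a minimal pyramid-fusion sketch follows. The three-level depth, decimation-based down/upsampling, and per-pixel max-absolute selection rule are illustrative assumptions rather than details from the patent:

```python
import numpy as np

def _down(img):
    # Decimate by 2 (a stand-in for a proper Gaussian pyrDown).
    return img[::2, ::2]

def _up(img, shape):
    # Nearest-neighbour upsample by 2, cropped to the target shape.
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    gauss, pyr = img.astype(np.float64), []
    for _ in range(levels - 1):
        smaller = _down(gauss)
        pyr.append(gauss - _up(smaller, gauss.shape))  # band-pass layer
        gauss = smaller
    pyr.append(gauss)  # coarsest residual
    return pyr

def fuse_details(a, b, levels=3):
    """Fuse two detail images with a Laplacian pyramid, keeping the
    stronger response at each pixel and level."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa, pb)]
    # Collapse the fused pyramid back into a single image.
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = lap + _up(out, lap.shape)
    return out
```

Because the same upsampler is used for analysis and reconstruction, fusing an image with itself reconstructs it exactly, a useful sanity check.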
And S103, obtaining a target fusion image according to the color image, the first fusion image and the second fusion image.
In this embodiment, when the computer device has the first fused image, the second fused image, and the color image, it may fuse all three with a suitable image fusion method to obtain the target fused image. Optionally, it may first fuse the first and second fused images to obtain an initial fused image, and then fuse the initial fused image with the color image to obtain the target fused image. Optionally, after fusing the first and second fused images into an initial fused image, it may directly restore the color information of the initial fused image from the color image to obtain the target fused image.
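One plausible reading of S103 (the patent leaves the exact combination rule open) is to rebuild a luminance channel from the fused brightness and fused detail, then restore the color on top of it; the additive recombination below is an assumption that simply inverts the luma-removal split sketched earlier:

```python
import numpy as np

def compose_target(color, detail_fused, brightness_fused):
    """Combine the color image with the first (detail) and second
    (brightness) fused images into a target fused image."""
    luma = brightness_fused + detail_fused   # reconstructed Y channel
    rgb = color + luma[..., None]            # undo the luminance removal
    return np.clip(rgb, 0, 255)              # keep values displayable
```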
In the image fusion method, a color image, a brightness image, and a first frequency domain image are obtained by spatially converting the visible light image to be processed; a second frequency domain image is obtained by frequency-domain converting the near-infrared image to be processed; the first and second frequency domain images are fused into a first fused image; the near-infrared image and the brightness image are fused into a second fused image; and a target fused image is obtained from the color image, the first fused image, and the second fused image. The detail information and the brightness information of the two source images are thus fused separately, so that the two fusion processes do not affect each other.
In an embodiment, an implementation manner of the foregoing S102 is provided, and as shown in fig. 3, the step "fusing the first frequency domain image and the second frequency domain image to obtain a first fused image" in the foregoing S102 specifically includes:
s201, the first frequency domain image and the second frequency domain image are respectively divided to obtain a plurality of first frequency domain image blocks corresponding to the first frequency domain image and a plurality of second frequency domain image blocks corresponding to the second frequency domain image.
In this embodiment, when fusing the first and second frequency domain images, the computer device may first divide the first frequency domain image into a preset number of first frequency domain image blocks and divide the second frequency domain image into the same number of second frequency domain image blocks, where each first frequency domain image block has the same size as its corresponding second frequency domain image block. The preset number may be determined in advance by the computer device according to the actual application requirements and the image size, and is not limited herein.
S202, respectively fusing each first frequency domain image block and the corresponding second frequency domain image block to obtain a first fused image.
In this embodiment, when the computer device has the first frequency domain image blocks obtained by dividing the first frequency domain image and the corresponding second frequency domain image blocks obtained by dividing the second frequency domain image, it may fuse each first frequency domain image block with its corresponding second frequency domain image block to obtain the first fused image. For example, as shown in fig. 4, the first frequency domain image A is divided into 4 blocks and the second frequency domain image B is divided into 4 blocks; to fuse A and B, block a1 of A is fused with block b1 of B, block a2 with block b2, block a3 with block b3, and block a4 with block b4. This embodiment thus realizes divide-and-fuse, i.e., block-by-block fusion of the first and second frequency domain images, so that the information in each block is unaffected by other regions during fusion; this reduces information loss in the fusion process and can improve the quality of the fused image.
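The block-wise scheme of S201/S202 can be sketched as a small helper that divides two same-sized images into a grid of corresponding blocks and stitches the fused results back together. The grid-based slicing is an assumption; the patent only requires equal-sized corresponding blocks:

```python
import numpy as np

def fuse_by_blocks(a, b, grid, fuse_block):
    """Divide images `a` and `b` into a (rows, cols) grid of
    corresponding blocks (as in fig. 4: a1<->b1, ..., a4<->b4),
    fuse each pair with `fuse_block`, and stitch the results."""
    rows, cols = grid
    h, w = a.shape[0] // rows, a.shape[1] // cols
    out = np.empty(a.shape, dtype=np.float64)
    for r in range(rows):
        for c in range(cols):
            sl = (slice(r * h, (r + 1) * h), slice(c * w, (c + 1) * w))
            out[sl] = fuse_block(a[sl], b[sl])
    return out
```

With a simple averaging `fuse_block`, the stitched result equals the element-wise average, confirming that the tiling loses nothing.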
Optionally, the present application further provides a specific implementation manner of the foregoing S202, for example, as shown in fig. 5, the foregoing S202 "respectively fuses each first frequency domain image block and the corresponding second frequency domain image block to obtain a first fused image", including:
s301, determining whether each first frequency-domain image block includes texture information, and whether each corresponding second frequency-domain image block includes texture information.
Texture information represents the high-frequency, i.e., detail, information in a frequency domain image. In this embodiment, before fusing a first frequency domain image block with its corresponding second frequency domain image block, the computer device may determine whether each block contains texture information, which yields four possible results: first, the first block contains texture information and the second does not; second, both blocks contain texture information; third, the first block does not contain texture information and the second does; fourth, neither block contains texture information.
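The patent does not fix how the presence of texture is detected; a common, simple proxy is to threshold a block's standard deviation (the threshold value here is purely illustrative):

```python
import numpy as np

def has_texture(block, threshold=1.0):
    """Decide whether a frequency-domain block carries texture
    (high-frequency) information, by thresholding its spread."""
    return float(np.std(block)) > threshold
```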
And S302, according to the determination result, fusing each first frequency domain image block and each second frequency domain image block to obtain a first fused image composed of a plurality of fused image blocks.
Different determination results correspond to different application scenarios; that is, different image fusion methods can be adopted to fuse a first frequency domain image block with its corresponding second frequency domain image block in different scenarios. In this embodiment, after determining whether each pair of blocks contains texture information, the computer device may select a fusion method according to the determination result of each pair, fuse each first frequency domain image block with its corresponding second frequency domain image block to obtain a plurality of fused image blocks, and stitch the fused image blocks together to form the first fused image.
This embodiment describes the specific process of fusing the first and second frequency domain images: a fusion method is selected according to whether each frequency domain image block contains texture information, so that more of the texture information in the second frequency domain image (from the near-infrared image) is retained, making full use of the near-infrared image's strong penetration to improve the definition of the fused image.
Based on the determination results described in the above embodiments, the following embodiments will describe a method for fusing a first frequency-domain image block and a second frequency-domain image block according to different determination results.
In a first example, if the determination result is that the first frequency domain image block contains texture information and the corresponding second frequency domain image block does not, the first frequency domain image block is determined as the fused image block. That is, in this scenario the computer device directly takes the first frequency domain image block as the fused image block, which is equivalent to not performing a fusion operation on the pair. This retains the texture information of the first frequency domain image and skips the fusion operation in this case, indirectly improving the speed at which the computer device fuses the first and second frequency domain images.
In a second example, if the determination result is that neither the first frequency domain image block nor the corresponding second frequency domain image block contains texture information, the first frequency domain image block is determined as the fused image block. That is, in this scenario the computer device directly takes the first frequency domain image block as the fused image block, which is equivalent to not performing a fusion operation on the pair. Since neither block contains texture information, nothing is lost, and skipping the fusion operation in this case indirectly improves the speed at which the computer device fuses the first and second frequency domain images.
In a third example, if the determination result is that the first frequency domain image block contains texture information and the corresponding second frequency domain image block also contains texture information, the method for fusing the two blocks, as shown in fig. 6, includes:
s401, determining the weight of the first frequency domain image block and the weight of the second frequency domain image block; the weight of the second frequency-domain image block is greater than the weight of the first frequency-domain image block.
To make full use of the richer detail information in the near-infrared image, in this scenario the computer device first determines the weight of the first frequency domain image block and the weight of the second frequency domain image block when fusing them, where the weight of the second frequency domain image block is greater than that of the first. This increases the proportion of detail information contributed by the second frequency domain image block, so that the fused image presents more of the detail in the near-infrared image.
S402, according to the weight of the first frequency domain image block and the weight of the second frequency domain image block, the first frequency domain image block and the second frequency domain image block are fused to obtain fused image blocks corresponding to the first fused image and the second fused image.
In this embodiment, after the computer device determines the two weights, it may fuse the first and second frequency domain image blocks according to them; specifically, it may compute a weighted sum of the two blocks to obtain the fused image block.
In the above embodiment, when the first frequency domain image block and the second frequency domain image block both include texture information, the weight of the frequency domain image block corresponding to the near-infrared image is increased by increasing the weight of the second frequency domain image block in the fusion process, so that the fusion image block includes more detailed information in the near-infrared image, and the image definition of the fusion image block can be increased.
In a fourth example, if the determination result is that the first frequency domain image block does not contain texture information and the corresponding second frequency domain image block does, the second frequency domain image block is determined as the fused image block. That is, in this scenario the computer device directly takes the second frequency domain image block as the fused image block, which is equivalent to not performing a fusion operation on the pair. Because only the second frequency domain image block contains texture information, it reflects detail that the first block does not present; using it directly as the fused image block means the final target fused image contains detail information absent from the original visible light image. The characteristics of the visible light and near-infrared images thus complement each other, improving the definition of the target fused image.
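The four cases above can be collected into one hypothetical dispatch function. The std-based texture test, its threshold, and the concrete weights 0.4/0.6 are illustrative assumptions; the patent only requires that the second block's weight exceed the first's:

```python
import numpy as np

def fuse_block_pair(first, second, w_first=0.4, w_second=0.6,
                    threshold=1.0):
    """Fuse one pair of corresponding frequency-domain blocks
    according to which of them contains texture information."""
    t1 = float(np.std(first)) > threshold   # visible-light block
    t2 = float(np.std(second)) > threshold  # near-infrared block
    if t1 and t2:
        # Both textured: weighted sum favouring the NIR block.
        return w_first * first + w_second * second
    if t2:
        # Only the NIR block has texture: keep it outright.
        return second
    # Only the visible block has texture, or neither does:
    # keep the visible block and skip fusion.
    return first
```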
In an embodiment, the present application further provides a specific implementation of fusing the near-infrared image and the luminance image, as shown in fig. 7, the implementation includes:
S501, acquiring a histogram of the near-infrared image and a histogram of the brightness image.
The histogram of the near-infrared image reflects the distribution of brightness over the pixel points of the near-infrared image, and the histogram of the brightness image reflects the distribution of brightness over the pixel points of the brightness image. In this embodiment, after acquiring the near-infrared image, the computer device can obtain its histogram by analyzing the brightness distribution of each pixel point on the near-infrared image; correspondingly, after obtaining the brightness image, the computer device can obtain its histogram by analyzing the brightness distribution of each pixel point on the brightness image.
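For a single-channel 8-bit image, such a histogram is simply a 256-bin count of pixel brightness values. A minimal pure-Python sketch (an illustration, not the patent's implementation):

```python
def brightness_histogram(image):
    """Count occurrences of each 8-bit brightness value.

    image: 2-D list of integer pixel brightnesses in [0, 255].
    Returns a 256-element list where hist[v] is the number of
    pixels whose brightness equals v.
    """
    hist = [0] * 256
    for row in image:
        for value in row:
            hist[value] += 1
    return hist

tiny = [[0, 0, 128], [255, 128, 128]]
hist = brightness_histogram(tiny)
```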
S502, correcting the brightness of the near-infrared image according to the histogram of the near-infrared image to obtain a first corrected image.
In practical applications, particularly for landscape images, the near-infrared image can give the picture a stronger sense of layering (for example, green plants present more detail information in the near-infrared image). Therefore, in this embodiment, after acquiring the histogram of the near-infrared image, the computer device knows the brightness distribution of the pixel points on the near-infrared image. According to this distribution, it may correct the brightness values of the darker pixel points on the near-infrared image to enhance the brightness of that part of the pixel points, or it may correct the brightness of the near-infrared image as a whole to enhance its overall brightness, thereby obtaining the first corrected image.
Optionally, in the step S502, "correcting the brightness of the near-infrared image according to the histogram of the near-infrared image to obtain a first corrected image", as shown in fig. 8, the method specifically includes:
S601, determining a first target pixel point in the near-infrared image according to the brightness distribution indicated by the histogram of the near-infrared image; the brightness value corresponding to the first target pixel point is smaller than a preset brightness threshold value.
The first target pixel point is a pixel point corresponding to darker brightness in the near-infrared image. The preset brightness threshold value is determined by the computer device according to the actual correction requirement in advance. In this embodiment, when the computer device acquires the histogram of the near-infrared image, the first target pixel point of which the luminance value is smaller than the preset luminance threshold in the near-infrared image may be determined according to the luminance distribution indicated by the histogram of the near-infrared image, so as to correct the luminance value of the first target pixel point later.
S602, the brightness of the first target pixel point is enhanced according to the brightness threshold value, and a first corrected image is obtained.
When the computer device determines the first target pixel point in the near-infrared image, the brightness of the first target pixel point can be enhanced, that is, the brightness value of the first target pixel point is increased to a preset brightness threshold value, so that a first corrected image is obtained. For example, if the object corresponding to the first target pixel point in the near-infrared image is a green plant, the brightness value of the first target pixel point is increased to the preset brightness threshold value, and then the brightness of the green plant is enhanced, so that the problem that the green plant is too dark in the conventional near-infrared image is solved.
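A minimal sketch of S601 to S602, assuming a simple clamp-to-threshold correction (the patent describes raising dark pixels to the threshold but does not otherwise fix a particular enhancement formula):

```python
def enhance_dark_pixels(image, threshold):
    """Raise every pixel whose brightness is below `threshold` up to it.

    image: 2-D list of brightness values; returns the corrected image.
    Pixels at or above the threshold are left unchanged, so only the
    first target pixel points (the darker ones) are modified.
    """
    return [
        [max(value, threshold) for value in row]
        for row in image
    ]

nir = [[10, 200], [60, 90]]
corrected = enhance_dark_pixels(nir, 80)  # first corrected image
```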
And S503, correcting the brightness of the brightness image according to the histogram of the brightness image to obtain a second corrected image.
In practical applications, the visible light image often suffers from over-exposure (for example, a cloud layer in the visible light image may be extremely bright). Therefore, in this embodiment, after obtaining the histogram of the brightness image of the visible light image, the computer device knows the brightness distribution of the pixel points on the brightness image. According to this distribution, it may correct the brightness values of the brighter pixel points on the brightness image to weaken the brightness of that part of the pixel points, or it may correct the brightness of the brightness image as a whole to weaken its overall brightness, thereby obtaining a second corrected image.
Optionally, in step S503, "the brightness of the brightness image is corrected according to the histogram of the brightness image to obtain a second corrected image", as shown in fig. 9, the method specifically includes:
S701, determining a second target pixel point in the brightness image according to the brightness distribution indicated by the histogram of the brightness image; the corresponding brightness value of the second target pixel point is greater than a preset brightness threshold value.
And the second target pixel point is a pixel point corresponding to stronger brightness in the brightness image. The preset brightness threshold value is determined by the computer device according to the actual correction requirement in advance. In this embodiment, when the computer device obtains the histogram of the luminance image, the second target pixel point, in which the luminance value in the luminance image is greater than the preset luminance threshold, may be determined according to the luminance distribution indicated by the histogram of the luminance image, so as to correct the luminance value of the second target pixel point later.
S702, weakening the brightness of the second target pixel point according to the brightness threshold value to obtain a second corrected image.
When the computer device determines the second target pixel point in the brightness image, the brightness of the second target pixel point can be weakened, that is, the brightness value of the second target pixel point is reduced to a preset brightness threshold value, so that a second corrected image is obtained. For example, if the object corresponding to the second target pixel point in the luminance image is a cloud layer, the luminance value of the second target pixel point is reduced to the preset luminance threshold value, and then the luminance of the cloud layer is weakened, so that the problem that the cloud layer is too bright in the conventional visible light image is solved.
And S504, fusing the first corrected image and the second corrected image to obtain a second fused image.
After the computer device obtains the first corrected image and the second corrected image, it can fuse them using a corresponding image fusion method to obtain a second fused image. In this method, the overly dark portions of the near-infrared image are enhanced and the overly bright portions of the visible light image, which are at risk of over-exposure, are weakened before the corrected images are fused. Fusing the first corrected image and the second corrected image can therefore yield a high-dynamic-range image, so that the fused image is clearer while retaining a sense of layering.
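The patent leaves the choice of fusion method at S504 open ("a corresponding image fusion method"). As one hedged possibility, a per-pixel weighted average of the two corrected brightness images:

```python
def fuse_corrected(first, second, alpha=0.5):
    """Per-pixel weighted average of two same-sized brightness images.

    alpha weights the first corrected image (the enhanced near-infrared
    image); (1 - alpha) weights the second (the attenuated visible-light
    luminance). This is an illustrative stand-in, not a method the
    patent mandates.
    """
    return [
        [alpha * a + (1.0 - alpha) * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(first, second)
    ]

first_corr = [[80.0, 200.0]]
second_corr = [[120.0, 100.0]]
second_fused = fuse_corrected(first_corr, second_corr, alpha=0.5)
```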
In an embodiment, a specific implementation manner of S103 in the embodiment of fig. 2 is further provided, and as shown in fig. 10, the implementation manner includes:
S801, fusing the first fused image and the second fused image to obtain a third fused image.
In this embodiment, when the computer device obtains the first fusion image and the second fusion image, the first fusion image and the second fusion image may be further fused to obtain a third fusion image, so that the third fusion image simultaneously includes the detail information in the first fusion image and the luminance information in the second fusion image, that is, includes more detail information in the near-infrared image and more luminance information in the visible light image.
In practical application, because the first fused image contains the detail information and the second fused image contains the brightness information, Poisson fusion can be adopted in the process of fusing the first fused image and the second fused image to raise the proportion of the first fused image, so that the resulting third fused image contains more detail information and its definition is improved.
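Poisson fusion transfers the gradient field of one image into another while keeping the second image's values on the region boundary, by solving a Poisson equation. The sketch below solves it by Jacobi iteration on a small grid using NumPy; it is a generic illustration of the technique, not the patent's implementation:

```python
import numpy as np

def poisson_fuse(src, dst, mask, iters=200):
    """Blend `src` gradients into `dst` inside `mask` via Jacobi iteration.

    src, dst: 2-D float arrays of the same shape; mask: boolean array that
    is True strictly inside the region to blend (never on the border).
    On convergence, the result matches dst on the boundary while its
    interior Laplacian matches that of src (gradient transfer).
    """
    f = dst.astype(float).copy()
    s = src.astype(float)
    # Discrete Laplacian of the source: the guidance field.
    lap = (4 * s
           - np.roll(s, 1, 0) - np.roll(s, -1, 0)
           - np.roll(s, 1, 1) - np.roll(s, -1, 1))
    for _ in range(iters):
        nbr = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
               + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f = np.where(mask, (nbr + lap) / 4.0, f)
    return f

# Sanity check: with a flat source (zero gradient), the interior relaxes
# to the harmonic interpolation of the boundary, i.e. the flat destination.
dst = np.full((8, 8), 10.0)
src = np.full((8, 8), 3.0)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
out = poisson_fuse(src, dst, mask)
```

Jacobi iteration is chosen here only for brevity; a practical implementation would use a faster solver (e.g. conjugate gradients or a multigrid method).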
And S802, performing color recovery on the third fusion image according to the color image to obtain a target fusion image.
In this embodiment, after the computer device obtains the third fused image, the color of the third fused image may be restored according to the color information included in the color image separated from the visible light image, so as to obtain the target fused image, and the target fused image has the color in the visible light image before fusion.
The method of the embodiment is particularly suitable for shooting images in environments such as foggy days or cloudy days, and compared with the traditional images shot by using a camera, the target fusion image obtained by using the method can more clearly present the shooting object in the foggy days or cloudy days.
In an embodiment, a specific implementation manner of S101 in the embodiment of fig. 2 is further provided, and as shown in fig. 11, the implementation manner includes:
S901, performing color space conversion on the visible light image to obtain a color image and a brightness image.
In this embodiment, when the computer device acquires the visible light image, color space conversion, for example, YCbCr conversion, may be performed on the visible light image, color information in the visible light image is extracted to obtain a color image, and luminance information in the visible light image is extracted to obtain a luminance image.
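As an illustration of such a conversion for a single pixel, using the common BT.601 full-range formulas (the patent names YCbCr conversion but does not prescribe specific coefficients):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel to (Y, Cb, Cr), BT.601 full-range coefficients.

    Y is the luminance sample used for the brightness image; (Cb, Cr)
    carry the color information used for the color image.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 + 0.564 * (b - y)
    cr = 128.0 + 0.713 * (r - y)
    return y, cb, cr

y, cb, cr = rgb_to_ycbcr(255, 255, 255)  # pure white: full luminance, neutral chroma
```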
And S902, performing frequency domain space conversion on the visible light image to obtain a first frequency domain image.
In this embodiment, when the computer device acquires the visible light image, frequency domain spatial conversion may be performed on the visible light image, and the detail information in the visible light image is extracted to obtain a first frequency domain image. According to the method, the visible light image is separated, and the color image, the brightness image and the first frequency domain image are respectively obtained, so that the visible light image and the near infrared image can be fused on the brightness channel and the frequency domain channel in a targeted manner, and the image quality after image fusion can be greatly improved.
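A 2-D discrete Fourier transform is one common choice for such a frequency domain space conversion; a minimal NumPy sketch (illustrative, since the patent does not name a specific transform):

```python
import numpy as np

def to_frequency_domain(image):
    """2-D DFT of a single-channel image, with the zero-frequency (DC)
    coefficient shifted to the center. Low-frequency coefficients near
    the center describe smooth brightness; coefficients farther out
    encode fine detail and texture."""
    return np.fft.fftshift(np.fft.fft2(np.asarray(image, dtype=float)))

# A perfectly flat image has energy only in the DC bin.
flat = np.full((4, 4), 7.0)
spec = to_frequency_domain(flat)
```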
In summary of the foregoing embodiments, the present application further provides an image fusion method, as shown in fig. 12, the method includes:
S1001, acquiring a visible light image to be processed and a near-infrared image to be processed.
S1002, performing color space conversion on the visible light image to obtain a color image and a brightness image.
S1003, performing frequency domain space conversion on the visible light image to obtain a first frequency domain image.
And S1004, performing frequency domain space conversion on the near-infrared image to obtain a second frequency domain image.
S1005, respectively dividing the first frequency domain image and the second frequency domain image to obtain a plurality of first frequency domain image blocks corresponding to the first frequency domain image and a plurality of second frequency domain image blocks corresponding to the second frequency domain image.
S1006, determining whether each first frequency domain image block contains texture information and whether each corresponding second frequency domain image block contains texture information. If the first frequency domain image block contains texture information and the corresponding second frequency domain image block does not, step S1007 is executed; if neither the first frequency domain image block nor the corresponding second frequency domain image block contains texture information, step S1007 is executed; if both contain texture information, steps S1008-S1009 are executed; if the first frequency domain image block does not contain texture information and the corresponding second frequency domain image block does, step S1010 is executed.
And S1007, determining the first frequency domain image block as a fused image block corresponding to the first fused image and the second fused image.
S1008, determining the weight of the first frequency domain image block and the weight of the second frequency domain image block; the weight of the second frequency-domain image block is greater than the weight of the first frequency-domain image block.
And S1009, fusing the first frequency domain image block and the second frequency domain image block according to the weight of the first frequency domain image block and the weight of the second frequency domain image block to obtain fused image blocks corresponding to the first fused image and the second fused image.
And S1010, determining the second frequency domain image block as a fused image block corresponding to the first fused image and the second fused image.
And S1011, acquiring a histogram of the near infrared image and a histogram of the brightness image.
S1012, determining a first target pixel point in the near-infrared image according to the brightness distribution indicated by the histogram of the near-infrared image; the brightness value corresponding to the first target pixel point is smaller than a preset brightness threshold value.
And S1013, enhancing the brightness of the first target pixel point according to the brightness threshold value to obtain a first corrected image.
S1014, determining a second target pixel point in the brightness image according to the brightness distribution indicated by the histogram of the brightness image; the corresponding brightness value of the second target pixel point is greater than a preset brightness threshold value.
And S1015, weakening the brightness of the second target pixel point according to the brightness threshold value to obtain a second corrected image.
And S1016, fusing the first corrected image and the second corrected image to obtain a second fused image.
S1017, fusing the first fused image and the second fused image to obtain a third fused image.
And S1018, performing color recovery on the third fused image according to the color image to obtain a target fused image.
The descriptions of the steps in the above method are the same as the descriptions of the steps in the above embodiment, and for details, refer to the foregoing description, which is not repeated herein.
Based on the image fusion method described in the above embodiment, the present application also provides an image fusion system, as shown in fig. 13, the image fusion system includes: the device comprises a color space conversion module, a first frequency domain space conversion module, a second frequency domain space conversion module, a first fusion module, a second fusion module, a third fusion module and a color recovery module. The color space conversion module is used for performing color conversion on the visible light image to obtain a color image and a brightness image; the first frequency domain space conversion module is used for carrying out frequency domain conversion on the visible light image to obtain a first frequency domain image; the second frequency domain space conversion module is used for carrying out frequency domain conversion on the near-infrared image to obtain a second frequency domain image; the first fusion module is used for fusing the first frequency domain image and the second frequency domain image to obtain a first fusion image; the second fusion module is used for fusing the brightness image and the near-infrared image to obtain a second fusion image; the third fusion module is used for fusing the first fusion image and the second fusion image to obtain a third fusion image; and the color recovery module is used for carrying out color recovery on the third fusion image according to the color image to obtain a target fusion image. The image fusion system can realize image fusion of the visible light image and the near infrared image, and the target fusion image obtained after fusion comprises more detail information and is also a high dynamic range image, so that the target fusion image with higher quality can be obtained through the image fusion system.
It should be understood that although the various steps in the flow charts of figs. 2-12 are shown in the order indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in figs. 2-12 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 14, there is provided an image fusion apparatus including: a conversion module 11, a first fusion module 12 and a second fusion module 13, wherein:
the conversion module 11 is configured to perform spatial conversion on the visible light image to be processed to obtain a color image, a luminance image and a first frequency domain image, and perform frequency domain spatial conversion on the near-infrared image to be processed to obtain a second frequency domain image;
a first fusion module 12, configured to fuse the first frequency domain image and the second frequency domain image to obtain a first fusion image, and fuse the near-infrared image and the luminance image to obtain a second fusion image;
and the second fusion module 13 is configured to obtain a target fusion image according to the color image, the first fusion image, and the second fusion image.
In one embodiment, as shown in fig. 15, the first fusion module 12 includes:
a dividing unit 121, configured to divide the first frequency domain image and the second frequency domain image respectively to obtain a plurality of first frequency domain image blocks corresponding to the first frequency domain image and a plurality of second frequency domain image blocks corresponding to the second frequency domain image;
a first fusion unit 122, configured to fuse each first frequency domain image block and the corresponding second frequency domain image block, respectively, to obtain the first fusion image.
In one embodiment, as shown in fig. 16, the first fusing unit 122 includes:
a first determining subunit 1221, configured to determine whether each of the first frequency-domain image blocks includes texture information, and whether each of the corresponding second frequency-domain image blocks includes the texture information;
the first fusion subunit 1222 is configured to fuse each first frequency-domain image block and each second frequency-domain image block according to a determination result, so as to obtain the first fusion image composed of multiple fusion image blocks.
In an embodiment, the first fusion subunit 1222 is specifically configured to, when the determination result is that the first frequency-domain image block includes texture information and the corresponding second frequency-domain image block does not include the texture information, determine the first frequency-domain image block as a fused image block corresponding to the first fused image and the second fused image.
In an embodiment, the first fusion subunit 1222 is specifically configured to, when the determination result is that the first frequency-domain image block does not include texture information, and the corresponding second frequency-domain image block does not include the texture information, determine the first frequency-domain image block as a fused image block corresponding to the first fused image and the second fused image.
In an embodiment, the first fusing subunit 1222 is specifically configured to, if the determination result is that the first frequency-domain image block includes texture information and the corresponding second frequency-domain image block includes the texture information, determine a weight of the first frequency-domain image block and a weight of the second frequency-domain image block; the weight of the second frequency domain image block is greater than that of the first frequency domain image block; and fusing the first frequency domain image block and the second frequency domain image block according to the weight of the first frequency domain image block and the weight of the second frequency domain image block to obtain fused image blocks corresponding to the first fused image and the second fused image.
In an embodiment, the first fusion subunit 1222 is specifically configured to, when the determination result is that the first frequency-domain image block does not include texture information and the corresponding second frequency-domain image block includes the texture information, determine the second frequency-domain image block as a fused image block corresponding to the first fused image and the second fused image.
In one embodiment, as shown in fig. 17, the first fusion module 12 includes:
an acquiring unit 123 configured to acquire a histogram of the near-infrared image and a histogram of the luminance image;
a first correction unit 124, configured to correct the brightness of the near-infrared image according to the histogram of the near-infrared image, so as to obtain a first corrected image;
a second correction unit 125, configured to correct the luminance of the luminance image according to the histogram of the luminance image, so as to obtain a second corrected image;
a second fusion unit 126, configured to fuse the first corrected image and the second corrected image to obtain the second fused image.
In one embodiment, as shown in fig. 18, the first modification unit 124 includes:
a second determining subunit 1241, configured to determine, according to the brightness distribution indicated by the histogram of the near-infrared image, a first target pixel point in the near-infrared image; the brightness value corresponding to the first target pixel point is smaller than a preset brightness threshold value;
and an enhancer unit 1242, configured to enhance the brightness of the first target pixel according to the brightness threshold, so as to obtain the first corrected image.
In one embodiment, as shown in fig. 19, the second correcting unit 125 includes:
a third determining subunit 1251, configured to determine, according to the luminance distribution indicated by the histogram of the luminance image, a second target pixel point in the luminance image; the corresponding brightness value of the second target pixel point is greater than a preset brightness threshold value;
and a weakening subunit 1252, configured to weaken the brightness of the second target pixel point according to the brightness threshold, to obtain the second corrected image.
In one embodiment, as shown in fig. 20, the second fusion module 13 includes:
a third fusion unit 131, configured to fuse the first fusion image and the second fusion image to obtain a third fusion image;
a restoring unit 132, configured to perform color restoration on the third fused image according to the color image, so as to obtain the target fused image.
In an embodiment, as shown in fig. 20, the third fusing unit 131 is specifically configured to fuse the first fused image and the second fused image by using a poisson fusion method to obtain the third fused image.
In one embodiment, as shown in fig. 21, the conversion module 11 includes:
a color conversion unit 111, configured to perform color space conversion on the visible light image to obtain the color image and the luminance image;
a frequency domain converting unit 112, configured to perform frequency domain spatial conversion on the visible light image to obtain the first frequency domain image.
For specific limitations of the image fusion device, reference may be made to the above limitations of the image fusion method, which are not described herein again. The various modules in the image fusion device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
performing spatial conversion on a visible light image to be processed to obtain a color image, a brightness image and a first frequency domain image, and performing frequency domain spatial conversion on a near-infrared image to be processed to obtain a second frequency domain image;
fusing the first frequency domain image and the second frequency domain image to obtain a first fused image, and fusing the near-infrared image and the brightness image to obtain a second fused image;
and obtaining a target fusion image according to the color image, the first fusion image and the second fusion image.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
performing spatial conversion on a visible light image to be processed to obtain a color image, a brightness image and a first frequency domain image, and performing frequency domain spatial conversion on a near-infrared image to be processed to obtain a second frequency domain image;
fusing the first frequency domain image and the second frequency domain image to obtain a first fused image, and fusing the near-infrared image and the brightness image to obtain a second fused image;
and obtaining a target fusion image according to the color image, the first fusion image and the second fusion image.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they are not therefore to be construed as limiting the scope of the invention patent. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (16)

1. An image fusion method, characterized in that the method comprises:
performing spatial conversion on a visible light image to be processed to obtain a color image, a brightness image and a first frequency domain image, and performing frequency domain spatial conversion on a near-infrared image to be processed to obtain a second frequency domain image;
fusing the first frequency domain image and the second frequency domain image to obtain a first fused image, and fusing the near-infrared image and the brightness image to obtain a second fused image;
and obtaining a target fusion image according to the color image, the first fusion image and the second fusion image.
2. The method according to claim 1, wherein said fusing the first frequency domain image and the second frequency domain image to obtain a first fused image comprises:
dividing the first frequency domain image and the second frequency domain image respectively to obtain a plurality of first frequency domain image blocks corresponding to the first frequency domain image and a plurality of second frequency domain image blocks corresponding to the second frequency domain image;
and respectively fusing each first frequency domain image block and the corresponding second frequency domain image block to obtain the first fused image.
3. The method according to claim 2, wherein the fusing each of the first frequency-domain image blocks and the corresponding second frequency-domain image block to obtain the first fused image respectively comprises:
determining whether each first frequency domain image block contains texture information and whether each corresponding second frequency domain image block contains the texture information;
and according to the determination result, fusing each first frequency domain image block and each second frequency domain image block to obtain the first fused image composed of a plurality of fused image blocks.
4. The method according to claim 3, wherein the fusing each of the first frequency-domain image blocks and each of the second frequency-domain image blocks according to the determination result to obtain the first fused image composed of a plurality of fused image blocks comprises:
and if the determination result is that the first frequency domain image block contains texture information and the corresponding second frequency domain image block does not contain the texture information, determining the first frequency domain image block as a fusion image block corresponding to the first fusion image and the second fusion image.
5. The method according to claim 3, wherein the fusing each of the first frequency-domain image blocks and each of the second frequency-domain image blocks according to the determination result to obtain the first fused image composed of a plurality of fused image blocks comprises:
and if the determination result is that the first frequency domain image block does not contain texture information and the corresponding second frequency domain image block does not contain the texture information, determining the first frequency domain image block as a fusion image block corresponding to the first fusion image and the second fusion image.
6. The method according to claim 3, wherein the fusing each of the first frequency-domain image blocks and each of the second frequency-domain image blocks according to the determination result to obtain the first fused image composed of a plurality of fused image blocks comprises:
if the determination result is that the first frequency domain image block contains texture information and the corresponding second frequency domain image block contains the texture information, determining the weight of the first frequency domain image block and the weight of the second frequency domain image block; the weight of the second frequency domain image block is greater than that of the first frequency domain image block;
and fusing the first frequency domain image block and the second frequency domain image block according to the weight of the first frequency domain image block and the weight of the second frequency domain image block to obtain fused image blocks corresponding to the first fused image and the second fused image.
7. The method according to claim 3, wherein the fusing each of the first frequency-domain image blocks and each of the second frequency-domain image blocks according to the determination result to obtain the first fused image composed of a plurality of fused image blocks comprises:
and if the determination result is that the first frequency domain image block does not contain texture information and the corresponding second frequency domain image block contains the texture information, determining the second frequency domain image block as a fusion image block corresponding to the first fusion image and the second fusion image.
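The four per-block rules of claims 4–7 amount to a small decision table. The sketch below assumes a simple texture test (energy of the non-DC frequency coefficients against a threshold) and example weights of 0.4/0.6; neither the test nor the weight values are specified by the claims, which only require that the second (near-infrared) block's weight be greater when both blocks contain texture.

```python
import numpy as np

def has_texture(block, threshold=1e3):
    # Hypothetical criterion: a block "contains texture information" when
    # the energy of its non-DC coefficients exceeds a threshold.
    ac = block.astype(np.float64).copy()
    ac[0, 0] = 0.0  # drop the DC (mean) coefficient
    return float(np.sum(np.abs(ac) ** 2)) > threshold

def fuse_blocks(vis_block, nir_block, w_vis=0.4, w_nir=0.6):
    """Apply the rules of claims 4-7 to one pair of frequency-domain
    blocks (visible light = first, near-infrared = second)."""
    t_vis, t_nir = has_texture(vis_block), has_texture(nir_block)
    if t_vis and not t_nir:          # claim 4: keep the visible-light block
        return vis_block
    if not t_vis and not t_nir:      # claim 5: also keep the visible-light block
        return vis_block
    if t_vis and t_nir:              # claim 6: weighted blend, NIR weighted higher
        assert w_nir > w_vis
        return w_vis * vis_block + w_nir * nir_block
    return nir_block                 # claim 7: keep the near-infrared block
```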
8. The method according to claim 7, wherein the fusing the near-infrared image and the brightness image to obtain a second fused image comprises:
acquiring a histogram of the near-infrared image and a histogram of the brightness image;
correcting the brightness of the near-infrared image according to the histogram of the near-infrared image to obtain a first corrected image;
correcting the brightness of the brightness image according to the histogram of the brightness image to obtain a second corrected image;
and fusing the first corrected image and the second corrected image to obtain a second fused image.
9. The method according to claim 8, wherein the correcting the brightness of the near-infrared image according to the histogram of the near-infrared image to obtain a first corrected image comprises:
determining a first target pixel point in the near-infrared image according to the brightness distribution indicated by the histogram of the near-infrared image; the brightness value corresponding to the first target pixel point is smaller than a preset brightness threshold value;
and enhancing the brightness of the first target pixel point according to the brightness threshold value to obtain the first corrected image.
10. The method according to claim 8, wherein the correcting the brightness of the brightness image according to the histogram of the brightness image to obtain a second corrected image comprises:
determining a second target pixel point in the brightness image according to the brightness distribution indicated by the histogram of the brightness image; the brightness value corresponding to the second target pixel point is greater than a preset brightness threshold value;
and weakening the brightness of the second target pixel point according to the brightness threshold value to obtain the second corrected image.
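The two corrections of claims 9 and 10 are symmetric: dark pixels of the near-infrared image are brightened, and over-bright pixels of the brightness image are attenuated, in both cases relative to a threshold derived from the histogram. The threshold values, the gain, and the attenuation factor below are illustrative assumptions; the claims only require enhancement or weakening "according to the brightness threshold".

```python
import numpy as np

def correct_dark(nir, thr=80.0, gain=1.5):
    # Claim 9 sketch: pixels darker than the threshold are brightened,
    # capped at the threshold. `gain` is an assumed enhancement factor.
    out = nir.astype(np.float64).copy()
    mask = out < thr
    out[mask] = np.minimum(out[mask] * gain, thr)
    return out

def correct_bright(luma, thr=200.0, atten=0.8):
    # Claim 10 sketch: pixels brighter than the threshold are weakened,
    # floored at the threshold. `atten` is an assumed attenuation factor.
    out = luma.astype(np.float64).copy()
    mask = out > thr
    out[mask] = np.maximum(out[mask] * atten, thr)
    return out
```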
11. The method according to claim 1, wherein obtaining a target fused image from the color image, the first fused image and the second fused image comprises:
fusing the first fused image and the second fused image to obtain a third fused image;
and performing color recovery on the third fused image according to the color image to obtain the target fused image.
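A minimal sketch of the color-recovery step of claim 11: treat the fused result as the new luminance plane and recombine it with the chrominance planes of the color image. The BT.601-style YCbCr coefficients are an assumption; the patent only states that color is recovered "according to the color image".

```python
import numpy as np

def color_restore(fused_luma, cb, cr):
    """Hypothetical color recovery: invert a BT.601-style YCbCr
    transform using the fused image as luminance (Y)."""
    r = fused_luma + 1.403 * (cr - 128.0)
    b = fused_luma + 1.773 * (cb - 128.0)
    # Recover green from Y = 0.299 R + 0.587 G + 0.114 B
    g = (fused_luma - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)
```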
12. The method according to claim 11, wherein said fusing the first fused image and the second fused image to obtain a third fused image comprises:
and fusing the first fusion image and the second fusion image by Poisson fusion to obtain the third fusion image.
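Poisson fusion blends two images in the gradient domain: inside a masked region, it solves the Poisson equation so that the result takes its gradients from one image while matching the other image at the region boundary. The claim does not specify a solver; the sketch below uses plain Jacobi iteration for illustration (production implementations use sparse linear solvers or OpenCV's seamless cloning).

```python
import numpy as np

def poisson_fuse(base, detail, mask, iters=200):
    """Minimal gradient-domain (Poisson) blend: inside `mask`, solve
    laplacian(f) = laplacian(detail) with `base` fixed as the Dirichlet
    boundary, via Jacobi iteration. `mask` must exclude the image border."""
    f = base.astype(np.float64).copy()
    # Discrete Laplacian of the detail image (the guidance field divergence).
    lap = (np.roll(detail, 1, 0) + np.roll(detail, -1, 0) +
           np.roll(detail, 1, 1) + np.roll(detail, -1, 1) - 4.0 * detail)
    for _ in range(iters):
        nb = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
              np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[mask] = (nb[mask] - lap[mask]) / 4.0  # Jacobi update
    return f
```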
13. The method according to claim 1, wherein the performing spatial conversion on the visible light image to obtain a color image, a brightness image and a first frequency domain image comprises:
performing color space conversion on the visible light image to obtain the color image and the brightness image;
and performing frequency domain space conversion on the visible light image to obtain the first frequency domain image.
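The two conversions of claim 13 can be sketched as follows: a YCbCr-style color-space split yields the brightness image (Y) and the color image (Cb/Cr), and a 2-D FFT of the brightness plane yields the frequency-domain image. The BT.601 luma weights and the choice of FFT are assumptions; the patent names neither a specific color space nor a specific frequency transform.

```python
import numpy as np

def spatial_convert(rgb):
    """Claim-13 sketch: split RGB into brightness (luma) and color
    (chroma) planes, then take the brightness plane to the frequency
    domain with a 2-D FFT."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luma = 0.299 * r + 0.587 * g + 0.114 * b     # brightness image
    cb = 128.0 + 0.564 * (b - luma)              # color image, chroma planes
    cr = 128.0 + 0.713 * (r - luma)
    freq = np.fft.fft2(luma)                     # first frequency-domain image
    return np.stack([cb, cr], axis=-1), luma, freq
```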
14. An image fusion apparatus, characterized in that the apparatus comprises:
the conversion module is used for carrying out spatial conversion on the visible light image to be processed to obtain a color image, a brightness image and a first frequency domain image, and carrying out frequency domain spatial conversion on the near-infrared image to be processed to obtain a second frequency domain image;
the first fusion module is used for fusing the first frequency domain image and the second frequency domain image to obtain a first fusion image, and fusing the near-infrared image and the brightness image to obtain a second fusion image;
and the second fusion module is used for obtaining a target fusion image according to the color image, the first fusion image and the second fusion image.
15. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 13 when executing the computer program.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 13.
CN202011263595.1A 2020-11-12 2020-11-12 Image fusion method and device, computer equipment and storage medium Pending CN112258442A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011263595.1A CN112258442A (en) 2020-11-12 2020-11-12 Image fusion method and device, computer equipment and storage medium
PCT/CN2021/117562 WO2022100250A1 (en) 2020-11-12 2021-09-10 Method and apparatus for image fusion, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011263595.1A CN112258442A (en) 2020-11-12 2020-11-12 Image fusion method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112258442A true CN112258442A (en) 2021-01-22

Family

ID=74265818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011263595.1A Pending CN112258442A (en) 2020-11-12 2020-11-12 Image fusion method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112258442A (en)
WO (1) WO2022100250A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022100250A1 (en) * 2020-11-12 2022-05-19 Oppo广东移动通信有限公司 Method and apparatus for image fusion, computer device and storage medium
WO2023134103A1 * 2022-01-14 2023-07-20 Wuxi Infisense Technology Co., Ltd. Image fusion method, device, and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106600572A (en) * 2016-12-12 2017-04-26 Changchun University of Science and Technology Adaptive low-illumination visible image and infrared image fusion method
CN109064436A (en) * 2018-07-10 2018-12-21 Xi'an Tianying Photoelectric Technology Co., Ltd. Image fusion method
CN110136183A (en) * 2018-02-09 2019-08-16 Huawei Technologies Co., Ltd. Image processing method and related device
US20190318463A1 (en) * 2016-12-27 2019-10-17 Zhejiang Dahua Technology Co., Ltd. Systems and methods for fusing infrared image and visible light image

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
KR20110133677A (en) * 2010-06-07 2011-12-14 Samsung Electronics Co., Ltd. Method and apparatus for processing 3D image
CN107909562B (en) * 2017-12-05 2021-06-08 Huazhong Institute of Electro-Optics (No. 717 Research Institute of China Shipbuilding Industry Corporation) Fast image fusion algorithm based on pixel level
CN112258442A (en) * 2020-11-12 2021-01-22 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image fusion method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2022100250A1 (en) 2022-05-19

Similar Documents

Publication Publication Date Title
JP6803899B2 (en) Image processing methods, image processing equipment and electronic devices
US10395341B2 (en) Panoramic image generation method and apparatus for user terminal
CN110008817B (en) Model training method, image processing method, device, electronic equipment and computer readable storage medium
WO2018176925A1 (en) Hdr image generation method and apparatus
CN111369644A (en) Face image makeup trial processing method and device, computer equipment and storage medium
WO2019052534A1 (en) Image stitching method and device, and storage medium
WO2022100250A1 (en) Method and apparatus for image fusion, computer device and storage medium
CN112233154A (en) Color difference elimination method, device and equipment for spliced image and readable storage medium
CN113962859B (en) Panorama generation method, device, equipment and medium
CN114418825B (en) Image processing method, image processing device, computer equipment and storage medium
CN113538271A (en) Image display method, image display device, electronic equipment and computer readable storage medium
CN112800276B (en) Video cover determining method, device, medium and equipment
US10123052B2 (en) Elimination of artifacts from lossy encoding of digital images by color channel expansion
Zheng et al. Joint residual pyramid for joint image super-resolution
CN115049572A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115937358A (en) Image processing method and device, electronic device and storage medium
CN107590790A Unzoned lens fringe region deblurring method based on symmetrical edge filling
CN113706390A (en) Image conversion model training method, image conversion method, device and medium
CN112106352A (en) Image processing method and device
CN112508801A (en) Image processing method and computing device
CN113034357B (en) Method and system for converting RAW format file, electronic device and storage medium
JP7458857B2 (en) Image processing device, image processing method and program
Shajahan et al. Direction oriented block based inpainting using morphological operations
WO2022024165A1 (en) Information processing device, information processing method, and recording medium
CN114626991A (en) Image stitching method, device, equipment, medium and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination