CN116091375A - Fusion region-optimized high-quality image fusion method and system - Google Patents

Fusion region-optimized high-quality image fusion method and system

Info

Publication number
CN116091375A
CN116091375A (application CN202310037999.6A)
Authority
CN
China
Prior art keywords
image
closed region
entropy
fusion
generate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310037999.6A
Other languages
Chinese (zh)
Inventor
别荣芳
孙运传
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Normal University
Original Assignee
Beijing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Normal University filed Critical Beijing Normal University
Priority to CN202310037999.6A priority Critical patent/CN116091375A/en
Publication of CN116091375A publication Critical patent/CN116091375A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fusion region-optimized high-quality image fusion method and system, and relates to the technical field of image processing. The method comprises: acquiring an image to be fused and a target image; performing multi-scale image enhancement processing on the target image to generate a preprocessed image; performing edge detection on the preprocessed image with the HED edge detection method to generate a plurality of closed regions; calculating the entropy value of each closed region to generate a plurality of closed region entropy values; screening these entropy values to obtain the lowest closed region entropy value, so that the fusion region is determined by region entropy and is selected more accurately; and finally fusing the image to be fused into the closed region corresponding to the lowest closed region entropy value to generate the final fused image, thereby improving image fusion quality.

Description

Fusion region-optimized high-quality image fusion method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a fusion region-optimized high-quality image fusion method and system.
Background
Image fusion is a classical technique in the field of image processing that plays an increasingly important role in the digital media era: a target object extracted from another image is fused into a target image to form a new image with richer semantics.
Although traditional image fusion methods can produce acceptable fused images, they cannot accurately select the region to be fused, which reduces the quality of image fusion. In general, the ideal region to be fused is a smooth region without abrupt pixel changes, a characteristic that classical methods often fail to take fully into account, so the quality of image fusion is reduced.
Disclosure of Invention
The invention aims to provide a fusion region-optimized high-quality image fusion method and system, to solve the problem in the prior art that the region to be fused cannot be accurately selected, which reduces the quality of image fusion.
In a first aspect, an embodiment of the present application provides a fusion region-optimized high-quality image fusion method, including the steps of:
acquiring an image to be fused and a target image;
performing multi-scale image enhancement processing on the target image to generate a preprocessed image;
performing edge detection on the preprocessed image by using an HED edge detection method to generate a plurality of closed areas;
calculating the region entropy value of each closed region to generate a plurality of closed region entropy values;
screening the multiple closed region entropy values to obtain a lowest closed region entropy value;
and fusing the image to be fused to a closed region corresponding to the entropy value of the lowest closed region, and generating a final fused image.
In the above implementation, the image to be fused and the target image are first acquired. The target image then undergoes multi-scale image enhancement processing to generate a preprocessed image; processing the image at multiple scales makes the generated preprocessed image clearer. Next, edge detection is performed on the preprocessed image with the HED edge detection method to generate a plurality of closed regions; because HED continuously integrates and learns while producing its output, it yields an increasingly accurate edge prediction map, so the edges, and therefore the closed regions, are clearer and more accurate. Region entropy is then calculated for each closed region to generate a plurality of closed region entropy values, and these are screened to obtain the lowest closed region entropy value, so that the fusion region is determined by region entropy and is selected more accurately. Finally, the image to be fused is fused into the closed region corresponding to the lowest entropy value to generate the final fused image, thereby improving image fusion quality.
Based on the first aspect, in some embodiments of the present invention, the step of performing multi-scale image enhancement processing on the target image to generate the preprocessed image includes the steps of:
carrying out Gaussian blur processing on the target image in a plurality of different scales to generate a plurality of blurred images;
subtracting each blurred image from the target image to generate a plurality of detail information;
and respectively weighting the detail information into the target image to generate a preprocessed image.
Based on the first aspect, in some embodiments of the present invention, the step of performing the region entropy value calculation on each closed region to generate a plurality of closed region entropy values includes the steps of:
calculating the gray value of each pixel point in each closed region by adopting a gray algorithm;
calculating the probability of each pixel gray value in the corresponding closed region;
and calculating by using an image entropy calculation formula according to the probability of each pixel gray value in the corresponding closed region, and generating a plurality of closed region entropy values.
Based on the first aspect, in some embodiments of the present invention, the image entropy calculation formula is:
H = -Σ_i p_i · log2(p_i)
wherein p_i is the probability of the i-th pixel gray value in the corresponding closed region, and H is the closed region entropy.
Based on the first aspect, in some embodiments of the present invention, the step of filtering the plurality of closed region entropy values to obtain a lowest closed region entropy value includes the following steps:
ordering the entropy values of each closed region according to the entropy value to obtain a closed region entropy value list;
and extracting the closed region entropy value with the minimum entropy value from the closed region entropy value list as the lowest closed region entropy value.
In a second aspect, embodiments of the present application provide a fusion region-optimized high-quality image fusion system, including:
the image acquisition module is used for acquiring an image to be fused and a target image;
the preprocessing module is used for performing multi-scale image enhancement processing on the target image to generate a preprocessed image;
the edge detection module is used for carrying out edge detection on the preprocessed image by using an HED edge detection method to generate a plurality of closed areas;
the entropy calculating module is used for calculating the entropy of each closed region to generate a plurality of closed region entropy values;
the screening module is used for screening the multiple closed region entropy values to obtain the lowest closed region entropy value;
and the image fusion module is used for fusing the image to be fused to the closed region corresponding to the entropy value of the lowest closed region to generate a final fused image.
Based on the second aspect, in some embodiments of the invention, the preprocessing module includes:
the blurring processing unit is used for performing Gaussian blur processing on the target image at a plurality of different scales to generate a plurality of blurred images;
the detail generation unit is used for subtracting each blurred image from the target image to generate a plurality of pieces of detail information;
and the image enhancement unit is used for weighting the plurality of pieces of detail information into the target image to generate a preprocessed image.
Based on the second aspect, in some embodiments of the present invention, the entropy value calculation module includes:
a gray value calculation unit for calculating the gray value of each pixel point in each closed region by using a gray algorithm;
a gray value probability calculation unit for calculating the probability of each pixel gray value in the corresponding closed region;
and the image entropy calculation unit is used for calculating according to the probability of each pixel gray value in the corresponding closed region by using an image entropy calculation formula to generate a plurality of closed region entropy values.
In this implementation, the image acquisition module acquires the image to be fused and the target image. The preprocessing module then performs multi-scale image enhancement processing on the target image to generate a preprocessed image; processing at multiple scales makes the preprocessed image clearer. The edge detection module performs edge detection on the preprocessed image with the HED edge detection method to generate a plurality of closed regions; because HED continuously integrates and learns while producing its output, its edge prediction map, and therefore the closed regions, are clearer and more accurate. The entropy calculation module calculates the region entropy of each closed region to generate a plurality of closed region entropy values, and the screening module screens these values to obtain the lowest closed region entropy value, so that the fusion region is determined by region entropy and is selected more accurately. Finally, the image fusion module fuses the image to be fused into the closed region corresponding to the lowest entropy value to generate the final fused image, improving image fusion quality.
In a third aspect, embodiments of the present application provide an electronic device comprising a memory for storing one or more programs and a processor; the method as described in any one of the first aspects is implemented when the one or more programs are executed by the processor.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the first aspects above.
The embodiment of the invention has at least the following advantages or beneficial effects:
the embodiment of the invention provides a fusion area optimized high-quality image fusion method and a fusion area optimized high-quality image fusion system, which are implemented by acquiring an image to be fused and a target image; then, carrying out multi-scale image enhancement processing on the target image to generate a preprocessed image, wherein the preprocessed image can be processed from multiple angles by adopting the multi-scale image enhancement processing, so that the generated preprocessed image is clearer; then, the HED edge detection method is used for carrying out edge detection on the preprocessed image to generate a plurality of closed areas, and the HED edge detection method is used for obtaining a more accurate edge prediction graph through continuous integration and learning in the generated output process, so that more accurate edges can be obtained, and the closed areas are clearer and more accurate; then, calculating the entropy values of all the closed areas to generate a plurality of closed area entropy values; then screening the multiple closed region entropy values to obtain the lowest closed region entropy value, so that a fusion region is determined through the region entropy value, and the fusion region is more accurate; and finally, fusing the image to be fused to a closed region corresponding to the entropy value of the lowest closed region to generate a final fused image, thereby improving the image fusion quality.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a fusion region-optimized high-quality image fusion method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a fusion region-optimized high-quality image fusion system according to an embodiment of the present invention;
fig. 3 is a block diagram of an electronic device according to an embodiment of the present invention.
Icon: 110-image acquisition module; 120-preprocessing module; 121-blurring processing unit; 122-detail generation unit; 123-image enhancement unit; 130-edge detection module; 140-entropy calculation module; 141-gray value calculation unit; 142-gray value probability calculation unit; 143-image entropy calculation unit; 150-screening module; 151-sorting unit; 152-region entropy extraction unit; 160-image fusion module; 101-memory; 102-processor; 103-communication interface.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Examples
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The various embodiments and features of the embodiments described below may be combined with one another without conflict.
Referring to fig. 1, fig. 1 is a flowchart of a fusion region-optimized high-quality image fusion method according to an embodiment of the present invention. The fusion region-optimized high-quality image fusion method comprises the following steps:
step S110: acquiring an image to be fused and a target image; the image to be fused can be extracted from other images or can be a complete image. The target image is an image that needs to be subjected to image processing. For example: if the image A needs to be fused into the image B, the image A is the image to be fused, and the image B is the target image.
Step S120: performing multi-scale image enhancement processing on the target image to generate a preprocessed image; enhancing the target image improves the contrast of the image and facilitates image fusion. Multi-scale image enhancement processes the image from multiple angles, so that the generated preprocessed image is clearer. The multi-scale image enhancement processing comprises the following steps:
First, Gaussian blur processing is applied to the target image at a plurality of different scales to generate a plurality of blurred images. The scales can be set according to the actual situation; typically three different scales are used, and Gaussian blur at each scale yields one blurred image. Gaussian blurring reduces image noise and suppresses fine detail, and performing it separately at several scales does so from several angles, which facilitates the subsequent image comparison. Gaussian blur itself is prior art and is therefore not described in detail here. For example: the target image A is blurred at three different scales to obtain blurred image A1, blurred image A2 and blurred image A3.
Then, each blurred image is subtracted from the target image to generate a plurality of pieces of detail information; that is, each blurred image is subtracted from the target image pixel by pixel, giving one piece of detail information per blurred image. Subtraction exposes where each blurred image differs from the target image, and because the blurred images were produced at different scales, the resulting detail information reflects the difference between the blurred image and the target image at each scale. For example: with blurred images A1, A2 and A3 and target image A, subtracting A1 from A gives detail information D1, subtracting A2 from A gives detail information D2, and subtracting A3 from A gives detail information D3.
Finally, the pieces of detail information are weighted into the target image to generate the preprocessed image; that is, each piece of detail information is added to the target image according to its weight. The weights can be preset and may be equal or unequal: when all weights are 1, the detail information is added directly to the target image; otherwise each piece of detail information is multiplied by its weight before being added. Weighting the detail information into the target image gives the preprocessed image more detail than the target image, thereby enhancing it. For example: if D1, D2 and D3 carry equal weights of 1, the preprocessed image is target image A + D1 + D2 + D3; if their weights are 0.5, 0.2 and 0.3 respectively, the preprocessed image is target image A + D1 × 0.5 + D2 × 0.2 + D3 × 0.3.
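The enhancement step described above can be sketched in NumPy as follows. This is an illustrative sketch, not the patent's implementation: the sigma values and weights are assumptions, and the separable Gaussian blur is a plain NumPy stand-in for any standard implementation.

```python
import numpy as np

def _gaussian_kernel(sigma):
    # normalized 1-D Gaussian kernel, truncated at 3 sigma
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # separable blur: convolve rows, then columns, with edge padding
    k = _gaussian_kernel(sigma)
    pad = len(k) // 2
    out = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    out = np.apply_along_axis(np.convolve, 1, out, k, mode="valid")
    out = np.apply_along_axis(np.convolve, 0, out, k, mode="valid")
    return out

def multiscale_enhance(img, sigmas=(1.0, 2.0, 4.0), weights=(0.5, 0.2, 0.3)):
    # preprocessed = target + sum_i w_i * (target - blur_i(target))
    img = np.asarray(img, dtype=float)
    out = img.copy()
    for sigma, w in zip(sigmas, weights):
        out += w * (img - gaussian_blur(img, sigma))  # weighted detail layer
    return np.clip(out, 0, 255)
```

A perfectly flat image has no detail at any scale, so it passes through unchanged; a textured image has its high-frequency content amplified according to the weights.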
Step S130: performing edge detection on the preprocessed image with the HED edge detection method to generate a plurality of closed regions. Edge detection here means using the HED (holistically-nested edge detection) method to identify points in the digital image where the brightness changes sharply; connecting these points divides the preprocessed image into a plurality of closed regions. HED continuously integrates and learns while producing its output, yielding an increasingly accurate edge prediction map, so the edges, and therefore the closed regions, are clearer and more accurate. The HED edge detection method is prior art and is not described in detail here.
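The patent does not spell out how the edge map becomes closed regions; one plausible reading, sketched below under that assumption, is to take a binary edge map (e.g. HED's thresholded output) and label each 4-connected non-edge component as one closed region. HED itself is a deep network and is not reimplemented here.

```python
import numpy as np
from collections import deque

def closed_regions_from_edges(edge_map):
    # label 4-connected non-edge components; each label is one closed region
    edges = np.asarray(edge_map, dtype=bool)
    h, w = edges.shape
    labels = np.full(edges.shape, -1, dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if edges[sy, sx] or labels[sy, sx] != -1:
                continue
            # breadth-first flood fill from an unlabeled non-edge pixel
            q = deque([(sy, sx)])
            labels[sy, sx] = current
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not edges[ny, nx] \
                            and labels[ny, nx] == -1:
                        labels[ny, nx] = current
                        q.append((ny, nx))
            current += 1
    return labels, current
```

For example, a single vertical edge line splits the image into two labeled regions, one on each side.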
Step S140: calculating the region entropy value of each closed region to generate a plurality of closed region entropy values; region entropy calculation means that the image entropy of each closed region is calculated separately, and the larger the image entropy value, the clearer and richer in detail the image of that region. The region entropy calculation comprises the following steps:
First, the gray value of each pixel point in each closed region is calculated with a grayscale algorithm. The gray value, which represents the brightness of each pixel, is divided into a number of levels between white and black according to a logarithmic relationship, called "gray levels", typically ranging from 0 to 255. The grayscale algorithm is prior art and can be computed with existing software, so it is not described here.
Then, the probability of each pixel gray value in the corresponding closed region is calculated: for each gray value, count how many pixels in the closed region have that value, and divide by the number of pixels in the region. Doing this for every gray value gives the probability of each pixel gray value in the corresponding closed region; the calculation can be done with probability statistics. For example: closed region A1 contains 100 pixels, of which 10 have gray value 99, so the probability of gray value 99 in closed region A1 is 0.1; closed region A2 contains 200 pixels, of which 10 have gray value 99, so the probability of gray value 99 in closed region A2 is 0.05.
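The probability in this step can be computed as follows; a minimal NumPy sketch whose region contents simply mirror the 100-pixel example above.

```python
import numpy as np

def gray_value_probability(region, gray_value):
    # fraction of pixels in the closed region whose gray value equals gray_value
    region = np.asarray(region)
    return np.count_nonzero(region == gray_value) / region.size

# mirror the example: a 100-pixel region with 10 pixels of gray value 99
region_a1 = np.zeros(100, dtype=np.uint8)
region_a1[:10] = 99
p = gray_value_probability(region_a1, 99)  # 0.1
```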
Finally, calculation is performed with an image entropy calculation formula according to the probability of each pixel gray value in the corresponding closed region, generating a plurality of closed region entropy values; that is, the probabilities of the pixel gray values in each closed region are substituted into the image entropy calculation formula, thereby obtaining the entropy value of each closed region. The image entropy calculation formula is as follows:
H = -Σ_i p_i · log2(p_i)
wherein p_i is the probability of the i-th pixel gray value in the corresponding closed region, and H is the closed region entropy.
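Combining the three sub-steps, the closed-region entropy can be sketched in NumPy; an illustrative sketch, not the patent's implementation, with a hypothetical function name.

```python
import numpy as np

def region_entropy(gray_region):
    # H = -sum_i p_i * log2(p_i), over the gray values present in the region
    values, counts = np.unique(np.asarray(gray_region), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() + 0.0)  # + 0.0 normalizes -0.0 to 0.0

h_flat = region_entropy(np.full((10, 10), 99))       # 0.0: a flat region carries no information
h_two = region_entropy(np.array([0, 0, 255, 255]))   # 1.0: two equally likely gray values = 1 bit
```

Note that a gray value absent from the region contributes no term, so the 0 · log2(0) case never arises.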
Step S150: screening the multiple closed region entropy values to obtain the lowest closed region entropy value; screening means that the closed region entropy values are compared so that the smallest entropy value is found and taken as the lowest closed region entropy value. The screening can be achieved by the following steps:
the method comprises the steps of firstly, sorting entropy values of all closed areas according to the entropy values to obtain a closed area entropy value list; the sorting may be performed in order of the entropy values from small to large, or in order of the entropy values from large to small. The obtained closed region entropy list is an entropy list arranged according to ascending and descending order.
In the second step, the closed region entropy value with the smallest entropy is extracted from the closed region entropy value list as the lowest closed region entropy value. For example: if the obtained closed region entropy value list is 0.2, 0.3, 0.4, 0.5 and 0.8, the lowest closed region entropy value is 0.2.
Step S160: and fusing the image to be fused to a closed region corresponding to the entropy value of the lowest closed region, and generating a final fused image. The above fusion process refers to finding the closed region corresponding to the entropy value of the lowest closed region, and then fusing the image to be fused to the closed region by adopting an image fusion technology, so as to generate a new fused image as a final fused image. The above image fusion technique belongs to the prior art, and therefore, will not be described in detail herein. For example: the target image A comprises closed areas E1, E2, E3 and E4, wherein the closed area corresponding to the entropy value of the lowest closed area is the closed area E1, and the image to be fused is fused into the closed area E1 to obtain a final fused image.
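Steps S140 to S160 can be combined into a small end-to-end sketch. This is illustrative only: the closed-region masks are assumed to come from the edge-detection step, and the "fusion" here is a plain paste into the selected region's bounding box, standing in for whatever image fusion technique (e.g. blending) is actually applied.

```python
import numpy as np

def region_entropy(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def fuse_into_lowest_entropy_region(target, to_fuse, region_masks):
    # S140/S150: compute entropy per closed region, pick the minimum
    best = int(np.argmin([region_entropy(target[m]) for m in region_masks]))
    # S160: paste to_fuse at the chosen region's bounding box (cropped to fit)
    ys, xs = np.nonzero(region_masks[best])
    h = min(to_fuse.shape[0], ys.max() - ys.min() + 1)
    w = min(to_fuse.shape[1], xs.max() - xs.min() + 1)
    fused = target.copy()
    fused[ys.min():ys.min() + h, xs.min():xs.min() + w] = to_fuse[:h, :w]
    return fused, best
```

In the E1-E4 example above, the mask whose pixels are most uniform (lowest entropy) plays the role of closed region E1 and receives the image to be fused.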
In the above implementation, the image to be fused and the target image are first acquired. The target image then undergoes multi-scale image enhancement processing to generate a preprocessed image; processing the image at multiple scales makes the generated preprocessed image clearer. Next, edge detection is performed on the preprocessed image with the HED edge detection method to generate a plurality of closed regions; because HED continuously integrates and learns while producing its output, it yields an increasingly accurate edge prediction map, so the edges, and therefore the closed regions, are clearer and more accurate. Region entropy is then calculated for each closed region to generate a plurality of closed region entropy values, and these are screened to obtain the lowest closed region entropy value, so that the fusion region is determined by region entropy and is selected more accurately. Finally, the image to be fused is fused into the closed region corresponding to the lowest entropy value to generate the final fused image, thereby improving image fusion quality.
Based on the same inventive concept, the present invention further provides a fusion region-optimized high-quality image fusion system; please refer to fig. 2, which is a block diagram of a fusion region-optimized high-quality image fusion system according to an embodiment of the present invention. The fusion region-optimized high-quality image fusion system comprises:
an image acquisition module 110, configured to acquire an image to be fused and a target image;
a preprocessing module 120, configured to perform multi-scale image enhancement processing on the target image, and generate a preprocessed image;
an edge detection module 130, configured to perform edge detection on the preprocessed image by using an HED edge detection method, to generate a plurality of closed regions;
the entropy calculating module 140 is configured to perform area entropy calculation on each closed area, and generate a plurality of closed area entropy values;
the screening module 150 is configured to screen the multiple closed region entropy values to obtain a lowest closed region entropy value;
the image fusion module 160 is configured to fuse the image to be fused into the closed region corresponding to the lowest closed region entropy value, and generate a final fused image.
In the above implementation, the image acquisition module 110 acquires the image to be fused and the target image. The preprocessing module 120 then performs multi-scale image enhancement processing on the target image to generate a preprocessed image; processing at multiple scales makes the preprocessed image clearer. The edge detection module 130 performs edge detection on the preprocessed image with the HED edge detection method to generate a plurality of closed regions; because HED continuously integrates and learns while producing its output, its edge prediction map, and therefore the closed regions, are clearer and more accurate. The entropy calculation module 140 calculates the region entropy of each closed region to generate a plurality of closed region entropy values, and the screening module 150 screens these values to obtain the lowest closed region entropy value, so that the fusion region is determined by region entropy and is selected more accurately. Finally, the image fusion module 160 fuses the image to be fused into the closed region corresponding to the lowest entropy value to generate the final fused image, improving image fusion quality.
Wherein, the preprocessing module 120 includes:
a blurring processing unit 121, configured to perform gaussian blurring processing on a target image in a plurality of different scales, and generate a plurality of blurred images;
a detail generation unit 122, configured to perform subtraction operation on each blurred image and the target image, and generate a plurality of detail information;
an image reinforcement unit 123 for weighting the plurality of detail information into the target image, respectively, to generate a preprocessed image.
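The three units above describe classic multi-scale unsharp masking: blur the target image at several scales, subtract each blurred copy to obtain a detail layer, and add the weighted detail layers back. A rough NumPy sketch under those assumptions (the sigmas and weights are illustrative; the patent does not fix them):

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel, normalised to sum to 1
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # separable blur: convolve rows, then columns, with edge padding
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    padded = np.pad(img, pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def multiscale_enhance(img, sigmas=(1.0, 2.0, 4.0), weights=(0.5, 0.3, 0.2)):
    # blur at several scales, extract detail layers, weight them back in
    img = img.astype(np.float64)
    out = img.copy()
    for sigma, w in zip(sigmas, weights):
        detail = img - gaussian_blur(img, sigma)  # detail layer at this scale
        out += w * detail
    return np.clip(out, 0, 255)
```

Blurring at several sigmas is what gives the "multiple angles" of processing described above: small sigmas sharpen fine texture, large sigmas boost coarse structure.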
Wherein, the entropy calculating module 140 includes:
a gray value calculating unit 141 for calculating a gray value of each pixel point in each closed region using a gray algorithm;
a gray value probability calculation unit 142 for calculating the probability of each pixel gray value in the corresponding closed region;
the image entropy calculating unit 143 is configured to calculate, using an image entropy calculation formula and the probability of each pixel gray value in the corresponding closed region, a plurality of closed region entropy values.
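Units 141 to 143 amount to computing the Shannon entropy of the grey-level histogram inside each closed region. A minimal sketch assuming 8-bit grey values (`region_entropy` is an illustrative name):

```python
import numpy as np

def region_entropy(gray_region):
    # Shannon entropy of the grey-level distribution in one closed region:
    # H = -sum_i p_i * log2(p_i), where p_i is the probability of grey value i
    hist = np.bincount(gray_region.ravel().astype(np.int64), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A region of nearly constant grey level has entropy close to 0, which is why the method below selects the lowest-entropy region: it is the flattest, least informative place to paste the fused image.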
Wherein, screening module 150 includes:
a sorting unit 151, configured to sort the entropy values of each closed region according to the entropy value, to obtain a closed region entropy value list;
the region entropy value extracting unit 152 is configured to extract, from the closed region entropy value list, a closed region entropy value with the smallest entropy value as the lowest closed region entropy value.
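Units 151 and 152 reduce to sorting the per-region entropy values and taking the head of the list. A one-function sketch (names are illustrative):

```python
def lowest_entropy_region(region_entropies):
    # region_entropies: {region_id: entropy value}
    # sort ascending by entropy (the "closed region entropy value list")
    # and return the head: (region_id, lowest entropy value)
    ranked = sorted(region_entropies.items(), key=lambda kv: kv[1])
    return ranked[0]
```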
Referring to fig. 3, fig. 3 is a schematic block diagram of an electronic device according to an embodiment of the present application. The electronic device comprises a memory 101, a processor 102 and a communication interface 103, wherein the memory 101, the processor 102 and the communication interface 103 are electrically connected with each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules, such as program instructions/modules corresponding to a fusion area preferred high quality image fusion system provided in the embodiments of the present application, and the processor 102 executes the software programs and modules stored in the memory 101, thereby performing various functional applications and data processing. The communication interface 103 may be used for communication of signaling or data with other node devices.
Wherein the memory 101 may be, but is not limited to, random access memory (Random Access Memory, RAM), read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable Read-Only Memory, PROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), and the like.
The processor 102 may be an integrated circuit chip with signal processing capabilities. The processor 102 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), and the like; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component.
It will be appreciated that the configuration shown in fig. 3 is merely illustrative, and that the electronic device may also include more or fewer components than shown in fig. 3, or have a different configuration than shown in fig. 3. The components shown in fig. 3 may be implemented in hardware, software, or a combination thereof.
In the embodiments provided in this application, it should be understood that the disclosed systems and methods may be implemented in other ways as well. The system embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The above functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
In summary, the fusion region-optimized high-quality image fusion method and system provided by the embodiments of the present application acquire an image to be fused and a target image; perform multi-scale image enhancement on the target image to generate a preprocessed image, the multi-scale enhancement processing the image at several scales so that the preprocessed image is clearer; perform edge detection on the preprocessed image with the HED method to generate a plurality of closed regions, HED yielding a more accurate edge prediction map through progressive integration and learning of its side outputs, so that the closed regions are clearer and more precise; compute the entropy value of each closed region to generate a plurality of closed region entropy values; screen those entropy values to obtain the lowest closed region entropy value, so that the fusion region is determined by region entropy and is therefore more accurate; and finally fuse the image to be fused into the closed region corresponding to the lowest closed region entropy value to generate the final fused image, thereby improving image fusion quality.
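The final step, pasting the image to be fused into the selected lowest-entropy closed region, might be sketched as follows. The nearest-neighbour resizing into the region's bounding box is an illustrative assumption; the patent does not specify the resizing or blending scheme:

```python
import numpy as np

def fuse_into_region(target, patch, region_mask):
    # paste `patch` into the bounding box of the selected closed region;
    # region_mask: bool array, True inside the chosen lowest-entropy region
    ys, xs = np.nonzero(region_mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    h, w = y1 - y0, x1 - x0
    # nearest-neighbour resize of the patch to the bounding-box size
    ry = np.arange(h) * patch.shape[0] // h
    rx = np.arange(w) * patch.shape[1] // w
    resized = patch[np.ix_(ry, rx)]
    out = target.copy()
    box = out[y0:y1, x0:x1]
    mask = region_mask[y0:y1, x0:x1]
    box[mask] = resized[mask]  # overwrite only pixels inside the region
    return out
```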
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and variations may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A fusion region-preferred high quality image fusion method, comprising the steps of:
acquiring an image to be fused and a target image;
performing multi-scale image enhancement processing on the target image to generate a preprocessed image;
performing edge detection on the preprocessed image by using an HED edge detection method to generate a plurality of closed areas;
calculating the region entropy value of each closed region to generate a plurality of closed region entropy values;
screening the multiple closed region entropy values to obtain a lowest closed region entropy value;
and fusing the image to be fused to a closed region corresponding to the entropy value of the lowest closed region, and generating a final fused image.
2. The fusion area preferred high quality image fusion method according to claim 1, wherein the step of subjecting the target image to multi-scale image enhancement processing to generate the preprocessed image comprises the steps of:
carrying out Gaussian blur processing on the target image in a plurality of different scales to generate a plurality of blurred images;
subtracting each blurred image from the target image to generate a plurality of detail information;
and respectively weighting the detail information into the target image to generate a preprocessed image.
3. The fusion region-preferred high-quality image fusion method according to claim 1, wherein the step of performing region entropy value calculation on each closed region to generate a plurality of closed region entropy values comprises the steps of:
calculating the gray value of each pixel point in each closed region by adopting a gray algorithm;
calculating the probability of each pixel gray value in the corresponding closed region;
and calculating by using an image entropy calculation formula according to the probability of each pixel gray value in the corresponding closed region, and generating a plurality of closed region entropy values.
4. A fusion region preferred high quality image fusion method according to claim 3, wherein the image entropy calculation formula is:

H = -Σᵢ pᵢ log₂ pᵢ

wherein pᵢ is the probability of each pixel gray value in the corresponding closed region, and H is the closed region entropy value.
5. The fusion region-preferred high-quality image fusion method according to claim 1, wherein the step of screening the plurality of closed region entropy values to obtain a lowest closed region entropy value comprises the steps of:
ordering the entropy values of each closed region according to the entropy value to obtain a closed region entropy value list;
and extracting the closed region entropy value with the minimum entropy value from the closed region entropy value list as the lowest closed region entropy value.
6. A fusion area preferred high quality image fusion system comprising:
the image acquisition module is used for acquiring an image to be fused and a target image;
the preprocessing module is used for performing multi-scale image enhancement processing on the target image to generate a preprocessed image;
the edge detection module is used for carrying out edge detection on the preprocessed image by using an HED edge detection method to generate a plurality of closed areas;
the entropy calculating module is used for calculating the entropy of each closed region to generate a plurality of closed region entropy values;
the screening module is used for screening the multiple closed region entropy values to obtain the lowest closed region entropy value;
and the image fusion module is used for fusing the image to be fused to the closed region corresponding to the entropy value of the lowest closed region to generate a final fused image.
7. The fusion area preferred high quality image fusion system of claim 6, wherein the preprocessing module comprises:
the fuzzy processing unit is used for carrying out Gaussian fuzzy processing on the target image in a plurality of different scales to generate a plurality of fuzzy images;
the detail generation unit is used for respectively carrying out subtraction operation on each blurred image and the target image to generate a plurality of detail information;
and the image enhancement unit is used for respectively weighting the detail information into the target image and generating a preprocessed image.
8. The fusion area preferred high quality image fusion system of claim 6, wherein the entropy calculation module comprises:
a gray value calculation unit for calculating the gray value of each pixel point in each closed region by using a gray algorithm;
a gray value probability calculation unit for calculating the probability of each pixel gray value in the corresponding closed region;
and the image entropy calculation unit is used for calculating according to the probability of each pixel gray value in the corresponding closed region by using an image entropy calculation formula to generate a plurality of closed region entropy values.
9. An electronic device, comprising:
a memory for storing one or more programs;
a processor;
wherein the one or more programs, when executed by the processor, implement the method according to any one of claims 1-5.
10. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any of claims 1-5.
CN202310037999.6A 2023-01-07 2023-01-07 Fusion region-optimized high-quality image fusion method and system Pending CN116091375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310037999.6A CN116091375A (en) 2023-01-07 2023-01-07 Fusion region-optimized high-quality image fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310037999.6A CN116091375A (en) 2023-01-07 2023-01-07 Fusion region-optimized high-quality image fusion method and system

Publications (1)

Publication Number Publication Date
CN116091375A true CN116091375A (en) 2023-05-09

Family

ID=86213466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310037999.6A Pending CN116091375A (en) 2023-01-07 2023-01-07 Fusion region-optimized high-quality image fusion method and system

Country Status (1)

Country Link
CN (1) CN116091375A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116863492A (en) * 2023-09-04 2023-10-10 山东正禾大教育科技有限公司 Mobile digital publishing system
CN116863492B (en) * 2023-09-04 2023-11-21 山东正禾大教育科技有限公司 Mobile digital publishing system

Similar Documents

Publication Publication Date Title
CN108229526B (en) Network training method, network training device, image processing method, image processing device, storage medium and electronic equipment
CN106934397B (en) Image processing method and device and electronic equipment
Chen et al. PMHLD: Patch map-based hybrid learning DehazeNet for single image haze removal
Chen et al. Structure-adaptive fuzzy estimation for random-valued impulse noise suppression
US9152926B2 (en) Systems, methods, and media for updating a classifier
CN114022790B (en) Cloud layer detection and image compression method and device in remote sensing image and storage medium
CN114241484B (en) Social network-oriented image big data accurate retrieval method and system
CN114140683A (en) Aerial image target detection method, equipment and medium
CN111696064B (en) Image processing method, device, electronic equipment and computer readable medium
CN111882565B (en) Image binarization method, device, equipment and storage medium
CN114581434A (en) Pathological image processing method based on deep learning segmentation model and electronic equipment
CN116091375A (en) Fusion region-optimized high-quality image fusion method and system
Freitas et al. Blind image quality assessment based on multiscale salient local binary patterns
Srinivas et al. Remote sensing image segmentation using OTSU algorithm
CN111325671B (en) Network training method and device, image processing method and electronic equipment
CN111259680A (en) Two-dimensional code image binarization processing method and device
CN113344907B (en) Image detection method and device
KR100943595B1 (en) Device and method for blurring decision of image
HosseinKhani et al. Real‐time removal of impulse noise from MR images for radiosurgery applications
CN115829980B (en) Image recognition method, device and equipment for fundus photo and storage medium
Xu et al. Features based spatial and temporal blotch detection for archive video restoration
CN116543373A (en) Block chain-based live video big data intelligent analysis and optimization method and system
CN113537253B (en) Infrared image target detection method, device, computing equipment and storage medium
CN111340044A (en) Image processing method, image processing device, electronic equipment and storage medium
Lv et al. Low‐light image haze removal with light segmentation and nonlinear image depth estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination