CN112241940A - Method and device for fusing multiple multi-focus images - Google Patents
- Publication number
- CN112241940A (application CN202011036730.9A)
- Authority
- CN
- China
- Legal status: Granted (the status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Classifications
- G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction (G: Physics; G06: Computing; G06T: Image data processing or generation, in general)
- G06N 3/045 — Combinations of networks (G06N: Computing arrangements based on specific computational models; G06N 3/02: Neural networks; G06N 3/04: Architecture, e.g. interconnection topology)
- G06T 2207/20084 — Artificial neural networks [ANN] (G06T 2207/00: Indexing scheme for image analysis or image enhancement; G06T 2207/20: Special algorithmic details)
- G06T 2207/20221 — Image fusion; image merging (G06T 2207/20212: Image combination)
Abstract
The invention discloses a method for fusing multiple multi-focus images, belonging to the technical fields of image processing and artificial intelligence. The method comprises the following steps: extracting the features of all images in an image set to be fused with an image feature extraction algorithm, and selecting the features of any two images as a first-level baseline feature and a second-level baseline feature; fusing the first-level baseline features with the features of each remaining image in the set using an image feature fusion algorithm, forming multiple focus level maps; correcting the remaining focus level maps with a correction algorithm, taking the focus level map formed from the first-level and second-level baseline features as reference, and stitching the corrected maps into a focus level set; converting the focus level set into a decision map with a decision algorithm; and, based on the decision map, fusing all images in the set into a single final fusion result with an image pixel fusion algorithm. The method and device improve the efficiency of fusing multiple multi-focus images.
Description
Technical Field
The invention relates to the technical field of image processing and artificial intelligence, in particular to a method and a device for fusing multiple multi-focus images.
Background
Multi-focus image fusion is an important research branch of image analysis and fusion, with applications in scientific research, the military, medicine, digital photography, microstructure analysis, and other fields. Because of the inherent characteristics of optical sensors, a single exposure can render sharply only the targets within a certain range around the focal plane, while out-of-focus regions appear blurred; no physical operation can bring all objects at widely different depths into focus in one shot. Multi-focus image fusion methods based on image processing are therefore commonly used to combine the sharp regions of several images into a single fully focused image.
With breakthroughs in artificial intelligence theory and computer vision, deep learning has gradually become the mainstream approach to multi-focus image fusion. Most methods use a feature extraction branch to extract high-dimensional features from each image, then a feature fusion module to fuse those features and output the final result.
However, most current multi-focus image processing methods consider only the two-image fusion scenario and fuse multiple images with an intuitive pairwise strategy: the 1st and 2nd images are fused first, the result is then fused with the 3rd image, and so on. Practical applications often involve dozens of images, and this approach severely degrades processing efficiency.
Therefore, a method capable of effectively improving the multi-image fusion efficiency is urgently needed in the field of multi-focus image fusion.
Disclosure of Invention
The invention provides a method and a device for fusing multiple multi-focus images, which can improve the efficiency of such fusion. The technical scheme is as follows:
in one aspect, a method for fusing multiple multi-focus images is provided, and the method is applied to an electronic device, and includes:
extracting the features of all images in an image set to be fused with an image feature extraction algorithm, and selecting the features of any two images in the set as a first-level baseline feature and a second-level baseline feature;
fusing the first-level baseline features with the features of each remaining image in the set using an image feature fusion algorithm, forming multiple focus level maps;
correcting the remaining focus level maps with a correction algorithm, based on the focus level map formed from the first-level and second-level baseline features, and stitching the corrected focus level maps into a focus level set;
converting the focus level set into a decision map with a decision algorithm;
and fusing all images in the set into a single final fusion result with an image pixel fusion algorithm, based on the decision map.
Optionally, the image feature extraction algorithm includes one or more of: spatial frequency operators, gradient operators, convolutional neural networks, and support vector machines.
Optionally, the image set to be fused consists of at least two registered images of the same scene with different focus areas, and all images in the set share the same H × W, where H is the number of column pixels and W is the number of row pixels.
Optionally, the image feature fusion algorithm includes one or more of: spatial frequency operators, gradient operators, convolutional neural networks, support vector machines, addition fusion, maximum-value fusion, and channel-dimension concatenation; and each focus level map has the same H × W as the images to be fused, where H is the number of column pixels and W is the number of row pixels.
Optionally, the expression of the correction algorithm is as follows:
where p_i^j denotes the probability that the 1st image is sharper than the j-th image at the position of element i, with p_i^j ∈ [0, 1], and p̂_i^j is the corrected probability that the 1st image is sharper than the j-th image at element i.
Optionally, stitching the corrected focus level maps into a focus level set includes: stacking the corrected focus level maps along the channel direction into a focus level set, the focus level set being a three-dimensional matrix of size H × W × N, where H is the number of column pixels, W is the number of row pixels, and N, the number of images, is greater than 1.
Optionally, the decision algorithm includes:
finding, for each element i in the focus level set, the index Img_i^k of the maximum value along the channel direction, where Img_i^k records the sequence number k of the sharpest image at element i;
assembling the indices of all pixels into a decision map.
Optionally, the image pixel fusion algorithm includes: one or more of an index value-taking algorithm, a weighted fusion algorithm and a guided filtering algorithm.
Optionally, the index-value algorithm assigns, according to the index value Img_i^j in the decision map, the pixel of image j at element i to the pixel at position i of the final single fusion result.
In one aspect, a multi-focus image fusion apparatus is provided, applied to an electronic device, and comprising:
a feature extraction unit, configured to extract the features of all images in an image set to be fused and to select the features of any two images in the set as a first-level baseline feature and a second-level baseline feature;
a feature fusion unit, configured to fuse the first-level baseline features with the features of each remaining image in the set, forming multiple focus level maps;
a correction unit, configured to correct the remaining focus level maps based on the focus level map formed from the first-level and second-level baseline features, and to stitch the corrected focus level maps into a focus level set;
a decision unit, configured to convert the focus level set into a decision map;
and a pixel fusion unit, configured to fuse all images in the set into a single final fusion result based on the decision map.
The technical scheme provided by the invention has at least the following beneficial effects:
The method extracts a small number of image baseline features, fuses them into a reference focus level map, corrects the other focus level maps against that reference, assembles the corrected maps into a focus level set, converts the set into a decision map, and fuses all images according to the decision map. This avoids the redundant feature extraction of existing methods (about 50% of their feature-extraction workload) and resolves the low fusion efficiency caused by the existing pairwise fusion strategy, an obvious advantage in application scenarios that fuse large numbers of images or demand high fusion speed.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings used in their description are briefly introduced below. The drawings show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a diagram of an implementation environment provided by an embodiment of the invention;
FIG. 2 is a schematic flowchart of a method for fusing a plurality of multi-focus images according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a plurality of multi-focus images provided by an embodiment of the present invention;
FIG. 4 is a comparison chart of the process of the multi-focus image fusion method and the conventional pairwise fusion method provided by the embodiment of the invention;
FIG. 5 is a schematic diagram of a multi-focus image fusion apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
First embodiment
A first embodiment of the present invention provides a method for fusing multiple multi-focus images; fig. 1 shows the implementation environment provided by this embodiment. The implementation environment may include at least one terminal 101 and a server 102 that provides services to the terminals 101. Each terminal 101 connects to the server 102 over a wireless or wired network and may be a computer device or an intelligent terminal capable of accessing the server 102. A terminal 101 may run computing applications such as an image feature extraction program, an image feature fusion program, a correction program, a decision program, and an image pixel fusion program. A user who wants to generate a fused image for an image set can select the set from the terminal's local storage, acquire it in real time through the terminal, or receive it from the server; the server 102 may also provide the image set to the applications.
In addition, the terminal 101 may act as a requester, sending the image set to the server 102 and asking the server 102 to generate the fused image. In this case the server 102 may include at least one database storing the image feature extraction, feature fusion, correction, decision, and pixel fusion programs. The server 102 may be a single machine or a group of machines; in a group, the machines may share the work, each holding part of the image set to be fused.
The first embodiment of the present invention provides a method for fusing multiple multi-focus images, which may be implemented by an electronic device, where the electronic device may be a terminal or a server. As shown in fig. 2, the process flow of the method for fusing multiple multi-focus images may include the following steps:
in step 201, an image feature extraction algorithm including one or more of spatial frequency operators, gradient operators, convolutional neural networks, support vector machines, and the like is used to extract features of all images in the image set to be fused, and features of any two images in the image set to be fused are selected as a first-level baseline feature and a second-level baseline feature. Preferably, the convolutional neural network algorithm is selected to extract high-dimensional features in the image.
As shown in fig. 3, the image set to be fused in step 201 mainly refers to at least two registered images with different focusing areas, which are taken for the same scene, and H × W of all the images in the image set to be fused is the same, where H is the number of column pixels and W is the number of row pixels.
In step 202, an image feature fusion algorithm (one or more of a spatial frequency operator, a gradient operator, a convolutional neural network, a support vector machine, addition fusion, maximum-value fusion, and channel-dimension concatenation) fuses the first-level baseline features with the features of each remaining image in the set, forming multiple focus level maps. Each focus level map has the same H × W as the images to be fused, where H is the number of column pixels and W the number of row pixels. Each element of a focus level map corresponds to a pixel of the images to be fused, and p_i^j denotes the probability that the first image is sharper than the j-th image at the position of element i, with p_i^j ∈ [0, 1].
In this embodiment, since the focus level map fusing the 1st and j-th images reflects the sharpness of the 1st image relative to the j-th image, p_i^j is closer to 1 if the 1st image is sharper than the j-th image near pixel i, and closer to 0 otherwise. The larger the value of p_i^j, the sharper the 1st image is near pixel i compared with the j-th image. Taking the focus level map of the 1st and 2nd images as the reference, the focus level maps of the other images against the 1st image can therefore be corrected in this manner, yielding focus level maps on a unified standard for the subsequent sharp-region judgment across images.
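As an illustration only, a focus level map of this kind can be sketched with a simple gradient-energy focus measure standing in for the spatial frequency operator the patent mentions; the function names, the window `radius`, and the ratio form of `p` are assumptions of this sketch, not the patent's actual formulas:

```python
import numpy as np

def focus_measure(img, radius=2):
    """Illustrative focus measure: locally averaged squared gradients
    (a stand-in for the patent's spatial frequency operator)."""
    gy, gx = np.gradient(img.astype(np.float64))
    energy = gx**2 + gy**2
    # box filter via 2-D cumulative sums (no SciPy dependency)
    padded = np.pad(energy, radius, mode="edge")
    k = 2 * radius + 1
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/column for clean window sums
    h, w = energy.shape
    return (c[k:k+h, k:k+w] - c[:h, k:k+w] - c[k:k+h, :w] + c[:h, :w]) / k**2

def focus_level_map(img1, imgj, eps=1e-12):
    """p_i^j in [0, 1]: probability that img1 is sharper than imgj at each pixel."""
    f1, fj = focus_measure(img1), focus_measure(imgj)
    return f1 / (f1 + fj + eps)
```

A map built this way already satisfies p_i^j ∈ [0, 1] and approaches 1 where the first image carries more local gradient energy than the j-th.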
In step 203, a correction algorithm is adopted: based on the focus level map formed from the first-level and second-level baseline features, the remaining focus level maps are corrected, and the corrected focus level maps are stitched into a focus level set.
Preferably, the expression of the correction algorithm is as follows:
where p_i^j denotes the probability that the 1st image is sharper than the j-th image at the position of element i, with p_i^j ∈ [0, 1], and p̂_i^j is the corrected probability that the 1st image is sharper than the j-th image at element i.
Stitching the corrected focus level maps into a focus level set in step 203 may mean stacking them along the channel direction, the focus level set being a three-dimensional matrix of size H × W × N, where H is the number of column pixels, W is the number of row pixels, and N, the number of images, is greater than 1.
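Assuming each corrected focus level map is an H × W array, the channel-direction stitching above can be sketched as:

```python
import numpy as np

def build_focus_level_set(corrected_maps):
    """Stack N corrected focus level maps (each H x W) along the channel
    direction into an H x W x N focus level set (N must exceed 1)."""
    if len(corrected_maps) < 2:
        raise ValueError("a focus level set requires N > 1 maps")
    return np.stack(corrected_maps, axis=-1)
```

The last axis then indexes the images, matching the H × W × N layout described in the text.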
In step 204, a decision algorithm is employed to convert the focus level set into a decision map.
The decision algorithm in step 204 may preferably include the following steps:
s2041, finding an index Img of the maximum value in the channel direction of each element i in the focus level seti kWherein the index Imgi kRecording the clearest image sequence number k at the element i;
s2042, the indexes of each pixel are combined into a decision graph.
In step 205, an image pixel fusion algorithm (one or more of an index-value algorithm, a weighted fusion algorithm, a guided filtering algorithm, and the like) fuses all images in the image set to be fused into a single final fusion result based on the decision map.
Preferably, the index-value algorithm in step 205 assigns, according to the index value Img_i^j in the decision map, the pixel of image j at element i to the pixel at position i of the final single fusion result.
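A sketch of this index-value fusion, assuming grayscale H × W images and the decision map from the previous step; `np.take_along_axis` does the per-pixel selection:

```python
import numpy as np

def fuse_by_index(images, decision):
    """Index-value fusion: for every position i, copy the pixel of the image
    whose sequence number the decision map records at i."""
    stack = np.stack(images, axis=-1)                          # H x W x N
    picked = np.take_along_axis(stack, decision[..., None], axis=-1)
    return picked[..., 0]                                      # back to H x W
```

For color images the same selection would be broadcast across the channel axis; this sketch keeps the single-channel case for clarity.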
Fig. 4 is a comparison chart of the process of the multi-focus image fusion method and the conventional pairwise fusion method according to the embodiment of the present invention.
The traditional pairwise fusion strategy serially fuses the images one by one: the 1st and 2nd images are fused first with the image feature fusion algorithm, the result is then fused with the 3rd image, and so on. This requires 2(N-1) runs of the image feature extraction algorithm. The multi-image multi-focus fusion method of this embodiment stores the features of the 1st image in advance as baseline features, so the feature extraction algorithm only needs to run N times. Compared with traditional pairwise fusion, the method therefore cuts the number of feature extraction runs by close to 50%, avoiding redundant feature extraction and improving the efficiency of fusing multiple multi-focus images.
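The run counts in this comparison (taken directly from the text) can be checked with a line of arithmetic; the saving approaches 50% as N grows:

```python
def extraction_runs(n_images):
    """Feature-extraction runs per the text: the pairwise strategy extracts
    2 feature sets for each of its N-1 fusions; the baseline strategy of this
    embodiment extracts each image's features exactly once."""
    pairwise = 2 * (n_images - 1)
    baseline = n_images
    return pairwise, baseline

for n in (2, 10, 50):
    p, b = extraction_runs(n)
    print(f"N={n}: pairwise={p}, baseline={b}, saving={(p - b) / p:.0%}")
```

At N = 2 the two strategies cost the same; the gap widens with every additional image.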
Second embodiment
As shown in fig. 5, a second embodiment of the present invention provides a multi-focus image fusion apparatus, including:
the feature extraction unit 501 is configured to extract features of all images in the image set to be fused, and select features of any two images in the image set to be fused as a first-level baseline feature and a second-level baseline feature;
the feature fusion unit 502 is configured to perform feature fusion on the first-level baseline features and features of the other images in the image set to be fused, and form multiple focusing level maps;
a correction unit 503, which corrects the rest of the focus level maps based on the focus level maps formed by the first-level baseline features and the second-level baseline features, and combines the corrected multiple focus level maps into a focus level set;
a decision unit 504, configured to convert the focus level set into a decision graph;
and a pixel fusion unit 505 for fusing all the images in the image set to be fused into a final single fusion result based on the decision diagram.
The multi-focus image fusion device of this embodiment corresponds to the multi-focus image fusion method of the first embodiment, and the functions of its units correspond one-to-one to the steps of that method; the details are therefore not repeated here.
Third embodiment
As shown in fig. 6, a third embodiment of the present invention provides an electronic device 600, whose configuration and performance may vary considerably. It includes one or more processors (CPUs) 601 and one or more memories 602; the memory 602 stores at least one instruction, which is loaded and executed by the processor 601 to implement the multi-focus image fusion method of the first embodiment.
Fourth embodiment
In a fourth embodiment of the present invention, a computer-readable storage medium, such as a memory, is provided, where the computer-readable storage medium includes instructions executable by a processor in a terminal to perform the method for fusing multiple multi-focus images according to the first embodiment of the present invention. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Those skilled in the art will understand that all or part of the steps of the above embodiments may be implemented in hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. A method for fusing multiple multi-focus images, the method comprising:
extracting the features of all images in an image set to be fused with an image feature extraction algorithm, and selecting the features of any two images in the set as a first-level baseline feature and a second-level baseline feature;
fusing the first-level baseline features with the features of each remaining image in the set using an image feature fusion algorithm, forming multiple focus level maps;
correcting the remaining focus level maps with a correction algorithm, based on the focus level map formed from the first-level and second-level baseline features, and stitching the corrected focus level maps into a focus level set;
converting the focus level set into a decision map with a decision algorithm;
and fusing all images in the set into a single final fusion result with an image pixel fusion algorithm, based on the decision map.
2. The method for fusing multiple multi-focus images according to claim 1, wherein the image feature extraction algorithm comprises one or more of: spatial frequency operators, gradient operators, convolutional neural networks, and support vector machines.
3. The method for fusing multiple multi-focus images according to claim 1, wherein the image set to be fused is at least two registered images with different focus areas, which are shot for the same scene, and H x W of all the images in the image set to be fused is the same, where H is the number of column pixels and W is the number of row pixels.
4. The method for fusing multiple multi-focus images according to claim 1, wherein the image feature fusion algorithm comprises one or more of: spatial frequency operators, gradient operators, convolutional neural networks, support vector machines, addition fusion, maximum-value fusion, and channel-dimension concatenation; and each focus level map has the same H × W as the images to be fused, where H is the number of column pixels and W is the number of row pixels.
5. The method for fusing a plurality of multi-focus images according to claim 1, wherein the correction algorithm is expressed as follows:
6. The method for fusing multiple multi-focus images according to claim 1, wherein stitching the corrected focus level maps into a focus level set comprises: stacking the corrected focus level maps along the channel direction into a focus level set, the focus level set being a three-dimensional matrix of size H × W × N, where H is the number of column pixels, W is the number of row pixels, and N, the number of images, is greater than 1.
7. The method for fusing multiple multi-focus images according to claim 1, wherein the decision algorithm comprises:
finding, for each element i in the focus level set, the index Img_i^k of the maximum value along the channel direction, where Img_i^k records the sequence number k of the sharpest image at element i;
assembling the indices of all pixels into a decision map.
8. The method for fusing multiple multi-focus images according to claim 1, wherein the image pixel fusion algorithm comprises: one or more of an index value-taking algorithm, a weighted fusion algorithm and a guided filtering algorithm.
9. The method for fusing multiple multi-focus images according to claim 8, wherein the index-value algorithm assigns, according to the index value Img_i^j in the decision map, the pixel of image j at element i to the pixel at position i of the final single fusion result.
10. An apparatus for fusing multiple multi-focus images, comprising:
a feature extraction unit, configured to extract the features of all images in an image set to be fused and to select the features of any two images in the set as a first-level baseline feature and a second-level baseline feature;
a feature fusion unit, configured to fuse the first-level baseline feature with the features of each of the remaining images in the image set to be fused, forming multiple focus level maps;
a correction unit, configured to correct the remaining focus level maps based on the focus level map formed from the first-level and second-level baseline features, and to stitch the corrected focus level maps into a focus level set;
a decision unit, configured to convert the focus level set into a decision map;
and a pixel fusion unit, configured to fuse all images in the image set to be fused into a final single fusion result based on the decision map.
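The units above can be chained into one pipeline. The sketch below is a loose illustration assuming NumPy, with a squared-gradient focus measure standing in for the patent's CNN-based feature extraction, and with the baseline correction step omitted:

```python
import numpy as np

def focus_level(img):
    # Placeholder focus measure (squared gradient magnitude); the patent's
    # feature extraction and correction steps are not reproduced here.
    gy, gx = np.gradient(img)
    return gx ** 2 + gy ** 2

def fuse(images):
    # Feature extraction -> one focus level map per image, stacked along
    # the channel direction into an H x W x N focus level set.
    focus_set = np.stack([focus_level(im) for im in images], axis=-1)
    # Decision unit: index of the sharpest image at each position.
    decision_map = np.argmax(focus_set, axis=-1)
    # Pixel fusion unit: index-based value taking.
    rows, cols = np.indices(decision_map.shape)
    return np.stack(images)[decision_map, rows, cols]

# Two toy "images", each textured (i.e. in focus) in a different half.
rng = np.random.default_rng(1)
a = rng.random((8, 8)); a[:, 4:] = 0.5  # sharp only on the left half
b = rng.random((8, 8)); b[:, :4] = 0.5  # sharp only on the right half
result = fuse([a, b])
print(result.shape)  # (8, 8)
```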
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011036730.9A CN112241940B (en) | 2020-09-28 | 2020-09-28 | Fusion method and device for multiple multi-focus images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112241940A (en) | 2021-01-19 |
CN112241940B (en) | 2023-12-19 |
Family
ID=74171774
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011036730.9A Active CN112241940B (en) | 2020-09-28 | 2020-09-28 | Fusion method and device for multiple multi-focus images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112241940B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006017233A1 (en) * | 2004-07-12 | 2006-02-16 | Lehigh University | Image fusion methods and apparatus |
CN104700383A (en) * | 2012-12-16 | 2015-06-10 | Wu Fan | Multi-focus image generating device and multi-focus image file handling method
CN105894483A (en) * | 2016-03-30 | 2016-08-24 | Kunming University of Science and Technology | Multi-focus image fusion method based on multi-dimensional image analysis and block consistency verification
US20170024920A1 (en) * | 2014-05-09 | 2017-01-26 | Huawei Technologies Co., Ltd. | Method and Related Apparatus for Capturing and Processing Image Data
US20170076430A1 (en) * | 2014-05-28 | 2017-03-16 | Huawei Technologies Co., Ltd. | Image Processing Method and Image Processing Apparatus
US20170213330A1 (en) * | 2016-01-25 | 2017-07-27 | Qualcomm Incorporated | Unified multi-image fusion approach
CN108230282A (en) * | 2017-11-24 | 2018-06-29 | Luoyang Normal University | Multi-focus image fusion method and system based on AGF
CN109300096A (en) * | 2018-08-07 | 2019-02-01 | Beijing Zhimai Recognition Technology Co., Ltd. | Multi-focus image fusion method and device
CN109389576A (en) * | 2018-10-10 | 2019-02-26 | Qingdao University | Pulse-coupled image fusion method based on multi-channel mechanism
US20190213450A1 (en) * | 2016-07-22 | 2019-07-11 | Sony Corporation | Image processing apparatus and image processing method
CN110717879A (en) * | 2019-10-16 | 2020-01-21 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Multi-focus image processing method and device, storage medium and electronic device
CN111105346A (en) * | 2019-11-08 | 2020-05-05 | Tongji University | Whole-slide microscopic image stitching method based on peak search and grayscale template registration
2020-09-28: application CN202011036730.9A granted as patent CN112241940B (active)
Non-Patent Citations (1)
Title |
---|
XU Hai; AN Jiyao: "A new multi-focus image fusion algorithm combining SVM and FNN", Computing Technology and Automation (计算技术与自动化), no. 01 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023102724A1 (en) * | 2021-12-07 | 2023-06-15 | Contemporary Amperex Technology Co., Limited | Image processing method and system |
US11948287B2 (en) | 2021-12-07 | 2024-04-02 | Contemporary Amperex Technology Co., Limited | Image processing method and system |
CN116883461A (en) * | 2023-05-18 | 2023-10-13 | Zhuhai Yike Intelligent Technology Co., Ltd. | Method for acquiring clear document image and terminal device thereof |
CN116883461B (en) * | 2023-05-18 | 2024-03-01 | Zhuhai Yike Intelligent Technology Co., Ltd. | Method for acquiring clear document image and terminal device thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108664526B (en) | Retrieval method and device | |
CN110334779A (en) | Multi-focus image fusion method based on PSPNet detail extraction | |
CN112241940A (en) | Method and device for fusing multiple multi-focus images | |
CN113704531A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN114694185B (en) | Cross-modal target re-identification method, device, equipment and medium | |
EP3994616A1 (en) | Feedbackward decoder for parameter efficient semantic image segmentation | |
CN109859314A (en) | Three-dimensional reconstruction method and device, electronic device and storage medium | |
CN110944201A (en) | Method, device, server and storage medium for video duplicate removal compression | |
CN113630549A (en) | Zoom control method, device, electronic equipment and computer-readable storage medium | |
CN112200887A (en) | Multi-focus image fusion method based on gradient perception | |
CN114677422A (en) | Depth information generation method, image blurring method and video blurring method | |
CN112465122A (en) | Device and method for optimizing original dimension operator in neural network model | |
CN116311384A (en) | Cross-modal pedestrian re-recognition method and device based on intermediate mode and characterization learning | |
CN115359108A (en) | Depth prediction method and system based on defocusing under guidance of focal stack reconstruction | |
CN115620206A (en) | Training method of multi-template visual target tracking network and target tracking method | |
CN114372931A (en) | Target object blurring method and device, storage medium and electronic equipment | |
WO2023149135A1 (en) | Image processing device, image processing method, and program | |
CN110598785A (en) | Training sample image generation method and device | |
CN113824989B (en) | Video processing method, device and computer readable storage medium | |
CN112203023B (en) | Billion pixel video generation method and device, equipment and medium | |
CN111489361B (en) | Real-time visual target tracking method based on deep feature aggregation of twin network | |
CN110399881B (en) | End-to-end quality enhancement method and device based on binocular stereo image | |
CN114898429A (en) | Thermal infrared-visible light cross-modal face recognition method | |
CN111862098B (en) | Individual matching method, device, equipment and medium based on light field semantics | |
CN116848547A (en) | Image processing method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||