CN103778616A - Contrast pyramid image fusion method based on area - Google Patents
- Publication number: CN103778616A (application CN201210404367.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- pyramid
- contrast
- images
- mask
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention provides a region-based contrast pyramid image fusion method that obtains a single sharp image from several out-of-focus images. First, the Laplacian energy within an image window is used as a measure of whether a region is sharp, and the positions of the sharp pixels in each image are found and marked in a mask image. The mask image is then optimized to obtain the accurate position of the sharp region in each image. Guided by the sharp regions marked in the mask-image pyramid, each level of the contrast pyramid and the bottom-layer image are fused, and a fully sharp image is reconstructed from the fused contrast pyramid and bottom-layer image. This multi-focus image fusion algorithm can be applied to the processing of criminal-investigation images and medical microscopic images, and does not rely on any camera parameters.
Description
Technical field
The invention belongs to the field of image processing and relates to a method that uses a mask image to guide the fusion of contrast pyramids and bottom-layer images, obtaining a fully sharp image by reconstruction.
Background art
High-power optical lenses (such as microscope lenses) have a shortcoming: the depth of field is on the order of 10^-3 m or less, and the higher the magnification, the smaller the depth of field. The consequence is that, when a three-dimensional object is photographed at high magnification, only a small part of the image is sharp while the remaining regions are blurred; such an image is called an out-of-focus image.
In biological and medical research, when researchers examine tissue cells under a microscope, the limited depth of field means that only a small part of the image presented by the microscope is sharp, so a fully sharp image of the observed object cannot be obtained, which hampers the research work. In criminal investigation, when analyzing microscopic images of bullet marks and fingerprints, the large number of images with only small sharp regions greatly increases the workload of analysis and comparison, and at the same time reduces the accuracy of bullet-mark or fingerprint comparison.
Image-fusion-based methods use the sharp parts of several out-of-focus images to obtain a single fully sharp image. Existing image fusion algorithms fall mainly into two classes: (1) transform-domain fusion algorithms; (2) spatial-domain fusion algorithms.
Existing image fusion algorithms share a common weakness: it is difficult to ensure that the fusion result both retains the vast majority of the original information in the source images (the out-of-focus images) and has good visual quality. For example, the classical contrast pyramid transform loses image detail, and linear weighting reduces image contrast.
Summary of the invention
The object of the present invention is to provide a region-based contrast pyramid image fusion method that is simple, efficient, independent of any acquisition parameters, and satisfies the characteristics of human vision to the greatest extent, thereby solving the technical problems of the prior art: loss of image detail and reduction of image contrast.
To achieve this object, the region-based contrast pyramid image fusion method provided by the invention uses several out-of-focus images of the same scene and obtains a single sharp image of that scene by the following steps:
Step S1: select a window and read in several out-of-focus images; for all pixels with the same coordinates across the out-of-focus images, perform the following operation: taking each pixel as the window center, compute the Laplacian energy within each window; the pixel at the center of the window with the maximum Laplacian energy is taken as the sharp pixel, and the index of the out-of-focus image containing the sharp pixel is recorded in a mask image;
Step S2: optimize the mask image by filtering to remove noise, obtaining the position and label of the sharp region in each out-of-focus image, and build the mask pyramid by pyramid decomposition;
Step S3: according to the contrast pyramid algorithm, compute for each out-of-focus image its contrast pyramid and the bottom-layer image of its Gaussian pyramid;
Step S4: according to the sharp-region labels in each level of the mask pyramid, fuse the contrast pyramid levels and the Gaussian-pyramid bottom-layer images of the out-of-focus images, obtaining the fused contrast pyramid levels and the fused bottom-layer image;
Step S5: using the fused contrast pyramid levels and the fused bottom-layer image, reconstruct a single sharp image of the scene according to the contrast pyramid reconstruction algorithm.
In a preferred embodiment, the mask pyramid is used to guide the fusion of the several contrast pyramids:
Step Sa: down-sample successively from the first level of the mask pyramid to obtain each level of the mask pyramid;
Step Sb: according to the labels in each level of the mask pyramid, fuse each level of the contrast pyramids according to formula (1):
RC_l(p,q) = C_{l,M_l(p,q)}(p,q), l = 1, ..., L (1)
In formula (1), RC_l(p,q), l = 1, ..., L denotes the fused pyramid image at level l; L denotes the number of pyramid decomposition levels; (p,q) denotes the pixel position on the pyramid, where p denotes the pixel abscissa and q the pixel ordinate; C_{l,M_l(p,q)}(p,q) denotes the value of the contrast pyramid of the image to be fused, selected at level l for fusion; M_l(p,q) ∈ {1, ..., N} denotes the mask label of the level-l image at pixel (p,q), indicating which image's contrast pyramid the value is taken from; N denotes the number of images to be fused;
The bottom-layer image is fused according to formula (2):
RG_L(p,q) = G_{L,M_L(p,q)}(p,q) (2)
In formula (2), RG_L denotes the fused bottom-layer image; G_{L,M_L(p,q)}(p,q) denotes the bottom-layer Gaussian-pyramid image of the image to be fused, selected at level L for fusion; the remaining parameter symbols have the same meanings as in formula (1).
Step Sc: using the fused contrast pyramid and bottom-layer image, reconstruct the fully sharp image according to the classical contrast pyramid algorithm.
The beneficial effects of the present invention are as follows. On the basis of the classical contrast pyramid algorithm, the invention proposes, for the first time, a region-based contrast pyramid algorithm. In the fusion rule, a region-splicing fusion mode is adopted to fuse the contrast pyramid group and the bottom-layer image group, replacing the contrast-based pixel-level fusion mode of the classical algorithm. This eliminates the drawbacks of pixel-level fusion rules and achieves a qualitative leap in preserving the original color and detail information in the fusion result, which at the same time has good visual quality. It thus solves the problem of obtaining a single sharp image that contains the maximum amount of original information while best matching the characteristics of human vision. Both quantitative and qualitative experiments demonstrate the effectiveness and practical value of the invention. The invention can be applied to the processing of criminal-investigation images and medical microscopic images.
Description of the drawings
Fig. 1 shows the construction of the contrast pyramid of a single out-of-focus image in the prior art; the "(4)" and "(5)" in the figure indicate computation according to formulas (4) and (5);
Fig. 2a-Fig. 2b are flowcharts of the region-based contrast pyramid fusion algorithm of the present invention;
Fig. 3a-Fig. 3h are a group of multi-focus source images;
Fig. 4 is the result image obtained by fusing the source images in Fig. 3a-Fig. 3h with the present invention;
Fig. 5 is an example from the objective test set of the present invention;
Fig. 6a is an out-of-focus image from the objective test set;
Fig. 6b is the sharp-region mask image of the image in Fig. 6a;
Fig. 7 is the average RMSE performance chart of the present invention;
Fig. 8 is the average SSIM performance chart of the present invention.
Embodiment
To make the object, technical solutions, and advantages of the present invention clearer, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
As Fig. 2a and Fig. 2b illustrate, an embodiment of the region-based contrast pyramid fusion algorithm of the present invention is as follows:
1. Concrete implementation steps
Step S1: select a window and read in N out-of-focus images; for all pixels with the same coordinates across the N out-of-focus images, perform the following operation: taking each pixel as the window center, compute the Laplacian energy within each window; the pixel at the center of the window with the maximum Laplacian energy is taken as the sharp pixel, and the index of the out-of-focus image containing the sharp pixel is recorded in a mask image. The implementation is as follows:
For the pixel under consideration in each source image (out-of-focus image), the Laplacian energy EOL(x, y) is computed as follows:
In formula (1), (x, y) denotes the pixel position, I(x, y) denotes the gray value of the source image at (x, y), and W denotes the neighborhood window size; x is the pixel abscissa and y the pixel ordinate.
Within the neighborhood window, the Laplacian energy of the current pixel of the source image reflects how severely the pixel values vary in that window. Because pixel values in sharp image regions vary strongly while those in blurred regions vary gently, the Laplacian energy in a neighborhood window reflects the sharpness of the image within that window.
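The windowed Laplacian energy described above can be sketched as follows. This is a minimal sketch: the 4-neighbour Laplacian kernel, the edge padding, and the 9×9 window default are assumptions, since the patent does not fix them.

```python
import numpy as np

def laplacian_energy(img, win=9):
    """Sum of squared discrete-Laplacian responses over a win x win
    neighbourhood of each pixel (sharpness measure EOL(x, y))."""
    img = img.astype(np.float64)
    # 4-neighbour discrete Laplacian via shifted copies (edge-replicated borders)
    pad = np.pad(img, 1, mode="edge")
    lap = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
           pad[1:-1, :-2] + pad[1:-1, 2:] - 4.0 * pad[1:-1, 1:-1])
    sq = lap ** 2
    # box filter: accumulate the squared Laplacian over the window
    r = win // 2
    padded = np.pad(sq, r, mode="edge")
    out = np.zeros_like(sq)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += padded[r + dy: r + dy + sq.shape[0],
                          r + dx: r + dx + sq.shape[1]]
    return out
```

A high-frequency (sharp) region yields a much larger energy than a flat (blurred) region, which is exactly the property the competition in the next step relies on.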
Subsequently, the Laplacian energies of the pixels at the same position in the several source images compete. Suppose there are N source images; for the pixel at coordinate position (x, y), the result of the Laplacian-energy competition among the N images is recorded in a pixel-level mask image:
M_0(x, y) = argmax_n EOL_n(x, y), n = 1, ..., N (2)
In formula (2), EOL_n(x, y) denotes the Laplacian energy of the n-th image at the pixel at coordinate position (x, y); M_0 is a single-channel gray image of the same size as the source images, called the pixel-level mask image, which indicates which source image is sharpest at position (x, y).
Step S2: optimize the mask image by filtering to remove noise, obtaining the position and label of the sharp region in each out-of-focus image, which serves as the first level of the mask pyramid. The implementation is as follows:
To remove the influence of noise on the mask image and improve the accuracy of the sharp-region labels, the pixel-level mask image is optimized with the following operation, yielding the first level M(x, y) of the mask pyramid:
M(x, y) = argmax_n Σ_{(a,b)∈Ω(x,y)} δ(M_0(a, b) = n), n = 1, ..., N (3)
In formula (3), (a, b) denotes a neighborhood pixel position of the coordinate position (x, y), where a is the pixel abscissa and b the pixel ordinate; Ω(x, y) denotes a square neighborhood centered at (x, y) with a radius of 4-16 pixels; δ(·) is a function that outputs "1" when its input condition holds and "0" otherwise, used to judge whether the condition is satisfied.
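The per-pixel competition of formula (2) and the neighbourhood vote of formula (3) can be sketched as follows. This is a sketch under assumptions: labels are 0-based here, and the function names and the `radius` default are not from the patent.

```python
import numpy as np

def pixel_level_mask(energies):
    """M_0(x, y): index of the image with maximal Laplacian energy at each
    pixel. `energies` is a list of per-image energy maps of equal shape."""
    return np.argmax(np.stack(energies, axis=0), axis=0)

def optimize_mask(m0, n_images, radius=4):
    """Majority vote of M_0 labels over a (2*radius+1)^2 neighbourhood,
    as in formula (3); removes isolated noisy labels."""
    h, w = m0.shape
    votes = np.zeros((n_images, h, w), dtype=np.int64)
    pad = np.pad(m0, radius, mode="edge")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + h,
                          radius + dx: radius + dx + w]
            for n in range(n_images):
                votes[n] += (shifted == n)
    return np.argmax(votes, axis=0)
```

A single mislabeled pixel inside a uniformly labeled region is voted away, which is the noise-removal effect the text describes.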
Step S3: according to the contrast pyramid algorithm, compute for each out-of-focus image its contrast pyramid and the bottom-layer image of its Gaussian pyramid. The implementation is as follows:
In the computation, the classical contrast pyramid algorithm gives the following relation:
G_l(p,q) = (C_l(p,q) + U(p,q)) × E_l(p,q), l = 1, ..., L. (4)
In formula (4), G_l(p,q) denotes the value of the level-l Gaussian pyramid at position (p,q), where p denotes the pixel abscissa and q the pixel ordinate; E_l(p,q) denotes the value at (p,q) of the low-pass-filtered level-l Gaussian pyramid; C_l(p,q) denotes the value of the level-l contrast pyramid at (p,q); U(p,q) is a matrix whose elements are all "1"; L is an integer, the number of pyramid levels. The value E_l(p,q) of the filtered level-l Gaussian pyramid is computed by the following formula:
In formula (5), f(w, h) denotes a two-dimensional Gaussian filter with a window size of 5 pixels, where w and h index the filter window. The value C_l(p,q) of the level-l contrast pyramid at position (p,q) is computed by the following formula:
C_l(p,q) = G_l(p,q) / E_l(p,q) − U(p,q). (6)
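Formulas (4)-(6) can be sketched in code. This is a hedged sketch: the 5-tap binomial filter weights are an assumption (the patent only says a 5-pixel Gaussian window), and E_l is taken as the expanded next Gaussian level, matching the Fig. 1 description in which E_0 and E_1 are the low-pass components of G_1 and G_2.

```python
import numpy as np

def _blur(img):
    # separable 5-tap low-pass filter (assumed binomial weights)
    k = np.array([1., 4., 6., 4., 1.]) / 16.0
    p = np.pad(img, ((2, 2), (0, 0)), mode="edge")
    img = sum(k[i] * p[i:i + img.shape[0], :] for i in range(5))
    p = np.pad(img, ((0, 0), (2, 2)), mode="edge")
    return sum(k[i] * p[:, i:i + img.shape[1]] for i in range(5))

def _down(img):
    # blur then drop every other row/column
    return _blur(img)[::2, ::2]

def _expand(img, shape):
    # zero insertion then smoothing; factor 4 restores the mean level
    up = np.zeros(shape)
    up[::2, ::2] = img
    return _blur(up) * 4.0

def contrast_pyramid(img, levels):
    """C_l = G_l / E_l - 1 with E_l = expand(G_{l+1}); returns
    (contrast levels C_0..C_{L-1}, bottom Gaussian layer G_L)."""
    g = [img.astype(np.float64)]
    for _ in range(levels):
        g.append(_down(g[-1]))
    c = []
    for l in range(levels):
        e = _expand(g[l + 1], g[l].shape)
        c.append(g[l] / (e + 1e-12) - 1.0)  # small epsilon guards division
    return c, g[-1]
```

On a constant image the contrast levels are (away from the borders) zero, as formula (6) demands: the image equals its own low-pass prediction.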
Fig. 1 shows a contrast pyramid computation example with two levels (L = 2). G_0, G_1, G_2 form the Gaussian pyramid, where G_2 is the bottom-layer image; C_0 and C_1 form the two-level contrast pyramid; E_0 and E_1 are the low-pass components of G_1 and G_2. The "(4)" and "(5)" in Fig. 1 indicate application of formula (4) and formula (5), respectively.
Step S4: according to the sharp-region labels in each level of the mask pyramid, fuse the contrast pyramid levels and the Gaussian-pyramid bottom-layer images of the out-of-focus images, obtaining the fused contrast pyramid levels and the fused bottom-layer image. The implementation is as follows:
The first level M(x, y) of the mask pyramid is computed by formula (3). Each level of the mask pyramid is then obtained successively by down-sampling. Next, the labels in each level of the mask pyramid guide the fusion of each level of the contrast pyramids and the fusion of the bottom-layer images.
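The mask pyramid construction can be sketched as follows. The patent only says "down-sampling"; nearest-neighbour subsampling is assumed here because the mask values are categorical labels and must not be averaged.

```python
import numpy as np

def mask_pyramid(mask, levels):
    """Build the mask pyramid M_0..M_L by successive down-sampling.
    Labels are categorical, so take every other pixel (no smoothing)."""
    pyr = [mask]
    for _ in range(levels):
        pyr.append(pyr[-1][::2, ::2])
    return pyr
```

Each level halves the resolution while keeping only label values that already occur in the full-resolution mask.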
Fig. 2b illustrates the fusion of two images (N = 2) with a two-level (L = 2) contrast pyramid decomposition. In Fig. 2b, M_l, l = 1, 2, ..., L denotes the mask pyramid, and M_l(p,q) denotes the value of the level-l mask at pixel position (p,q); C_{l,1}, l = 1, 2, ..., L is the contrast pyramid computed from the first out-of-focus image, and C_{l,2}, l = 1, 2, ..., L is the contrast pyramid computed from the second out-of-focus image; G_{L,1} and G_{L,2} are the bottom-layer images of the Gaussian pyramids of the two out-of-focus images; RC_0 and RC_1 form the fused contrast pyramid, and RG_L is the fused bottom-layer image. The symbol "+" in Fig. 2b denotes the fusion operation. The fusion process in Fig. 2b can be described by the following formula:
RC_l(p,q) = C_{l,M_l(p,q)}(p,q), l = 1, ..., L. (7)
In formula (7), RC_l(p,q), l = 1, ..., L denotes the fused pyramid image at level l; L denotes the number of pyramid decomposition levels; (p,q) denotes the pixel position on the pyramid; C_{l,M_l(p,q)}(p,q) denotes the value of the contrast pyramid of the image to be fused, selected at level l for fusion; M_l(p,q) ∈ {1, 2, ..., N} denotes the mask label of the level-l image at pixel (p,q), indicating which image's contrast pyramid the value is taken from.
The bottom-layer image is fused according to formula (8):
RG_L(p,q) = G_{L,M_L(p,q)}(p,q) (8)
In formula (8), RG_L denotes the fused bottom-layer image; G_{L,M_L(p,q)}(p,q) denotes the bottom-layer Gaussian-pyramid image of the image to be fused, selected at level L for fusion; the remaining parameter symbols have the same meanings as in formula (7).
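Per pixel, the mask-guided selection of formulas (7) and (8) is just an indexed lookup into the stack of source pyramids. A sketch, assuming 0-based mask labels:

```python
import numpy as np

def fuse_level(level_stack, mask):
    """RC_l(p,q) = C_{l,M_l(p,q)}(p,q): at each pixel, take the value from
    the source image indicated by the mask label (labels 0..N-1 here).
    level_stack: (N, H, W) -- the level-l pyramid of each of the N images."""
    return np.take_along_axis(level_stack, mask[None, :, :], axis=0)[0]
```

The same function fuses both the contrast-pyramid levels (formula (7), using M_l) and the bottom Gaussian layers (formula (8), using M_L).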
Step S5: using the fused contrast pyramid levels and the fused bottom-layer image, reconstruct a single sharp image of the scene according to the contrast pyramid reconstruction algorithm.
Using the fused contrast pyramid and bottom-layer image, the fully sharp image is reconstructed by the classical contrast pyramid algorithm:
RG_l(p,q) = (RC_l(p,q) + U(p,q)) × E_l(p,q), l = L, L−1, ..., 1. (10)
Reconstruction starts from the bottom-layer image (l = L) and proceeds level by level up to level 1 (l = 1); the parameter symbols in formula (10) have the same definitions as in formulas (4)-(8).
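Decomposition and the reconstruction of formula (10) invert each other exactly as long as the same expand operator E_l is used in both directions. A self-contained sketch under assumed operators (2×2 averaging for down-sampling, nearest-neighbour expansion; the patent's 5-tap Gaussian filter would work the same way):

```python
import numpy as np

def up(img, shape):
    # nearest-neighbour expand; C_l stores exactly the per-pixel ratio
    # needed to undo whichever expand operator is used
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def decompose(img, levels):
    """C_l = G_l / E_l - U (formula (6)) with E_l = up(G_{l+1})."""
    g = [img.astype(np.float64)]
    for _ in range(levels):
        a = g[-1]
        g.append((a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2]) / 4.0)
    c = [g[l] / (up(g[l + 1], g[l].shape) + 1e-12) - 1.0 for l in range(levels)]
    return c, g[-1]

def reconstruct(c, bottom):
    """G_l = (RC_l + U) * E_l, applied from the bottom layer up (formula (10))."""
    g = bottom
    for cl in reversed(c):
        g = (cl + 1.0) * up(g, cl.shape)
    return g
```

A round trip (decompose, then reconstruct without any fusion) recovers the input image, which is the property that guarantees the fused pyramid reconstructs to a valid image.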
2. Embodiment and evaluation
Out-of-focus images were obtained with an optical microscope; by adjusting the distance between the microscope objective and the photographed object, out-of-focus images of the same scene with different sharp regions were captured.
2.1 Subjective evaluation
Fig. 3a-Fig. 3h show the subjective comparison experiment of the present invention: Fig. 3a-Fig. 3f are six of the source images; Fig. 3g is the fusion result of the contrast pyramid algorithm, in which color distortion is evident (taking the source images as reference). Fig. 3h is the fusion result of the algorithm proposed by the present invention, and Fig. 4 is an enlarged view of this fusion result; it can be seen that the image retains the original color and detail information of the source images well.
2.2 Objective evaluation
Fig. 5 shows an example from the objective test set of the present invention. The objective test set consists of artificially synthesized images with local sharp regions, used to simulate out-of-focus images. It is divided into two classes according to the shape of the sharp region: a ring class and a strip class; each class has 9 groups of images, and each group contains 20-30 out-of-focus images. The white region in Fig. 6b shows the shape and position of the sharp region in the out-of-focus image of Fig. 6a.
Two indices, the root-mean-square error (RMSE) and the structural similarity (SSIM), are used for evaluation; they are defined as follows:
RMSE = sqrt( (1 / (X·Y)) Σ_x Σ_y (I(x,y) − R(x,y))² ) (11)
SSIM = (2·μ_R·μ_I + C_1)(2·σ_RI + C_2) / ((μ_R² + μ_I² + C_1)(σ_R² + σ_I² + C_2)) (12)
In formula (11), I(x, y) denotes the color value of the fused image, X and Y denote the pixel width and height of the image, and R(x, y) denotes the color value of the reference sharp image. RMSE measures the difference between two images: the smaller its value, the smaller the difference. SSIM in formula (12) measures the similarity of two images: the larger its value, the more alike the two images, with a maximum value of 1.0. In formula (12), μ_R and μ_I denote the color means of the fused image and of the true sharp image, respectively; σ_R and σ_I denote their color standard deviations; σ_RI denotes the covariance computed jointly over the fused image and the true sharp image; C_1 and C_2 are two constants, both taken as 1.0 in the tests.
Fig. 7 shows the average RMSE performance and Fig. 8 the average SSIM performance of the present invention, comparing the algorithm of this patent with other typical methods. The algorithms chosen for comparison are: the contrast pyramid algorithm (CP), the Laplacian pyramid algorithm (Laplacian), the discrete wavelet transform (DWT), and the shift-invariant discrete wavelet transform (SIDWT). The red dashed lines in the figures show the average results of each algorithm on the ring test set, and the blue solid lines show the average results on the strip test set. The superior performance of the proposed region-based contrast pyramid can be seen.
The above are only embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can, within the technical scope disclosed by the present invention, conceive of modifications or replacements, and all such modifications and replacements shall fall within the scope of the present invention.
Claims (2)
1. A region-based contrast pyramid image fusion method, characterized in that several out-of-focus images of the same scene are used, and a single sharp image of the scene is obtained by the following steps:
Step S1: select a window and read in several out-of-focus images; for all pixels with the same coordinates across the out-of-focus images, perform the following operation: taking each pixel as the window center, compute the Laplacian energy within each window; the pixel at the center of the window with the maximum Laplacian energy is taken as the sharp pixel, and the index of the out-of-focus image containing the sharp pixel is recorded in a mask image;
Step S2: optimize the mask image by filtering to remove noise, obtaining the position and label of the sharp region in each out-of-focus image, and build the mask pyramid by pyramid decomposition;
Step S3: according to the contrast pyramid algorithm, compute for each out-of-focus image its contrast pyramid and the bottom-layer image of its Gaussian pyramid;
Step S4: according to the sharp-region labels in each level of the mask pyramid, fuse the contrast pyramid levels and the Gaussian-pyramid bottom-layer images of the out-of-focus images, obtaining the fused contrast pyramid levels and the fused bottom-layer image;
Step S5: using the fused contrast pyramid levels and the fused bottom-layer image, reconstruct a single sharp image of the scene according to the contrast pyramid reconstruction algorithm.
2. The region-based contrast pyramid image fusion method as claimed in claim 1, characterized in that the mask pyramid is used to guide the fusion of the several contrast pyramids:
Step Sa: down-sample successively from the first level of the mask pyramid to obtain each level of the mask pyramid;
Step Sb: according to the labels in each level of the mask pyramid, fuse each level of the contrast pyramids according to formula (1):
RC_l(p,q) = C_{l,M_l(p,q)}(p,q), l = 1, ..., L (1)
In formula (1), RC_l(p,q), l = 1, ..., L denotes the fused pyramid image at level l; L denotes the number of pyramid decomposition levels; (p,q) denotes the pixel position on the pyramid, where p is the pixel abscissa and q the pixel ordinate; C_{l,M_l(p,q)}(p,q) denotes the value of the contrast pyramid of the image to be fused, selected at level l for fusion; M_l(p,q) ∈ {1, ..., N} denotes the mask label of the level-l image at pixel (p,q), indicating which image's contrast pyramid the value is taken from; N denotes the number of images to be fused;
The bottom-layer image is fused according to formula (2):
RG_L(p,q) = G_{L,M_L(p,q)}(p,q) (2)
In formula (2), RG_L denotes the fused bottom-layer image; G_{L,M_L(p,q)}(p,q) denotes the bottom-layer Gaussian-pyramid image of the image to be fused, selected at level L for fusion; the remaining parameter symbols have the same meanings as in formula (1);
Step Sc: using the fused contrast pyramid and bottom-layer image, reconstruct the fully sharp image according to the classical contrast pyramid algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210404367.0A CN103778616A (en) | 2012-10-22 | 2012-10-22 | Contrast pyramid image fusion method based on area |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103778616A true CN103778616A (en) | 2014-05-07 |
Family
ID=50570814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210404367.0A Pending CN103778616A (en) | 2012-10-22 | 2012-10-22 | Contrast pyramid image fusion method based on area |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103778616A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982522A (en) * | 2012-12-14 | 2013-03-20 | 东华大学 | Method for realizing real-time fusion of multi-focus microscopic images |
Non-Patent Citations (2)
Title |
---|
ZHU YAOHUA ET AL.: "Multi-focus image fusion via region mosaicing", 2012 International Conference on Systems and Information * |
QIANG ZANXIA: "Fusion of remote sensing images and its applications", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104463786B (en) * | 2014-12-03 | 2017-06-16 | 中国科学院自动化研究所 | A kind of mobile robot image split-joint method and device |
CN104463786A (en) * | 2014-12-03 | 2015-03-25 | 中国科学院自动化研究所 | Mobile robot figure stitching method and device |
CN104933687B (en) * | 2015-07-09 | 2018-01-23 | 武汉大学 | A kind of multiple dimensioned emergence algorithm of jointing line for considering region of variation |
CN104933687A (en) * | 2015-07-09 | 2015-09-23 | 武汉大学 | Seam line multiscale feather algorithm of considering changed area |
CN107025641A (en) * | 2017-04-28 | 2017-08-08 | 南京觅踪电子科技有限公司 | Image interfusion method based on Analysis of Contrast |
CN107292845A (en) * | 2017-06-26 | 2017-10-24 | 重庆名图医疗设备有限公司 | Based on the pyramidal dynamic image noise-reduction method of standard deviation and device |
CN107274372A (en) * | 2017-06-26 | 2017-10-20 | 重庆名图医疗设备有限公司 | Dynamic image Enhancement Method and device based on pyramid local contrast |
CN107292845B (en) * | 2017-06-26 | 2020-04-17 | 安健科技(重庆)有限公司 | Standard deviation pyramid-based dynamic image noise reduction method and device |
CN107274372B (en) * | 2017-06-26 | 2020-04-17 | 安健科技(重庆)有限公司 | Pyramid local contrast-based dynamic image enhancement method and device |
CN108581869A (en) * | 2018-03-16 | 2018-09-28 | 深圳市策维科技有限公司 | A kind of camera module alignment methods |
CN111401203A (en) * | 2020-03-11 | 2020-07-10 | 西安应用光学研究所 | Target identification method based on multi-dimensional image fusion |
CN111950612A (en) * | 2020-07-30 | 2020-11-17 | 中国科学院大学 | FPN-based weak and small target detection method for fusion factor |
CN112132771A (en) * | 2020-11-02 | 2020-12-25 | 西北工业大学 | Multi-focus image fusion method based on light field imaging |
CN112132771B (en) * | 2020-11-02 | 2022-05-27 | 西北工业大学 | Multi-focus image fusion method based on light field imaging |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103778616A (en) | Contrast pyramid image fusion method based on area | |
Bashir et al. | A comprehensive review of deep learning-based single image super-resolution | |
Engin et al. | Cycle-dehaze: Enhanced cyclegan for single image dehazing | |
CN107194872B (en) | Remote sensed image super-resolution reconstruction method based on perception of content deep learning network | |
Pei et al. | Does haze removal help cnn-based image classification? | |
Prasanna et al. | Automated crack detection on concrete bridges | |
Zheng et al. | Multisource image fusion method using support value transform | |
Rosenholtz | What your visual system sees where you are not looking | |
CN108334847A (en) | A kind of face identification method based on deep learning under real scene | |
CN106339998A (en) | Multi-focus image fusion method based on contrast pyramid transformation | |
CN112733950A (en) | Power equipment fault diagnosis method based on combination of image fusion and target detection | |
DE102009036474A1 (en) | Image data compression method, pattern model positioning method in image processing, image processing apparatus, image processing program and computer readable recording medium | |
CN101976444B (en) | Pixel type based objective assessment method of image quality by utilizing structural similarity | |
CN111144418B (en) | Railway track area segmentation and extraction method | |
CN109523470A (en) | A kind of depth image super resolution ratio reconstruction method and system | |
DE102018114005A1 (en) | Material testing of optical specimens | |
US20220366682A1 (en) | Computer-implemented arrangements for processing image having article of interest | |
CN109242834A (en) | It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method | |
CN109977834B (en) | Method and device for segmenting human hand and interactive object from depth image | |
CN111292336A (en) | Omnidirectional image non-reference quality evaluation method based on segmented spherical projection format | |
CN111179173B (en) | Image splicing method based on discrete wavelet transform and gradient fusion algorithm | |
CN114596316A (en) | Road image detail capturing method based on semantic segmentation | |
CN104036498A (en) | Fast evaluation method of OCT image quality based on layer by layer classification | |
CN110211064B (en) | Mixed degraded text image recovery method based on edge guide | |
CN106203269A (en) | A kind of based on can the human face super-resolution processing method of deformation localized mass and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20140507 |