CN108665436A - Multi-focus image fusion method and system based on gray mean reference - Google Patents
Multi-focus image fusion method and system based on gray mean reference
- Publication number
- CN108665436A (Application CN201810440930.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- gray
- source images
- value
- average
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a multi-focus image fusion method and system based on gray mean reference, which can significantly improve the contrast of the fused image. The main steps are: compute the gray mean of each of the two registered source images A and B; then, for every pixel, determine the gray value of the fused image according to how far the pixel's gray value in each source image lies from that image's gray mean; finally, output the fused image. Compared with traditional image fusion methods, the invention performs well in improving both the sharpness and the contrast of the fused image, while its running time remains of the same order of magnitude as traditional methods, making it a fast fusion algorithm.
Description
Technical field
The invention belongs to the field of digital image processing, and in particular relates to a multi-focus image fusion method and system based on gray mean reference.
Background technology
Image sharpness is a physical quantity describing an image's ability to express detail. In order to reflect the information of an external scene more completely and truthfully, one always wants an image in which all targets in the scene are clear. When a visible-light imaging system images an external scene, an object located at the focal plane and well focused appears clearly on the image plane of the system. However, because the focusing range of an imaging system is limited and different targets in the scene lie at different distances from the system, it is difficult to keep all target regions in the scene sharply imaged: targets beyond a certain distance before and behind the focal plane appear blurred to varying degrees. For application fields such as machine vision, target classification, and pattern recognition, blurred images reduce recognition rates and cause inaccurate classification, and in serious cases can even lead to wrong final decisions.
Because of the difference in focus point, each multi-focus image has different clear regions and blurred regions. By extracting and merging the clear target information from the different source images, a fused image in which all objects in the scene are clear can be obtained.
The most widely used image fusion methods at present are simple pixel-level fusion algorithms based on the spatial domain. Their basic principle is to combine the corresponding pixels of the source images directly, by selection (taking the larger or the smaller value), averaging, or weighted averaging, into a new image. Such methods are simple and efficient, and are suitable for streaming-media video processing with high rate requirements. The main simple pixel-level fusion methods are: pixel weighted-average fusion, pixel gray-value minimum selection, and pixel gray-value maximum selection. The present invention proposes a spatial-domain image fusion algorithm that can significantly improve the contrast of the fused image: a multi-focus image fusion method based on gray mean reference.
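The three simple pixel-level baselines named above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the patent: the function names and the default 0.5/0.5 weights are assumptions.

```python
import numpy as np

def fuse_weighted_average(a: np.ndarray, b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Pixel weighted-average fusion: F = w*A + (1-w)*B, rounded back to the input dtype."""
    return (w * a.astype(np.float64) + (1.0 - w) * b.astype(np.float64)).round().astype(a.dtype)

def fuse_pixel_max(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Gray-value maximum selection: take the larger of the two pixels at each position."""
    return np.maximum(a, b)

def fuse_pixel_min(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Gray-value minimum selection: take the smaller of the two pixels at each position."""
    return np.minimum(a, b)
```

These are the comparison methods used later in the embodiments (Tables 1–3); each operates independently on every pixel pair, which is why they run fast but cannot adapt to local focus.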
Invention content
The present invention is in order to solve the above-mentioned technical problem, it is proposed that a kind of multi-focus image fusion based on gray average reference
Method can significantly improve blending image contrast.
The technical solution adopted in the present invention is:A kind of multi-focus image fusing method based on gray average reference, should
Method merges the source images A after registration, B, and source images A, B are gray level image, and A, B ∈ RM×N, RM×NIt is that size is
The space of M × N, the fusion method include the following steps:
Step 1: input two source images A and B, both of size M × N pixels;
Step 2: compute the gray means of the two source images A and B; let μ(A) be the gray mean of source image A and μ(B) the gray mean of source image B;
Step 3: complete the image fusion according to the proposed fusion rule, i.e., according to the distance between the gray value of each pixel in the two source images A, B and the gray mean of its source image, implemented as follows:
Step 3.1: let f(i, j) denote the gray value of an image at row i, column j, with i ∈ [0, M), j ∈ [0, N); set i = 0, j = 0, begin traversing from the upper-left corner of the image, and read A(i, j) and B(i, j);
Step 3.2: determine the gray value of the fused image F at point (i, j) according to the fusion rule:
F(i, j) = A(i, j), if |A(i, j) − μ(A)| ≥ |B(i, j) − μ(B)|;
F(i, j) = B(i, j), otherwise.
Here F(i, j) is the pixel value of the fused image at row i, column j; A(i, j) and B(i, j) are the gray values at row i, column j in images A and B; and μ(A) and μ(B) are the gray means of images A and B. The formula says that the value of the fused image F at (i, j) is the pixel value of whichever source image, A or B, lies farther from its own gray mean at that point;
Step 3.3: judge whether all pixels of source images A and B have been traversed; if not, repeat step 3.2; if so, continue to step 4;
Step 4: output the fused image F.
Further, in step 2 the gray means μ(A) and μ(B) of the two source images A and B are computed with the following formula:
μ = (1/(M·N)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} f(i, j),
where f(i, j) denotes the gray value of the corresponding image at (i, j), and M and N are the width and height of source images A and B.
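Steps 1–4 can be sketched as a vectorized routine. This is a minimal sketch, not the patent's reference implementation: the function and variable names are my own, and ties |A(i, j) − μ(A)| = |B(i, j) − μ(B)| are resolved in favor of A, a choice the patent text does not specify.

```python
import numpy as np

def fuse_gray_mean_reference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fuse two registered gray-level images of equal size M x N.

    At each pixel, keep the source value lying farther from its image's
    gray mean: F(i,j) = A(i,j) if |A(i,j) - mu(A)| >= |B(i,j) - mu(B)|,
    else B(i,j).
    """
    if a.shape != b.shape:
        raise ValueError("source images must have the same size")
    af = a.astype(np.float64)
    bf = b.astype(np.float64)
    mu_a = af.mean()  # gray mean mu(A) of source image A
    mu_b = bf.mean()  # gray mean mu(B) of source image B
    take_a = np.abs(af - mu_a) >= np.abs(bf - mu_b)
    return np.where(take_a, a, b)
```

The pixel traversal of steps 3.1–3.3 is replaced here by a single `np.where`, which applies the same per-pixel rule over the whole image at once.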
The present invention also provides a multi-focus image fusion system based on gray mean reference, including the following modules:
an input module for inputting two source images A and B, both of size M × N pixels, where A and B are gray-level images;
a gray-mean computing module for computing the gray means of the two source images A and B, with μ(A) the gray mean of source image A and μ(B) the gray mean of source image B;
an image fusion module for completing the image fusion according to the distance between the gray value of each pixel in the two source images A, B and the gray mean of its source image, which includes the following units:
a traversal unit: let f(i, j) denote the gray value of an image at row i, column j, with i ∈ [0, M), j ∈ [0, N); set i = 0, j = 0, begin traversing from the upper-left corner of the image, and read A(i, j) and B(i, j);
a fusion unit, which determines the gray value of the fused image F at point (i, j) according to the fusion rule:
F(i, j) = A(i, j), if |A(i, j) − μ(A)| ≥ |B(i, j) − μ(B)|;
F(i, j) = B(i, j), otherwise,
where F(i, j) is the pixel value of the fused image at row i, column j; A(i, j) and B(i, j) are the gray values at row i, column j in images A and B; and μ(A) and μ(B) are the gray means of images A and B; i.e., the value of the fused image F at (i, j) is the pixel value of whichever source image lies farther from its gray mean;
a judging unit for judging whether all pixels of source images A and B have been traversed;
an output module for outputting the fused image F.
Further, in the gray-mean computing module, the gray means μ(A) and μ(B) of the two source images A and B are computed as
μ = (1/(M·N)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} f(i, j),
where f(i, j) denotes the gray value of the corresponding image at (i, j), and M and N are the width and height of source images A and B.
The beneficial effects of the invention are: a new fast multi-focus image fusion method is proposed, which completes the fusion according to the distance between the gray value of each pixel in the two source images and the gray mean of its source image. It can fuse images quickly and efficiently, and can significantly improve the sharpness and contrast of the fused image.
Description of the drawings
Fig. 1: flow chart of the embodiment of the present invention;
Fig. 2: the source images to be fused in the embodiments; (a) and (b) are the source images ManByTheWindow of embodiment 1; (c) and (d) are the source images CameraAndClock of embodiment 2; (e) and (f) are the source images PlayMobile of embodiment 3;
Fig. 3: fusion results of embodiment 1: (a) weighted-average fusion result; (b) pixel-value-maximum fusion result; (c) pixel-value-minimum fusion result; (d) result of the proposed multi-focus image fusion method based on gray mean reference;
Fig. 4: fusion results of embodiment 2, with panels (a)–(d) corresponding to the same four methods as in Fig. 3;
Fig. 5: fusion results of embodiment 3, with panels (a)–(d) corresponding to the same four methods as in Fig. 3.
Specific implementation mode
To make it easy for those of ordinary skill in the art to understand and implement the present invention, it is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the implementation examples described here are intended only to illustrate and explain the present invention, not to limit it.
The present invention, a multi-focus image fusion method based on gray mean reference, can be used in fields related to digital image processing.
Referring to Fig. 1, the technical solution adopted by the present invention is a multi-focus image fusion method based on gray mean reference, including the following steps:
Step 1: input two source images A and B, both of size M × N pixels; f(i, j) denotes the gray value of an image at row i, column j, with i ∈ [0, M), j ∈ [0, N).
Step 2: compute the gray means of the two source images A and B, where μ(A) is the gray mean of source image A and μ(B) is the gray mean of source image B, using the formula:
μ = (1/(M·N)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} f(i, j),
where f(i, j) denotes the gray value of the corresponding image at (i, j), and M and N are the width and height of source images A and B.
Step 3: complete the image fusion according to the proposed fusion rule, i.e., according to the distance between the gray value of each pixel in the two source images A, B and the gray mean of its source image. This proceeds as follows:
Step 3.1: set i = 0, j = 0, begin traversing from the upper-left corner of the image, and read A(i, j) and B(i, j).
Step 3.2: determine the gray value of the fused image F at point (i, j) according to the fusion rule:
F(i, j) = A(i, j), if |A(i, j) − μ(A)| ≥ |B(i, j) − μ(B)|;
F(i, j) = B(i, j), otherwise.
Here F(i, j) is the pixel value of the fused image at row i, column j; A(i, j) and B(i, j) are the gray values at row i, column j in images A and B; and μ(A) and μ(B) are the gray means of images A and B. The value of the fused image F at (i, j) is thus the pixel value of whichever source image lies farther from its gray mean.
Step 3.3: judge whether all pixels of source images A and B have been traversed; if not, repeat step 3.2; if so, continue to step 4.
Step 4: output the fused image F.
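Mirroring steps 3.1–3.3 literally, the same rule can also be written as an explicit pixel traversal. This is an illustrative sketch whose names are my own; as in the vectorized form, ties are resolved in favor of A, a convention the patent does not fix.

```python
import numpy as np

def fuse_by_traversal(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Step-by-step variant: traverse from the upper-left corner (i = 0, j = 0)."""
    m, n = a.shape                 # images are M x N pixels
    mu_a = float(a.mean())         # Step 2: gray mean mu(A)
    mu_b = float(b.mean())         # Step 2: gray mean mu(B)
    f = np.empty_like(a)           # fused image F
    for i in range(m):             # Steps 3.1-3.3: visit every pixel (i, j)
        for j in range(n):
            if abs(float(a[i, j]) - mu_a) >= abs(float(b[i, j]) - mu_b):
                f[i, j] = a[i, j]  # A lies farther from mu(A): keep A's pixel
            else:
                f[i, j] = b[i, j]  # otherwise keep B's pixel
    return f                       # Step 4: output F
```

In practice a vectorized implementation is preferable; this version only makes the control flow of the flow chart in Fig. 1 explicit.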
The embodiment of the present invention also provides a multi-focus image fusion system based on gray mean reference, including the following modules:
an input module for inputting two source images A and B, both of size M × N pixels, where A and B are gray-level images;
a gray-mean computing module for computing the gray means of the two source images A and B, with μ(A) the gray mean of source image A and μ(B) the gray mean of source image B, computed as
μ = (1/(M·N)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} f(i, j),
where f(i, j) denotes the gray value of the corresponding image at (i, j), and M and N are the width and height of source images A and B;
an image fusion module for completing the image fusion according to the distance between the gray value of each pixel in the two source images A, B and the gray mean of its source image, which includes the following units:
a traversal unit, which sets i = 0, j = 0, begins traversing from the upper-left corner of the image, and reads A(i, j) and B(i, j);
a fusion unit, which determines the gray value of the fused image F at point (i, j) according to the fusion rule:
F(i, j) = A(i, j), if |A(i, j) − μ(A)| ≥ |B(i, j) − μ(B)|;
F(i, j) = B(i, j), otherwise,
i.e., the value of the fused image F at (i, j) is the pixel value of whichever source image lies farther from its gray mean;
a judging unit for judging whether all pixels of source images A and B have been traversed;
an output module for outputting the fused image F.
The following embodiments, provided by the inventors, further explain the technical solution of the present invention.
Embodiment 1:
Following the technical solution of the present invention, the first group of images is fused. The source gray images to be fused are Fig. 2 (a) and (b), where (a) is the left-focused image and (b) is the right-focused image, both of size 765 × 510 pixels. Four fusion algorithms (weighted-average fusion, pixel-value-maximum selection, pixel-value-minimum selection, and the proposed multi-focus image fusion method based on gray mean reference) are applied, giving the fusion results in Fig. 3 (a), (b), (c), and (d) respectively. The fused images of the different methods are evaluated for quality, with the results shown in Table 1.
Table 1: comparison of objective indicators of the ManByTheWindow fusion results
As can be seen from Table 1, for the information-entropy indicator the pixel-maximum method is highest, but the entropies of the other three algorithms differ little from it; all four algorithms are above 7.3, i.e., their abilities to retain the information of the original images are comparable. For the average gradient, also called sharpness, which reflects an image's ability to express detail contrast and texture variation, the proposed fast fusion method is highest, reaching 4.09, with a clear gap over the other three algorithms: an improvement of 43.5% over the weighted-average method, 41% over the pixel-maximum method, and 22.1% over the pixel-minimum method. The proposed fusion algorithm thus brings a large improvement in sharpness. For the standard deviation (a larger standard deviation means a more dispersed gray-level distribution and higher image contrast), the proposed fast fusion method is again the largest, reaching 80.36, an average improvement of 3.9% over the other three methods, showing that the proposed algorithm can improve the contrast of the fused image. In addition, the running time of the method of the present invention is of the same order of magnitude as that of traditional image fusion methods, making it a fast fusion algorithm.
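The three objective indicators used in Tables 1–3 have widely used standard definitions, sketched below. The patent does not give its exact formulas, so treat these as assumptions; in particular, the average gradient here uses forward differences, one of several common conventions.

```python
import numpy as np

def information_entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits) of the 8-bit gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def average_gradient(img: np.ndarray) -> float:
    """Mean gradient magnitude ('sharpness') via forward differences."""
    g = img.astype(np.float64)
    dx = g[:-1, 1:] - g[:-1, :-1]     # horizontal forward difference
    dy = g[1:, :-1] - g[:-1, :-1]     # vertical forward difference
    return float(np.sqrt((dx * dx + dy * dy) / 2.0).mean())

def std_deviation(img: np.ndarray) -> float:
    """Standard deviation of gray levels; larger means higher contrast."""
    return float(img.astype(np.float64).std())
```

Higher entropy indicates more retained information, higher average gradient indicates better detail and texture expression, and higher standard deviation indicates higher contrast, matching how the indicators are read in the tables.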
Embodiment 2:
Following the technical solution of the present invention, the second group of images is fused. The source gray images to be fused are Fig. 2 (c) and (d), where Fig. 2 (c) is the upper-focused image and Fig. 2 (d) is the lower-focused image, both of size 765 × 510 pixels. The same four fusion algorithms (weighted-average fusion, pixel-value-maximum selection, pixel-value-minimum selection, and the proposed multi-focus image fusion method based on gray mean reference) give the fusion results in Fig. 4 (a), (b), (c), and (d) respectively. The fused images of the different methods are evaluated for quality, with the results shown in Table 2.
Table 2: comparison of objective indicators of the CameraAndClock fusion results
As can be seen from Table 2, for the information-entropy indicator the proposed algorithm is highest, but the entropies of the other three algorithms differ little from it; all four algorithms are above 7.0, i.e., their abilities to retain the information of the original images are comparable. For the average gradient (sharpness), the proposed fast fusion method is highest, reaching 4.24, with a clear gap over the other three algorithms: an improvement of 61% over the weighted-average method, 48.8% over the pixel-maximum method, and 18.2% over the pixel-minimum method, again a large improvement in sharpness. For the standard deviation (a larger standard deviation means a more dispersed gray-level distribution and higher image contrast), the proposed fast fusion method is the largest, reaching 44.61, an average improvement of 15.2% over the other three methods, showing that the proposed algorithm can improve the contrast of the fused image. In addition, the running time of the method of the present invention is of the same order of magnitude as that of traditional image fusion methods, making it a fast fusion algorithm.
Embodiment 3:
Following the technical solution of the present invention, the third group of images is fused. The source gray images to be fused are Fig. 2 (e) and (f), where Fig. 2 (e) is the front-focused image and Fig. 2 (f) is the back-focused image, both of size 765 × 510 pixels. The same four fusion algorithms (weighted-average fusion, pixel-value-maximum selection, pixel-value-minimum selection, and the proposed multi-focus image fusion method based on gray mean reference) give the fusion results in Fig. 5 (a), (b), (c), and (d) respectively. The fused images of the different methods are evaluated for quality, with the results shown in Table 3.
Table 3: comparison of objective indicators of the PlayMobile fusion results
As can be seen from Table 3, for the information-entropy indicator the proposed algorithm is highest, but the entropies of the other three algorithms differ little from it; all four algorithms are above 7.2, i.e., their abilities to retain the information of the original images are comparable. For the average gradient (sharpness), the proposed fast fusion method is highest, reaching 3.04, with a clear gap over the other three algorithms: an improvement of 40.7% over the weighted-average method, 28.3% over the pixel-maximum method, and 23.1% over the pixel-minimum method, again a large improvement in sharpness. For the standard deviation (a larger standard deviation means a more dispersed gray-level distribution and higher image contrast), the proposed fast fusion method is the largest, reaching 61.36, an average improvement of 5.2% over the other three methods, showing that the proposed algorithm can improve the contrast of the fused image. In addition, the running time of the method of the present invention is of the same order of magnitude as that of traditional image fusion methods, making it a fast fusion algorithm.
It should be understood that the parts not elaborated in this specification belong to the prior art.
It should also be understood that the above description of preferred embodiments is relatively detailed and should not therefore be considered a limitation on the scope of patent protection of the invention. Those skilled in the art may, under the inspiration of the present invention and without departing from the scope protected by the claims, make substitutions or variations, all of which fall within the protection scope of the present invention; the claimed scope of the invention is determined by the appended claims.
Claims (4)
1. A multi-focus image fusion method based on gray mean reference, the method fusing two registered source images A and B, where A and B are gray-level images and A, B ∈ R^(M×N), with R^(M×N) the space of images of size M × N, characterized by including the following steps:
Step 1: input two source images A and B, both of size M × N pixels;
Step 2: compute the gray means of the two source images A and B, with μ(A) the gray mean of source image A and μ(B) the gray mean of source image B;
Step 3: complete the image fusion according to the proposed fusion rule, i.e., according to the distance between the gray value of each pixel in the two source images A, B and the gray mean of its source image, implemented as follows:
Step 3.1: let f(i, j) denote the gray value of an image at row i, column j, with i ∈ [0, M), j ∈ [0, N); set i = 0, j = 0, begin traversing from the upper-left corner of the image, and read A(i, j) and B(i, j);
Step 3.2: determine the gray value of the fused image F at point (i, j) according to the fusion rule:
F(i, j) = A(i, j), if |A(i, j) − μ(A)| ≥ |B(i, j) − μ(B)|;
F(i, j) = B(i, j), otherwise,
where F(i, j) is the pixel value of the fused image at row i, column j; A(i, j) and B(i, j) are the gray values at row i, column j in images A and B; and μ(A) and μ(B) are the gray means of images A and B; i.e., the value of the fused image F at (i, j) is the pixel value of whichever source image lies farther from its gray mean;
Step 3.3: judge whether all pixels of source images A and B have been traversed; if not, repeat step 3.2; if so, continue to step 4;
Step 4: output the fused image F.
2. The multi-focus image fusion method based on gray mean reference according to claim 1, characterized in that: in step 2, the gray means μ(A) and μ(B) of the two source images A and B are computed as
μ = (1/(M·N)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} f(i, j),
where f(i, j) denotes the gray value of the corresponding image at (i, j), and M and N are the width and height of source images A and B.
3. A multi-focus image fusion system based on gray mean reference, characterized by including the following modules:
an input module for inputting two source images A and B, both of size M × N pixels, where A and B are gray-level images;
a gray-mean computing module for computing the gray means of the two source images A and B, with μ(A) the gray mean of source image A and μ(B) the gray mean of source image B;
an image fusion module for completing the image fusion according to the distance between the gray value of each pixel in the two source images A, B and the gray mean of its source image, which includes the following units:
a traversal unit: let f(i, j) denote the gray value of an image at row i, column j, with i ∈ [0, M), j ∈ [0, N); set i = 0, j = 0, begin traversing from the upper-left corner of the image, and read A(i, j) and B(i, j);
a fusion unit, which determines the gray value of the fused image F at point (i, j) according to the fusion rule:
F(i, j) = A(i, j), if |A(i, j) − μ(A)| ≥ |B(i, j) − μ(B)|;
F(i, j) = B(i, j), otherwise,
where F(i, j) is the pixel value of the fused image at row i, column j; A(i, j) and B(i, j) are the gray values at row i, column j in images A and B; and μ(A) and μ(B) are the gray means of images A and B; i.e., the value of the fused image F at (i, j) is the pixel value of whichever source image lies farther from its gray mean;
a judging unit for judging whether all pixels of source images A and B have been traversed;
an output module for outputting the fused image F.
4. The multi-focus image fusion system based on gray mean reference according to claim 3, characterized in that: in the gray-mean computing module, the gray means μ(A) and μ(B) of the two source images A and B are computed as
μ = (1/(M·N)) Σ_{i=0}^{M−1} Σ_{j=0}^{N−1} f(i, j),
where f(i, j) denotes the gray value of the corresponding image at (i, j), and M and N are the width and height of source images A and B.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810440930.7A CN108665436B (en) | 2018-05-10 | 2018-05-10 | Multi-focus image fusion method and system based on gray mean reference |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810440930.7A CN108665436B (en) | 2018-05-10 | 2018-05-10 | Multi-focus image fusion method and system based on gray mean reference |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108665436A (en) | 2018-10-16 |
CN108665436B (en) | 2021-05-04 |
Family
ID=63778309
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810440930.7A Active CN108665436B (en) | 2018-05-10 | 2018-05-10 | Multi-focus image fusion method and system based on gray mean reference |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108665436B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109767414A (en) * | 2019-01-18 | 2019-05-17 | 湖北工业大学 | A kind of multi-focus image fusing method based on gray scale median reference |
CN109785282A (en) * | 2019-01-22 | 2019-05-21 | 厦门大学 | A kind of multi-focus image fusing method |
CN109886903A (en) * | 2019-01-23 | 2019-06-14 | 湖北工业大学 | A kind of multi-focus image fusing method and system based on gray scale midrange reference |
CN115131350A (en) * | 2022-08-30 | 2022-09-30 | 南京木木西里科技有限公司 | Large-field-depth observation and surface topography analysis system |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101630405A (en) * | 2009-08-14 | 2010-01-20 | 重庆市勘测院 | Multi-focusing image fusion method utilizing core Fisher classification and redundant wavelet transformation |
CN101968883A (en) * | 2010-10-28 | 2011-02-09 | 西北工业大学 | Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics |
CN103632359A (en) * | 2013-12-13 | 2014-03-12 | 清华大学深圳研究生院 | Super-resolution processing method for videos |
US20140193086A1 (en) * | 2013-01-10 | 2014-07-10 | Nuctech Company Limited | Image processing methods and apparatuses |
CN104616274A (en) * | 2015-02-09 | 2015-05-13 | 内蒙古科技大学 | Algorithm for fusing multi-focusing image based on salient region extraction |
EP3015850A1 (en) * | 2013-06-27 | 2016-05-04 | Park Systems Corp. | Image acquiring method and image acquiring apparatus using same |
US20170206690A1 (en) * | 2016-01-20 | 2017-07-20 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
CN107194903A (en) * | 2017-04-25 | 2017-09-22 | 阜阳师范学院 | A kind of multi-focus image fusing method based on wavelet transformation |
CN107808373A (en) * | 2017-11-15 | 2018-03-16 | 北京奇虎科技有限公司 | Sample image synthetic method, device and computing device based on posture |
CN107909562A (en) * | 2017-12-05 | 2018-04-13 | 华中光电技术研究所(中国船舶重工集团公司第七七研究所) | A kind of Fast Image Fusion based on Pixel-level |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109767414A (en) * | 2019-01-18 | 2019-05-17 | Hubei University of Technology | Multi-focus image fusion method based on gray-scale median reference |
CN109785282A (en) * | 2019-01-22 | 2019-05-21 | Xiamen University | Multi-focus image fusion method |
CN109785282B (en) * | 2019-01-22 | 2021-03-26 | Xiamen University | Multi-focus image fusion method |
CN109886903A (en) * | 2019-01-23 | 2019-06-14 | Hubei University of Technology | Multi-focus image fusion method and system based on gray-scale midrange reference |
CN115131350A (en) * | 2022-08-30 | 2022-09-30 | Nanjing Mumuxili Technology Co., Ltd. | Large-depth-of-field observation and surface topography analysis system |
CN115131350B (en) * | 2022-08-30 | 2022-12-16 | Nanjing Mumuxili Technology Co., Ltd. | Large-depth-of-field observation and surface topography analysis system |
Also Published As
Publication number | Publication date |
---|---|
CN108665436B (en) | 2021-05-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107680054B (en) | Multi-source image fusion method in haze environment | |
CN108665436A (en) | Multi-focus image fusion method and system based on gray average reference | |
EP3496383A1 (en) | Image processing method, apparatus and device | |
Funt et al. | Estimating illumination chromaticity via support vector regression | |
CN107038719A (en) | Depth estimation method and system based on light field image angle domain pixel | |
CN107784669A (en) | Method for light-spot extraction and centroid determination | |
CN107369148A (en) | Multi-focus image fusion method based on improved SML and steerable filtering | |
WO2019096310A1 (en) | Light field image rendering method and system for creating see-through effects | |
CN110147816B (en) | Method and device for acquiring color depth image and computer storage medium | |
CN107506795A (en) | Image-matching-oriented local gray-level histogram feature descriptor construction method and image matching method | |
CN114926407A (en) | Steel surface defect detection system based on deep learning | |
CN110910319A (en) | Operation video real-time defogging enhancement method based on atmospheric scattering model | |
CN109001902A (en) | Microscope focusing method based on image co-registration | |
Feng et al. | Low-light image enhancement algorithm based on an atmospheric physical model | |
CN117037103A (en) | Road detection method and device | |
CN111915735A (en) | Depth optimization method for three-dimensional structure contour in video | |
CN109767414A (en) | Multi-focus image fusion method based on gray-scale median reference | |
CN113362390B (en) | Rapid circular target positioning video processing method based on ellipse detection | |
CN110969583A (en) | Image background processing method and system | |
Wang et al. | New insights into multi-focus image fusion: A fusion method based on multi-dictionary linear sparse representation and region fusion model | |
CN108830804B (en) | Virtual-real fusion fuzzy consistency processing method based on line spread function standard deviation | |
CN109886903A (en) | Multi-focus image fusion method and system based on gray-scale midrange reference | |
CN110120029A (en) | Image fusion method based on perceptual hashing | |
CN112070771B (en) | Adaptive threshold segmentation method and device based on HS channel and storage medium | |
CN115249358A (en) | Method and system for quantitatively detecting carbon particles in macrophages and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||