CN109636765A - High dynamic range display method based on multiple-exposure image fusion - Google Patents
High dynamic range display method based on multiple-exposure image fusion
- Publication number
- CN109636765A (application number CN201811332568.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- exposure
- detail
- levels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/90—
- G06T5/94—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The present invention discloses a high dynamic range display method based on multiple-exposure image fusion, comprising an original-image input step, a multiple-exposure-image generation step, a human-eye region-of-interest information extraction step, a human-eye region-of-interest weight calculation step, a base-layer and detail-layer extraction step, and an image fusion step. The method first generates multiple exposure images with different exposure values from an original image, then extracts an image base layer and an image detail layer from each exposure image. The image base layers are fused with specific weight values, the image detail layers are fused with specific detail-enhancement coefficients, and finally the fused base layer and the fused detail layer undergo a second fusion to obtain the fused image, which has a better display effect.
Description
Technical field
The present invention relates to a high dynamic range display method based on multiple-exposure image fusion. The method enhances an image region by region: the human-eye regions of interest of the different exposure images are enhanced separately, and more detail is retained in those regions, improving the overall visual effect of the image.
Background technique
Digital cameras take pictures without film that must be developed, and have therefore long since replaced traditional film cameras. However, in images captured by digital cameras, highlights are easily overexposed and low-light regions easily turn dark, so many multiple-exposure fusion methods have been proposed in which the camera's image is processed digitally to obtain a high-dynamic-range display image.
Prior-art multiple-exposure fusion methods generate multiple exposure images by applying a suitable exposure function to a single image, and compute the weight of each exposure image around the mean value of the image.
However, each exposure image emphasizes a different region. A darker exposure image usually concerns the brightest region, such as the sky; conversely, the regions that the brightest exposure image needs to enhance are the details of the darker areas. Therefore, uniformly taking the mean, as in the prior art, cannot achieve a good display effect: the resulting image is often washed out or blurred.
Therefore, it is necessary to provide a high dynamic range display method based on multiple-exposure image fusion to solve the problems of the prior art.
Summary of the invention
The present invention provides a high dynamic range display method based on multiple-exposure image fusion, to solve the prior-art problem that the processed image is washed out or blurred.
The main object of the present invention is to provide a multiple-exposure-fusion high dynamic range display method comprising:
a multiple-exposure-image generation step, including using a suitable S-type function to generate multiple exposure images from an original image;
a human-eye region-of-interest information extraction step, including extracting, by an image saliency model, multiple human-eye regions of interest in each exposure image;
a human-eye region-of-interest weight calculation step, including separately calculating the weight value of each human-eye region of interest in each exposure image;
a base-layer and detail-layer extraction step, including extracting an image base layer and an image detail layer from each exposure image; and
an image fusion step, including fusing all of the image base layers to generate a fused base layer, fusing all of the image detail layers to generate a fused detail layer, and finally fusing the fused base layer and the fused detail layer to generate a fused image.
In an embodiment of the invention, the method further comprises: a computer providing step, including providing a computer; and an original-image input step, including inputting the original image into the computer; wherein the multiple-exposure-image generation step, the human-eye region-of-interest information extraction step, the human-eye region-of-interest weight calculation step, the base-layer and detail-layer extraction step, and the image fusion step are executed by computation of the computer.
In an embodiment of the invention, the multiple-exposure-fusion high dynamic range display method further comprises an output step, the output step including outputting, by the computer, the fused image to an external electronic device.
In an embodiment of the invention, the exposure values of the multiple exposure images are different from one another.
In an embodiment of the invention, the multiple exposure images are grayscale images.
In an embodiment of the invention, the base-layer and detail-layer extraction step uses principal component analysis to extract the image base layer and the image detail layer from each exposure image.
In an embodiment of the invention, the image fusion step comprises an image-base-layer fusion step, including performing, by computation of the computer, weighted fusion of the multiple image base layers according to the weight value of each human-eye region of interest in each exposure image, to generate the fused base layer.
In an embodiment of the invention, the image fusion step further comprises an image-detail-layer fusion step, including fusing, by computation of the computer, the multiple image detail layers to generate the fused detail layer, wherein the image-detail-layer fusion step first generates multiple detail-enhancement coefficients and then fuses the multiple image detail layers through a detail-layer fusion formula.
In an embodiment of the invention, the image fusion step further comprises a fused-image generation step, including fusing, by computation of the computer, the fused base layer and the fused detail layer to generate the fused image.
In an embodiment of the invention, the detail-enhancement coefficient = 1 + sqrt(std(L)/255 − std(B)/255), where L represents the gray value of an exposure image of the original image, B represents the gray value of the base layer, sqrt represents the square root, and std represents the standard deviation.
In an embodiment of the invention, the detail-layer fusion formula is: D = (a × D1 + b × D2 + c × D3) / (a + b + c), where D1, D2, D3 respectively represent the gray values of the multiple image detail layers, and a, b, c are the detail-enhancement coefficients.
In an embodiment of the invention, the fused base layer and the fused detail layer are fused into the fused image by linear superposition.
Compared with the prior art, the high dynamic range display method of the present invention first generates multiple exposure images with different exposure values from the original image, then extracts an image base layer and an image detail layer from each exposure image, fuses the multiple image base layers with specific weight values, fuses the multiple image detail layers with specific detail-enhancement coefficients, and finally performs a second fusion of the fused base layer and the fused detail layer to obtain the fused image. Because the fused image obtained by the method has its detail reinforced specifically in the human-eye regions of interest, the present invention achieves a better display effect than prior-art image enhancement methods that optimize according to the average value of the image.
To make the above content of the invention clearer and more comprehensible, preferred embodiments are described below in detail with reference to the accompanying drawings:
Brief description of the drawings
Fig. 1 is a block diagram of a computer to which the high dynamic range display method of the present invention is applicable.
Fig. 2 is a block diagram of the image processing flow of the method of the present invention.
Fig. 3 is a flowchart of the steps of the method of the present invention.
Fig. 4 is another flowchart of the steps of the method of the present invention.
Fig. 5 is a schematic diagram of an original image.
Fig. 6 is a schematic diagram of a low-exposure image of an embodiment of the original image.
Fig. 7 is a schematic diagram of a middle-exposure image of the embodiment of the original image.
Fig. 8 is a schematic diagram of a high-exposure image of the embodiment of the original image.
Fig. 9 is a schematic diagram of an exposure image of another embodiment of the original image.
Fig. 10 is a schematic diagram of the image base layer of the other embodiment of the original image.
Fig. 11 is a schematic diagram of the image detail layer of the other embodiment of the original image.
Detailed description of the embodiments
Referring to Fig. 1 and Fig. 3, the high dynamic range display method of the present invention may be executed by a computer 10, and the method comprises: a computer providing step S01, an original-image input step S02, a multiple-exposure-image generation step S03, a human-eye region-of-interest information extraction step S04, a human-eye region-of-interest weight calculation step S05, a base-layer and detail-layer extraction step S06, an image fusion step S07, and an output step S08.
Referring to Fig. 2 and Fig. 3, the computer providing step S01 includes providing the computer 10. The computer 10 may be a personal computer, a smartphone, a tablet computer, a smartwatch, or the like. The computer 10 includes at least a central processing unit 11, a memory 12, a storage 13, an input interface 14, and an output interface 15, which are electrically connected to one another. The memory 12 may be a dynamic random-access memory (DRAM). The storage 13 may be a hard disk drive (HDD) or a solid-state drive (SSD). The input interface 14 may be an electrical connector, such as a universal serial bus (USB) connector. The output interface 15 may be an electrical connector, such as a High-Definition Multimedia Interface (HDMI) connector, for outputting images to an external electronic device, such as a liquid crystal display panel.
Referring to Fig. 5, the original-image input step S02 includes inputting an original image OG into the computer 10. In embodiments of the present invention, the original image OG may be any picture; in the embodiment shown in Fig. 5, the original image OG is a landscape containing features such as sky, a distant view, and a close view.
Referring to Fig. 6 to Fig. 8, the multiple-exposure-image generation step S03 includes generating, by computation of the computer 10, multiple exposure images L1, L2, L3 using a suitable S-type function, as shown in Fig. 2. The multiple exposure images L1, L2, L3 are grayscale images with different exposure values, and may respectively be a low-exposure image L1, a middle-exposure image L2, and a high-exposure image L3.
The human-eye region-of-interest information extraction step S04 includes extracting, by computation of the computer 10, multiple human-eye regions of interest R1, R2, R3 in each of the exposure images L1, L2, L3 by a graph-based visual saliency (GBVS) model.
Here, the human-eye region of interest R1 of the lowest exposure is the bright sky region; the human-eye region of interest R3 of the highest exposure is the darkest close-view region; and the human-eye region of interest R2 of the middle exposure is the distant-view region.
The human-eye region-of-interest weight calculation step S05 includes separately calculating the regional average value Lmed of each exposure image L1, L2, L3, establishing Gaussian weighting functions for k = 1, 2, 3, and finally calculating the weight value Wk(i, j) of each human-eye region of interest in each exposure image L1, L2, L3. From this calculation, the weight values W1, W2, W3 of the multiple exposure images L1, L2, L3 are obtained.
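The Gaussian weighting formula itself is not spelled out in this text, but a common form assigns high weight to pixels whose intensity lies near the regional average. The following sketch is an illustration under that assumption only; the function shape, the value sigma = 0.2, and the normalization to [0, 1] intensities are all choices of this sketch, not values from the patent.

```python
import numpy as np

def gaussian_weight(exposure, region_mean, sigma=0.2):
    """Per-pixel Gaussian weight favouring pixels near the regional mean.
    Assumed form for illustration: w = exp(-(x - m)^2 / (2 sigma^2)),
    with intensities scaled to [0, 1]. Not the patent's exact formula."""
    x = exposure.astype(np.float64) / 255.0
    m = region_mean / 255.0
    return np.exp(-((x - m) ** 2) / (2.0 * sigma ** 2))
```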
The base-layer and detail-layer extraction step S06 includes extracting, by computation of the computer 10, image base layers B1, B2, B3 and image detail layers D1, D2, D3 from each exposure image L1, L2, L3 using principal component analysis (PCA), as shown in Fig. 2; the image detail layers D1, D2, D3 are obtained by subtracting the image base layers B1, B2, B3 from the corresponding exposure images of the original image OG. PCA is a technique understood by those skilled in the art and is not described further herein.
The base-layer and detail-layer extraction step S06 comprises a brightness extraction step, an image-base-layer generation step, and an image-detail-layer generation step.
The brightness extraction step includes extracting, by computation of the computer 10, the brightness values of the low-exposure image L1, the middle-exposure image L2, and the high-exposure image L3 generated from the original image OG, and performing PCA dimensionality reduction, keeping the first k eigenvalues whose cumulative contribution rate exceeds 95%.
The image-base-layer generation step includes reconstructing, by computation of the computer 10, the images with the inverse PCA transform; the reconstructed images are the image base layers B1, B2, B3.
The image-detail-layer generation step includes subtracting, by computation of the computer 10, the image base layers B1, B2, B3 from the exposure images L1, L2, L3 to obtain the image detail layers D1, D2, D3.
Referring to Fig. 9 to Fig. 11, which are schematic diagrams of associated images of another embodiment of the present invention: Fig. 9 shows an exposure image L of another original image OG, Fig. 10 shows the base layer B of the exposure image L, and Fig. 11 shows the detail layer D of the exposure image L.
Referring to Fig. 4, the image fusion step S07 includes fusing all image base layers B1, B2, B3 to generate a fused base layer CB, fusing all image detail layers D1, D2, D3 to generate a fused detail layer CD, and finally fusing the fused base layer CB and the fused detail layer CD to generate a fused image CG. In a preferred embodiment of the invention, the image fusion step S07 comprises an image-base-layer fusion step S07a, an image-detail-layer fusion step S07b, and a fused-image generation step S07c.
The image-base-layer fusion step S07a includes performing, by computation of the computer 10, weighted fusion of the multiple image base layers B1, B2, B3 according to the weight value Wk(i, j) of each human-eye region of interest in each exposure image L1, L2, L3, to generate the fused base layer CB. In a preferred embodiment, the image-base-layer fusion step S07a uses the following base-layer fusion formula:
CB = B1 × W1 + B2 × W2 + B3 × W3.
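A minimal sketch of the weighted base-layer fusion above. Normalizing the weights per pixel so they sum to one is an assumption added here to keep the result in the display range; the patent's formula states only the weighted sum.

```python
import numpy as np

def fuse_base_layers(bases, weights, eps=1e-8):
    """Weighted fusion of base layers B1..B3 with per-pixel weights
    W1..W3: CB = sum(Bk * Wk). The per-pixel normalization of the
    weights (and eps to avoid division by zero) is an assumption
    of this sketch."""
    W = np.stack([w.astype(np.float64) for w in weights])
    W = W / (W.sum(axis=0) + eps)
    B = np.stack([b.astype(np.float64) for b in bases])
    return (B * W).sum(axis=0)
```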
The image-detail-layer fusion step S07b includes fusing, by computation of the computer 10, the multiple image detail layers D1, D2, D3 to generate the fused detail layer CD. In a preferred embodiment, the image-detail-layer fusion step S07b first generates multiple detail-enhancement coefficients a, b, c, and then fuses the multiple image detail layers D1, D2, D3 through a detail-layer fusion formula.
The detail-enhancement coefficient a, b, or c is determined by the difference between an exposure image L1, L2, L3 of the original image OG and the corresponding image base layer B1, B2, B3: the larger the difference, the stronger the detail enhancement. The formula applied is as follows:
The detail-enhancement coefficient a, b, or c = 1 + sqrt(std(L)/255 − std(B)/255), where L represents the gray value of an exposure image L1, L2, L3 of the original image OG, B represents the gray value of the corresponding base layer, sqrt represents the square root (Square Root), and std represents the standard deviation (Standard Deviation). The detail-enhancement coefficients a, b, c may also be called detail adaptation coefficients.
The detail-layer fusion formula is as follows:
CD = (a × D1 + b × D2 + c × D3) / (a + b + c), where D1, D2, D3 respectively represent the gray values of the image detail layers D1, D2, D3, and a, b, c are the detail-enhancement coefficients.
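The detail-enhancement coefficients and the detail-layer fusion formula can be sketched together as below. Clipping the standard-deviation difference at zero before taking the square root is a defensive assumption of this sketch, not something the patent states.

```python
import numpy as np

def detail_coeff(L, B):
    """Detail-enhancement coefficient
    a = 1 + sqrt(std(L)/255 - std(B)/255), per the patent's formula.
    The difference is clipped at 0 here (an assumption of this sketch)
    to guard against a negative square-root operand."""
    d = max(np.std(L) / 255.0 - np.std(B) / 255.0, 0.0)
    return 1.0 + np.sqrt(d)

def fuse_detail_layers(details, coeffs):
    """Fused detail layer: D = (a*D1 + b*D2 + c*D3) / (a + b + c)."""
    num = sum(a * D for a, D in zip(coeffs, details))
    return num / sum(coeffs)
```

Because the base layer is a smoothed version of the exposure image, std(L) is normally at least std(B), so each coefficient is at least 1 and grows when an exposure image carries more detail than its base.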
The fused-image generation step S07c includes fusing, by computation of the computer 10, the fused base layer CB and the fused detail layer CD to generate a fused image CG. In a preferred embodiment, the fused base layer CB and the fused detail layer CD are fused by linear superposition.
The output step S08 includes outputting, by the computer 10, the fused image CG to an external electronic device, for example a display.
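The linear superposition of step S07c can be sketched as a simple addition of the two fused layers. Clipping back to the 8-bit display range is an assumption of this sketch; the patent says only "linear superposition".

```python
import numpy as np

def compose_hdr(fused_base, fused_detail):
    """Second-stage fusion by linear superposition: CG = CB + CD,
    clipped to the 8-bit range (the clipping is an assumption of
    this sketch, added so the result is displayable)."""
    out = np.asarray(fused_base, dtype=np.float64) + np.asarray(fused_detail, dtype=np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)
```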
Compared with the prior art, the high dynamic range display method of the present invention first generates multiple exposure images L1, L2, L3 with different exposure values from the original image OG, then extracts image base layers B1, B2, B3 and image detail layers D1, D2, D3 from each exposure image L1, L2, L3, fuses the multiple image base layers B1, B2, B3 with specific weight values, fuses the multiple image detail layers D1, D2, D3 with specific detail-enhancement coefficients, and finally performs a second fusion of the fused base layer CB and the fused detail layer CD to obtain the fused image CG. Because the fused image obtained by the method has its detail reinforced specifically in the human-eye regions of interest R1, R2, R3, the present invention achieves a better display effect than prior-art image enhancement methods that optimize according to the average value of the image. In addition, the method of the present invention has the following advantages:
1. The present invention performs region-by-region enhancement based on the human visual system (HVS), separately enhancing the human-eye regions of interest of the different exposure images and retaining more image detail.
2. The present invention generates the base layers and detail layers by PCA, which is faster than guided or bilateral filtering with essentially the same effect.
3. The adaptive detail-enhancement coefficient of the present invention is superior to the fixed empirical amplification coefficient of conventional methods: it balances the gap between the original image OG and the image detail layers D1, D2, D3, effectively avoiding the influence of noise amplification and insufficient detail enhancement on the image detail layers D1, D2, D3.
Claims (12)
1. A high dynamic range display method of multiple-exposure fusion, characterized in that the method comprises:
a multiple-exposure-image generation step, including using a suitable S-type function to generate multiple exposure images from an original image;
a human-eye region-of-interest information extraction step, including extracting, by an image saliency model, multiple human-eye regions of interest in each exposure image;
a human-eye region-of-interest weight calculation step, including separately calculating the weight value of each human-eye region of interest in each exposure image;
a base-layer and detail-layer extraction step, including extracting an image base layer and an image detail layer from each exposure image; and
an image fusion step, including fusing all of the image base layers to generate a fused base layer, fusing all of the image detail layers to generate a fused detail layer, and finally fusing the fused base layer and the fused detail layer to generate a fused image.
2. The high dynamic range display method of multiple-exposure fusion as claimed in claim 1, characterized in that the method further comprises:
a computer providing step, including providing a computer; and
an original-image input step, including inputting the original image into the computer;
wherein the multiple-exposure-image generation step, the human-eye region-of-interest information extraction step, the human-eye region-of-interest weight calculation step, the base-layer and detail-layer extraction step, and the image fusion step are executed by computation of the computer.
3. The high dynamic range display method of multiple-exposure fusion as claimed in claim 2, characterized in that: the method further comprises an output step, and the output step includes outputting, by the computer, the fused image to an external electronic device.
4. The high dynamic range display method of multiple-exposure fusion as claimed in claim 1, characterized in that: the exposure values of the multiple exposure images are different from one another.
5. The high dynamic range display method of multiple-exposure fusion as claimed in claim 1, characterized in that: the multiple exposure images are grayscale images.
6. The high dynamic range display method of multiple-exposure fusion as claimed in claim 1, characterized in that: the base-layer and detail-layer extraction step uses principal component analysis to extract the image base layer and the image detail layer from each exposure image.
7. The high dynamic range display method of multiple-exposure fusion as claimed in claim 2, characterized in that the image fusion step comprises: an image-base-layer fusion step, including performing, by computation of the computer, weighted fusion of the multiple image base layers according to the weight value of each human-eye region of interest in each exposure image, to generate the fused base layer.
8. The high dynamic range display method of multiple-exposure fusion as claimed in claim 7, characterized in that the image fusion step further comprises: an image-detail-layer fusion step, including fusing, by computation of the computer, the multiple image detail layers to generate the fused detail layer, wherein the image-detail-layer fusion step first generates multiple detail-enhancement coefficients and then fuses the multiple image detail layers through a detail-layer fusion formula.
9. The high dynamic range display method of multiple-exposure fusion as claimed in claim 8, characterized in that the image fusion step further comprises: a fused-image generation step, including fusing, by computation of the computer, the fused base layer and the fused detail layer to generate the fused image.
10. The high dynamic range display method of multiple-exposure fusion as claimed in claim 8, characterized in that: the detail-enhancement coefficient = 1 + sqrt(std(L)/255 − std(B)/255), where L represents the gray value of an exposure image of the original image, B represents the gray value of the base layer, sqrt represents the square root, and std represents the standard deviation.
11. The high dynamic range display method of multiple-exposure fusion as claimed in claim 8, characterized in that: the detail-layer fusion formula is: D = (a × D1 + b × D2 + c × D3) / (a + b + c), where D1, D2, D3 respectively represent the gray values of the multiple image detail layers, and a, b, c are the detail-enhancement coefficients.
12. The high dynamic range display method of multiple-exposure fusion as claimed in claim 1, characterized in that: the fused base layer and the fused detail layer are fused into the fused image by linear superposition.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811332568.8A CN109636765B (en) | 2018-11-09 | 2018-11-09 | High dynamic display method based on image multiple exposure fusion |
PCT/CN2019/072435 WO2020093600A1 (en) | 2018-11-09 | 2019-01-18 | Highly dynamic display method based on multi-exposure fusion of images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811332568.8A CN109636765B (en) | 2018-11-09 | 2018-11-09 | High dynamic display method based on image multiple exposure fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109636765A true CN109636765A (en) | 2019-04-16 |
CN109636765B CN109636765B (en) | 2021-04-02 |
Family
ID=66067647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811332568.8A Active CN109636765B (en) | 2018-11-09 | 2018-11-09 | High dynamic display method based on image multiple exposure fusion |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109636765B (en) |
WO (1) | WO2020093600A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110087003A (en) * | 2019-04-30 | 2019-08-02 | 深圳市华星光电技术有限公司 | More exposure image fusion methods |
CN110602384A (en) * | 2019-08-27 | 2019-12-20 | 维沃移动通信有限公司 | Exposure control method and electronic device |
CN111898532A (en) * | 2020-07-30 | 2020-11-06 | 杭州海康威视数字技术股份有限公司 | Image processing method and device, electronic equipment and monitoring system |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113610861B (en) * | 2021-06-21 | 2023-11-14 | 重庆海尔制冷电器有限公司 | Food image processing method in refrigeration equipment, refrigeration equipment and readable storage medium |
CN113628141B (en) * | 2021-08-18 | 2023-11-28 | 上海磐启微电子有限公司 | HDR detail enhancement method based on high-low exposure image fusion |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110176024A1 (en) * | 2010-01-15 | 2011-07-21 | Samsung Electronics Co., Ltd. | Image Fusion Apparatus and Method |
US20130070965A1 (en) * | 2011-09-21 | 2013-03-21 | Industry-University Cooperation Foundation Sogang University | Image processing method and apparatus |
CN103247036A (en) * | 2012-02-10 | 2013-08-14 | 株式会社理光 | Multiple-exposure image fusion method and device |
CN104077759A (en) * | 2014-02-28 | 2014-10-01 | 西安电子科技大学 | Multi-exposure image fusion method based on color perception and local quality factors |
CN105809641A (en) * | 2016-03-09 | 2016-07-27 | 北京理工大学 | Exposure compensation and edge enhancement method of defogged image |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105279746B (en) * | 2014-05-30 | 2018-01-26 | 西安电子科技大学 | A kind of more exposure image fusion methods based on bilateral filtering |
CN106815827A (en) * | 2017-01-18 | 2017-06-09 | 聚龙智瞳科技有限公司 | Image interfusion method and image fusion device based on Bayer format |
CN107220956A (en) * | 2017-04-18 | 2017-09-29 | 天津大学 | A kind of HDR image fusion method of the LDR image based on several with different exposures |
- 2018
  - 2018-11-09 CN CN201811332568.8A patent/CN109636765B/en active Active
- 2019
  - 2019-01-18 WO PCT/CN2019/072435 patent/WO2020093600A1/en active Application Filing
Non-Patent Citations (3)
Title |
---|
MARK A. ROBERTSON et al.: "Dynamic range improvement through multiple exposures", IEEE * |
TSUN-HSIEN WANG et al.: "Pseudo-Multiple-Exposure-Based Tone Fusion With Local Region Adjustment", IEEE * |
YANG Bin et al.: "High dynamic range exposure algorithm and implementation for digital video cameras", Chinese Journal of Sensors and Actuators * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110087003A (en) * | 2019-04-30 | 2019-08-02 | Shenzhen China Star Optoelectronics Technology Co., Ltd. | Multi-exposure image fusion method |
CN110602384A (en) * | 2019-08-27 | 2019-12-20 | Vivo Mobile Communication Co., Ltd. | Exposure control method and electronic device |
CN110602384B (en) * | 2019-08-27 | 2022-03-29 | Vivo Mobile Communication Co., Ltd. | Exposure control method and electronic device |
CN111898532A (en) * | 2020-07-30 | 2020-11-06 | Hangzhou Hikvision Digital Technology Co., Ltd. | Image processing method and device, electronic equipment and monitoring system |
Also Published As
Publication number | Publication date |
---|---|
WO2020093600A1 (en) | 2020-05-14 |
CN109636765B (en) | 2021-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109636765A (en) | High-dynamic-range display method based on multi-exposure image fusion | |
Zhang et al. | Dual illumination estimation for robust exposure correction | |
Wang et al. | Pseudo-multiple-exposure-based tone fusion with local region adjustment | |
Gharbi et al. | Deep bilateral learning for real-time image enhancement | |
Lee et al. | Deep chain hdri: Reconstructing a high dynamic range image from a single low dynamic range image | |
Song et al. | Probabilistic exposure fusion | |
Eilertsen et al. | A comparative review of tone‐mapping algorithms for high dynamic range video | |
US20210272251A1 (en) | System and Method for Real-Time Tone-Mapping | |
Shen et al. | QoE-based multi-exposure fusion in hierarchical multivariate Gaussian CRF | |
CN103843032A (en) | Image processing for HDR images | |
Huang et al. | A new hardware-efficient algorithm and reconfigurable architecture for image contrast enhancement | |
CN112449169B (en) | Method and apparatus for tone mapping | |
Yuan et al. | Single image dehazing via NIN-DehazeNet | |
CN109219833B (en) | Enhancing edges in an image using depth information | |
US10445865B1 (en) | Method and apparatus for converting low dynamic range video to high dynamic range video | |
TW200824424A (en) | Adjusting apparatus for enhancing the contrast of image and method thereof | |
Sandoub et al. | A low‐light image enhancement method based on bright channel prior and maximum colour channel | |
Masood et al. | Automatic Correction of Saturated Regions in Photographs using Cross‐Channel Correlation | |
Hsia et al. | High‐performance high dynamic range image generation by inverted local patterns | |
Song et al. | Naturalness index for a tone-mapped high dynamic range image | |
Vanmali et al. | Low complexity detail preserving multi-exposure image fusion for images with balanced exposure | |
Wang et al. | Learning a self‐supervised tone mapping operator via feature contrast masking loss | |
Lo et al. | High dynamic range (hdr) video image processing for digital glass | |
Li et al. | A novel detail weighted histogram equalization method for brightness preserving image enhancement based on partial statistic and global mapping model | |
Qu et al. | Algorithm of multiexposure image fusion with detail enhancement and ghosting removal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||

Address after: 9-2 Tangming Avenue, Guangming New District, Shenzhen City, Guangdong Province
Applicant after: TCL Huaxing Photoelectric Technology Co., Ltd.
Address before: 9-2 Tangming Avenue, Guangming New District, Shenzhen City, Guangdong Province
Applicant before: Shenzhen China Star Optoelectronics Technology Co., Ltd.
GR01 | Patent grant | ||