CN103413285A - HDR and HR image reconstruction method based on sample prediction - Google Patents

HDR and HR image reconstruction method based on sample prediction

Info

Publication number
CN103413285A
CN103413285A CN2013103330812A CN201310333081A
Authority
CN
China
Prior art keywords
sample
image
training
hdr
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013103330812A
Other languages
Chinese (zh)
Inventor
李晓光
李风慧
卓力
赵寒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN2013103330812A priority Critical patent/CN103413285A/en
Publication of CN103413285A publication Critical patent/CN103413285A/en
Pending legal-status Critical Current

Abstract

The invention discloses an HDR and HR image reconstruction method based on sample prediction. The algorithm is divided into an off-line training part and an on-line reconstruction part. The off-line training part comprises learning-sample collection and organization and classification-predictor training. Sample collection is carried out separately for three classes according to differences in scene brightness, and a clustering method is used to organize the sample files; a linear or nonlinear predictor learning method is then used to train the classification predictors. The on-line reconstruction part performs HDR-HR reconstruction on multiple input LDR-LR images with different exposure parameters. First, the background brightness of the scene is classified using the average of the input images; then, according to the brightness classification result, the classification predictors trained by the off-line part are used to predict high-dynamic-range and high-resolution detail information for the input images, and finally the high-frequency information is reconstructed. With this method, high-contrast scenes can be imaged effectively, and high-resolution and high-dynamic-range images can be reconstructed at the same time.

Description

HDR and HR image reconstruction method based on sample prediction
Technical field
The present invention relates to a digital image processing method, and in particular to an HDR and HR image reconstruction method based on sample prediction.
Background art
Many factors affect image quality, such as spatial resolution, brightness contrast and noise. A high-quality image should display a high-contrast scene effectively while also having high spatial resolution. For high-dynamic-range image display and spatial-resolution reconstruction, many researchers have carried out fruitful work, but the two problems have largely been studied independently. Existing super-resolution restoration from image sequences usually assumes that the exposure parameters of the frames are constant and that the camera response function and noise parameters are known; images acquired in the real world, however, often fail to satisfy these assumptions. Reconstructing high-dynamic-range and high-resolution images within a unified technical framework therefore has value both for the theory of image fusion and for practical applications. The invention can image high-contrast scenes effectively and can be used for processing night street-scene surveillance images or as a digital photograph processing tool for photographers.
Summary of the invention
The object of the invention is to reconstruct several low-resolution images of the same scene captured with different exposure parameters into a high-quality image that has both a high brightness dynamic range and high resolution.
To achieve the above object, the HDR and HR image reconstruction method based on sample prediction according to the present invention is characterized in that the method is divided into an off-line training part and an on-line reconstruction part. The off-line part comprises learning-sample collection, sample organization and classification-predictor training. The on-line reconstruction part performs HDR-HR (high-dynamic-range and high-resolution) reconstruction on several input LDR-LR (low-dynamic-range and low-resolution) images with different exposure parameters. First, the background brightness of the scene is classified using the average of the input images; then, according to the brightness classification result, the classification predictors trained in the off-line part are used to predict high-dynamic-range and high-resolution detail information for the input images; finally, the HDR-HR image is reconstructed.
The method comprises the following steps:
(1) Off-line training part
1) Collect training samples: the training images are several low-resolution images of the same scene with different exposure parameters together with one corresponding target image; pairs of corresponding LDR-LR and HDR-HR image information blocks are extracted from the training images as training samples. The collection procedure comprises:
1. According to differences in scene brightness, the samples are divided into three regions: bright, dark and medium-brightness;
2. Detail information computation: a bilateral filter is used to extract the detail information of each sample image as the sample data;
3. Sample collection: a sample is a pair of image information blocks extracted at corresponding positions of an input LDR-LR detail image and the target HDR-HR detail image. According to the scene brightness classification result, bright-region samples are extracted from the detail image of the short-exposure LDR-LR image and the corresponding target detail image; dark-region samples are extracted from the detail image of the long-exposure LDR-LR image and the corresponding target detail image; and medium-brightness samples are extracted from the detail image of the medium-exposure LDR-LR image and the corresponding target detail image;
Three corresponding training sample sets are thus extracted for the three brightness regions;
2) Organize the training samples: a clustering method is used to classify and organize the sample sets from the different background brightness regions;
3) Train one predictor for each clustered sample set, yielding three classification predictors for the bright, dark and medium-brightness regions.
(2) On-line reconstruction part:
1) Segment the scene brightness of the input image sequence to form three regions with different exposure characteristics;
2) Estimate the base layer of the input image sequence;
3) According to the brightness classification result of the input images, predict the detail information of the image block at each position with the trained classification predictors to obtain the detail layer of the scene;
4) Fuse the base-layer and detail-layer estimates by addition;
5) Apply the low-resolution image observation model as a constraint to the fused image to obtain the reconstruction result.
The beneficial technical effect of the invention is as follows: by learning from example samples, the mapping relationship between LDR-LR (Low Dynamic Range, Low Resolution) and HDR-HR (High Dynamic Range, High Resolution) images is established. Through strategies such as rational sample organization and class-wise training of the learning model, joint HDR-HR reconstruction is achieved without manual interaction. The method is divided into an off-line part and an on-line part. The off-line part mainly completes the collection and organization of example samples and the training of the classification predictors; the on-line part completes the joint reconstruction of the image using the classification predictors obtained from off-line training. Joint reconstruction of image high dynamic range and super-resolution is thus carried out within a framework based on sample-prediction learning, and the method can reconstruct a target image that has both high dynamic range and high resolution.
The invention is described in detail below with reference to the accompanying drawings and an example, so that its objects, features and advantages can be understood more deeply.
Brief description of the drawings:
Fig. 1 is a flowchart of the off-line training part;
Fig. 2 is a flowchart of the on-line reconstruction part;
Fig. 3 shows the sample extraction scheme (correspondence relationship): a) HDR-HR detail image; b) LDR-LR detail image.
Embodiments:
Embodiments of the present invention are described below with reference to the drawings of the specification.
The proposed method is divided into an off-line training part and an on-line reconstruction part. The flowchart of the off-line training part is shown in Fig. 1; it comprises learning-sample collection, sample organization and classification-predictor training. Sample collection is carried out separately for three classes according to differences in scene brightness, and a clustering method is used to organize the sample files. A linear or nonlinear predictor learning method is then used to train the classification predictors.
The flowchart of the on-line reconstruction part is shown in Fig. 2: HDR-HR reconstruction is performed on three input LDR-LR images with different exposure parameters. First, the background brightness of the scene is classified using the average of the input images; then, according to the brightness classification result, the classification predictors trained in the off-line part are used to predict high-dynamic-range and high-resolution detail information for the input images, and finally the high-frequency information is reconstructed.
The method is described in detail below with an example.
(1) Off-line training part
The training images are chosen from several groups of HDR scene images. Each HDR training scene consists of three LDR-LR images (an over-exposed image I1, a correctly exposed image I0 and an under-exposed image I-1) together with one corresponding HDR-HR target image I_HDR-HR. During sample collection, the background brightness of the HDR-HR scene is first classified. Various schemes can be used for this classification; for example, K-means clustering can be applied to the average image I_average of the three LDR-LR images to divide it into three classes, thereby partitioning the image into bright, medium-brightness and dark regions. According to the scene brightness classification result, bright-region samples are extracted from the detail image of the short-exposure LDR-LR image and the corresponding target detail image; dark-region samples are extracted from the detail image of the long-exposure LDR-LR image and the corresponding target detail image; and medium-brightness samples are extracted from the detail image of the medium-exposure LDR-LR image and the corresponding target detail image. Example samples are collected in each region, forming three training sample sets.
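By way of illustration only, the following Python sketch shows one possible realisation of the background-brightness classification by K-means described above. The function name classify_brightness, the use of OpenCV's kmeans and the assumption of single-channel (grayscale) LDR-LR inputs are choices of this sketch, not requirements of the method.

```python
import numpy as np
import cv2


def classify_brightness(ldr_stack, k=3):
    """Cluster the pixels of the average image of an exposure stack into
    k brightness classes (0 = darkest region ... k-1 = brightest region)."""
    avg = np.mean(np.stack(ldr_stack, axis=0), axis=0).astype(np.float32)
    samples = avg.reshape(-1, 1)                      # one sample per pixel
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    order = np.argsort(centers.ravel())               # sort classes by mean
    remap = np.empty(k, dtype=np.int32)
    remap[order] = np.arange(k)
    return remap[labels.ravel()].reshape(avg.shape)   # per-pixel region map
```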
An example sample consists of a pair of image information blocks, namely an HDR-HR image block and the corresponding LDR-LR image block. Before sample extraction, the LDR-LR and HDR-HR training images are each filtered with a bilateral filter, and the filtered image is subtracted from the original image to obtain the detail information. Paired example samples are then collected on the corresponding detail images according to the correspondence relationship shown in Fig. 3. Fig. 3 takes a sampling factor of 2 as an example; each corresponding sample is extracted as a 16-dimensional vector.
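A minimal sketch of the detail extraction and paired sampling, assuming grayscale images, a sampling factor of 2 and 4x4 patches (so that each sample vector has 16 dimensions, as stated above); the filter parameters, the stride and the patch geometry approximating Fig. 3 are assumptions of this sketch.

```python
import numpy as np
import cv2


def detail_layer(img, d=9, sigma_color=25.0, sigma_space=9.0):
    """Detail image = original minus its bilateral-filtered (edge-preserving
    smoothed) version."""
    img = img.astype(np.float32)
    return img - cv2.bilateralFilter(img, d, sigma_color, sigma_space)


def collect_pairs(lr_detail, hr_detail, region_map, region_id,
                  patch=4, scale=2, stride=2):
    """Collect paired 16-D vectors (4x4 patches) from the LR and HR detail
    images at corresponding positions inside one brightness region."""
    xs, ys = [], []
    h, w = lr_detail.shape
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            if region_map[i, j] != region_id:
                continue
            hi, hj = i * scale, j * scale             # corresponding HR site
            xs.append(lr_detail[i:i + patch, j:j + patch].ravel())
            ys.append(hr_detail[hi:hi + patch, hj:hj + patch].ravel())
    return np.asarray(xs, np.float32), np.asarray(ys, np.float32)
```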
The three collected training sample sets are each organized by clustering; for example, K-means clustering can be applied to the LDR-LR part of the samples.
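A sketch of this sample-organization step follows. The cluster count (here 64) is not fixed by the method and is an assumption of this sketch; the returned centroids double as the codebook used for classification during on-line reconstruction.

```python
import numpy as np
import cv2


def organize_samples(lr_vecs, hr_vecs, n_clusters=64):
    """Cluster the LDR-LR halves of the training pairs; the centroids serve
    as the codebook used to classify input patches at reconstruction time."""
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, labels, codebook = cv2.kmeans(lr_vecs.astype(np.float32), n_clusters,
                                     None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    labels = labels.ravel()
    clusters = {c: (lr_vecs[labels == c], hr_vecs[labels == c])
                for c in range(n_clusters)}
    return codebook, clusters
```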
For each sample set, one classification predictor is trained. A classification predictor consists of a group of sub-predictors, one for each clustered sample class. When training a sub-predictor, all samples of the corresponding class are used as training samples, with the LDR-LR part as the input and the HDR-HR part as the target. The purpose of a predictor is to describe the mapping relationship between the LDR-LR and HDR-HR parts of similar samples; this mapping is then used to guide the HDR-HR reconstruction of LDR-LR image sequences outside the training set. A simple least-mean-square-error (Least Mean Squares, LMS) predictor can be used as the sub-predictor.
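One simple realisation of the sub-predictor is a per-cluster linear least-squares map, as sketched below. The bias column and the small ridge term are additions of this sketch; the method itself only requires an LMS-type predictor.

```python
import numpy as np


def train_sub_predictors(clusters, ridge=1e-3):
    """Fit one linear map per cluster from LDR-LR detail vectors (input) to
    HDR-HR detail vectors (target) by regularised least squares."""
    predictors = {}
    for c, (X, Y) in clusters.items():
        if len(X) == 0:                               # skip empty clusters
            continue
        Xb = np.hstack([X, np.ones((len(X), 1), dtype=np.float32)])  # + bias
        A = Xb.T @ Xb + ridge * np.eye(Xb.shape[1], dtype=np.float32)
        predictors[c] = np.linalg.solve(A, Xb.T @ Y)  # (17 x 16) weights
    return predictors
```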
The purpose of the off-line training part is to train as many classification predictors as there are background brightness classes, representing the mapping relationships between LDR-LR and HDR-HR in the different training sample sets. The classification predictors are used for detail prediction in the on-line reconstruction process.
(2) On-line reconstruction part
Take as an example three input images that do not belong to the training set: I-1 is the image with the shorter exposure time, I1 the image with the longer exposure time, and I0 the normally exposed image. To preserve the overall brightness dynamic range of the scene, the average of the three input images is taken as the LDR-LR initial image; the initial image is enlarged to the target image size by bilinear interpolation and used as the base-layer image. K-means clustering is applied to the grayscale image of I0 to obtain the scene brightness classification, segmenting the image into bright, dark and medium-brightness regions.
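The base-layer estimate can be sketched as below; the scene-brightness segmentation of the grayscale I0 image can reuse the classify_brightness sketch from the off-line part. The scale factor of 2 is illustrative.

```python
import numpy as np
import cv2


def estimate_base_layer(ldr_stack, scale=2):
    """Average the exposure stack (preserving the overall brightness range)
    and enlarge the average bilinearly to the HR target size."""
    avg = np.mean(np.stack([im.astype(np.float32) for im in ldr_stack],
                           axis=0), axis=0)
    h, w = avg.shape[:2]
    return cv2.resize(avg, (w * scale, h * scale),
                      interpolation=cv2.INTER_LINEAR)
```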
Detail information is then extracted from I-1, I0 and I1 respectively, i.e. the difference between each original image and its bilateral-filter-smoothed version is taken as the LDR-LR detail image.
According to the brightness classification result, the pixels of each region are reconstructed with the classification predictor of the corresponding class. During prediction, the input data are first encoded, i.e. classified, with the codebook produced by the sample clustering process; the corresponding sub-predictor is then selected according to the class to predict the detail information.
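A sketch of the per-patch prediction, assuming the codebook and per-cluster linear predictors produced by the off-line sketches above; the zero fallback for empty clusters is an assumption of this sketch.

```python
import numpy as np


def predict_patch(lr_vec, codebook, predictors):
    """Encode an LDR-LR detail vector against the cluster codebook (nearest
    centroid), then apply that cluster's linear sub-predictor."""
    c = int(np.argmin(np.linalg.norm(codebook - lr_vec[None, :], axis=1)))
    W = predictors.get(c)
    if W is None:                                     # empty-cluster fallback
        return np.zeros(codebook.shape[1], dtype=np.float32)
    xb = np.append(lr_vec, 1.0).astype(np.float32)    # bias, as in training
    return xb @ W                                     # predicted HR detail
```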
Among the three input images, I-1 carries the most detail in bright regions, I1 carries the most detail in dark regions, and I0 carries the most detail in normally exposed regions. Therefore, when the classification predictors are used to predict the high-frequency information, a different input image is used to guide the detail prediction in each brightness region; the estimated detail information is finally superimposed on the initial estimate to form the fused HDR-HR image.
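A sketch of the region-wise fusion. Here detail_by_region is assumed to hold, for each brightness class, the HR detail layer predicted from the exposure best suited to that class (assembled, for example, by scanning predict_patch over the corresponding LDR-LR detail image and averaging overlapping patches); the hard per-pixel selection is a simplification of this sketch.

```python
import numpy as np
import cv2


def fuse_details(base, detail_by_region, region_map, scale=2):
    """Per pixel, take the HR detail predicted from the exposure that suits
    the region (e.g. dark -> long exposure, bright -> short exposure) and
    add it to the interpolated base layer."""
    h, w = region_map.shape
    region_hr = cv2.resize(region_map.astype(np.uint8),
                           (w * scale, h * scale),
                           interpolation=cv2.INTER_NEAREST)
    fused_detail = np.zeros_like(base)
    for r, layer in detail_by_region.items():
        fused_detail[region_hr == r] = layer[region_hr == r]
    return base + fused_detail
```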
Finally, the fused image is constrained by the image observation model, using the interpolated base-layer image, and the reconstruction result image is obtained by iterative optimization.
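The patent calls for an observation-model constraint solved by iterative optimization but does not fix the optimizer; iterative back-projection with an assumed Gaussian blur kernel is one common choice and is used in the sketch below, with the LDR-LR average image as the low-resolution observation.

```python
import numpy as np
import cv2


def observation_constraint(fused, lr_observed, scale=2, iters=10, step=1.0):
    """Iterative back-projection: simulate the LR observation from the
    current estimate (blur + downsample) and feed the upsampled residual
    back into the estimate."""
    est = fused.astype(np.float32).copy()
    h, w = lr_observed.shape[:2]
    for _ in range(iters):
        simulated = cv2.resize(cv2.GaussianBlur(est, (5, 5), 1.0), (w, h),
                               interpolation=cv2.INTER_AREA)
        residual = lr_observed.astype(np.float32) - simulated
        est += step * cv2.resize(residual, (w * scale, h * scale),
                                 interpolation=cv2.INTER_LINEAR)
    return est
```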
The learning-based joint high-dynamic-range and super-resolution reconstruction method of the present invention can image high-contrast scenes effectively and achieves the goal of reconstructing high-resolution and high-dynamic-range images simultaneously. The invention has a wide range of applications and can be used for processing night street-scene surveillance images or as a digital photograph processing tool for photographers. The off-line training process needs to be performed only once and can then be applied repeatedly; the on-line reconstruction is effective and fast.

Claims (2)

1. An HDR and HR image reconstruction method based on sample prediction, characterized in that the method is divided into an off-line training part and an on-line reconstruction part, specifically as follows:
(1) Off-line training part
1) Collect training samples: the training images are several low-resolution images of the same scene with different exposure parameters together with one corresponding target image; pairs of corresponding LDR-LR and HDR-HR image information blocks are extracted from the training images as training samples;
2) Organize the training samples: a clustering method is used to classify and organize the sample sets from the different background brightness regions;
3) Train one predictor for each clustered sample set, yielding three classification predictors for the bright, dark and medium-brightness regions;
(2) On-line reconstruction part:
1) Segment the scene brightness of the input image sequence to form three regions with different exposure characteristics;
2) Estimate the base layer of the input image sequence;
3) According to the brightness classification result of the input images, predict the detail information of the image block at each position with the trained classification predictors to obtain the detail layer of the scene;
4) Fuse the base-layer and detail-layer estimates by addition;
5) Apply the low-resolution image observation model as a constraint to the fused image to obtain the reconstruction result.
2. The HDR and HR image reconstruction method based on sample prediction according to claim 1, characterized in that the training-sample collection in the off-line training part comprises:
1. According to differences in scene brightness, the samples are divided into three regions: bright, dark and medium-brightness;
2. Detail information computation: a bilateral filter is used to extract the detail information of each sample image as the sample data;
3. Sample collection: a sample is a pair of image information blocks extracted at corresponding positions of an input LDR-LR detail image and the target HDR-HR detail image; according to the scene brightness classification result, bright-region samples are extracted from the detail image of the short-exposure LDR-LR image and the corresponding target detail image; dark-region samples are extracted from the detail image of the long-exposure LDR-LR image and the corresponding target detail image; and medium-brightness samples are extracted from the detail image of the medium-exposure LDR-LR image and the corresponding target detail image;
Three corresponding training sample sets are extracted for the three brightness regions.
CN2013103330812A 2013-08-02 2013-08-02 HDR and HR image reconstruction method based on sample prediction Pending CN103413285A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013103330812A CN103413285A (en) 2013-08-02 2013-08-02 HDR and HR image reconstruction method based on sample prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013103330812A CN103413285A (en) 2013-08-02 2013-08-02 HDR and HR image reconstruction method based on sample prediction

Publications (1)

Publication Number Publication Date
CN103413285A true CN103413285A (en) 2013-11-27

Family

ID=49606290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013103330812A Pending CN103413285A (en) 2013-08-02 2013-08-02 HDR and HR image reconstruction method based on sample prediction

Country Status (1)

Country Link
CN (1) CN103413285A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010016053A1 (en) * 1997-10-10 2001-08-23 Monte A. Dickson Multi-spectral imaging sensor
DE10162422A1 (en) * 2001-12-18 2003-07-17 Zeiss Optronik Gmbh Visualization of extreme contrast differences in camera systems
CN101809617A (en) * 2007-07-30 2010-08-18 杜比实验室特许公司 Improve dynamic range of images
CN103201766A (en) * 2010-11-03 2013-07-10 伊斯曼柯达公司 Method for producing high dynamic range images
CN102693538A (en) * 2011-02-25 2012-09-26 微软公司 Global alignment for high-dynamic range image generation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, Xiaoguang et al.: "Research Progress on Joint Reconstruction of High-Resolution and High-Dynamic-Range Images" (高分辨率与高动态范围图像联合重建研究进展), Measurement & Control Technology (《测控技术》), vol. 31, no. 5, 31 December 2012 (2012-12-31) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182672A (en) * 2014-05-28 2018-06-19 皇家飞利浦有限公司 Method and apparatus for encoding an HDR image, and method and apparatus for using such an encoded image
CN104899845A (en) * 2015-05-10 2015-09-09 北京工业大学 Method for fusing multiple exposure images based on lαβ space scene migration
CN104899845B (en) * 2015-05-10 2018-07-06 北京工业大学 Multi-exposure image fusion method based on lαβ space scene migration
CN104881644A (en) * 2015-05-25 2015-09-02 华南理工大学 Face image acquisition method under uneven lighting condition
CN109477848A (en) * 2016-07-25 2019-03-15 西门子医疗保健诊断公司 Systems, methods and apparatus for identifying sample container caps
CN110475072B (en) * 2017-11-13 2021-03-09 Oppo广东移动通信有限公司 Method, device, terminal and storage medium for shooting image
CN110475072A (en) * 2017-11-13 2019-11-19 Oppo广东移动通信有限公司 Shoot method, apparatus, terminal and the storage medium of image
US11412153B2 (en) 2017-11-13 2022-08-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Model-based method for capturing images, terminal, and storage medium
US10950036B2 (en) 2018-03-27 2021-03-16 Samsung Electronics Co., Ltd. Method and apparatus for three-dimensional (3D) rendering
CN108846797A (en) * 2018-05-09 2018-11-20 浙江师范大学 Image super-resolution method based on two kinds of training set
CN108846797B (en) * 2018-05-09 2022-03-11 浙江师范大学 Image super-resolution method based on two training sets
WO2020107646A1 (en) * 2018-11-28 2020-06-04 深圳市华星光电半导体显示技术有限公司 Image processing method
CN111669512A (en) * 2019-03-08 2020-09-15 恒景科技股份有限公司 Image acquisition device
CN111292264A (en) * 2020-01-21 2020-06-16 武汉大学 Image high dynamic range reconstruction method based on deep learning
CN111292264B (en) * 2020-01-21 2023-04-21 武汉大学 Image high dynamic range reconstruction method based on deep learning
CN111709896A (en) * 2020-06-18 2020-09-25 三星电子(中国)研发中心 Method and equipment for mapping LDR video into HDR video
CN111709896B (en) * 2020-06-18 2023-04-07 三星电子(中国)研发中心 Method and equipment for mapping LDR video into HDR video

Similar Documents

Publication Publication Date Title
CN103413285A (en) HDR and HR image reconstruction method based on sample prediction
CN103413286A (en) United reestablishing method of high dynamic range and high-definition pictures based on learning
Chen et al. Real-world single image super-resolution: A brief review
CN103279935B (en) Based on thermal remote sensing image super resolution ratio reconstruction method and the system of MAP algorithm
CN103747189A (en) Digital image processing method
CN111062872A (en) Image super-resolution reconstruction method and system based on edge detection
CN108389226A (en) A kind of unsupervised depth prediction approach based on convolutional neural networks and binocular parallax
CN104899830A (en) Image super-resolution method
Cheng et al. Zero-shot image super-resolution with depth guided internal degradation learning
CN105100579A (en) Image data acquisition processing method and related device
DE102011078662A1 (en) Capture and create high dynamic range images
CN109785236B (en) Image super-resolution method based on super-pixel and convolutional neural network
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN105684412A (en) Calendar mechanism for a clock movement
WO2018168539A1 (en) Learning method and program
CN105844630A (en) Binocular visual image super-resolution fusion de-noising method
CN102609931B (en) Field depth expanding method and device of microscopic image
CN110335222B (en) Self-correction weak supervision binocular parallax extraction method and device based on neural network
CN108416803A (en) A kind of scene depth restoration methods of the Multi-information acquisition based on deep neural network
CN115393227B (en) Low-light full-color video image self-adaptive enhancement method and system based on deep learning
CN102930518A (en) Improved sparse representation based image super-resolution method
CN111861880A (en) Image super-fusion method based on regional information enhancement and block self-attention
CN101202845A (en) Method for changing infrared image into visible light image
CN103020898A (en) Sequence iris image super-resolution reconstruction method
CN111539888A (en) Neural network image defogging method based on pyramid channel feature attention

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20131127