CN106570928B - An image-based relighting method - Google Patents

An image-based relighting method

Info

Publication number
CN106570928B
CN106570928B CN201610998904.7A
Authority
CN
China
Prior art keywords
image
neural network
pixel
artificial neural
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610998904.7A
Other languages
Chinese (zh)
Other versions
CN106570928A (en)
Inventor
韦伟
刘惠义
钱苏斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201610998904.7A priority Critical patent/CN106570928B/en
Publication of CN106570928A publication Critical patent/CN106570928A/en
Application granted granted Critical
Publication of CN106570928B publication Critical patent/CN106570928B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image-based relighting method, belonging to the field of computer graphics. To achieve relighting as accurately as possible with as few samples as possible, random sampling with fixed sample counts is carried out repeatedly in the two spaces of image samples and image pixels, and artificial neural networks are trained until all pixels reach a given training-precision threshold. Since an artificial neural network has a minimum sample requirement for training, pixels that lack sufficient training samples are handled by averaging, following the Bagging idea from ensemble learning. The present invention has been tested in simulated three-dimensional scenes; the results show that, compared with the prior art, the training time is short and the robustness strong; under the same relative-error precision, fewer image samples are required for relighting, the method is fast with good real-time performance, and the PSNR of the reconstructed scene image is higher.

Description

An image-based relighting method
Technical field
The present invention relates to an image-based relighting method, belonging to the fields of machine learning and computer graphics.
Background technique
Image-based relighting (IBR), also referred to as image-based rendering, aims to compute the light transport matrix from captured images and to render the scene image under new lighting conditions. Its greatest advantages are that no geometric information about the scene is required, rendering is unaffected by scene complexity, and various lighting effects such as reflection, refraction, and scattering can still be reproduced. IBR has therefore been a focus of attention in the graphics field ever since it was proposed.
IBR generally requires dense sampling to obtain image samples, which greatly increases both the workload and the storage space needed. Using machine learning to realize image-based relighting as accurately as possible from a small number of samples is therefore a problem in urgent need of a solution.
Summary of the invention
The technical problem to be solved by the invention is to provide an image-based relighting method. Through the stepwise addition of image samples, random sampling in pixel space, the training of three-layer neural networks, and the combined use of the Bagging ensemble-learning idea, small-sample, high-precision relighting is achieved.
The present invention adopts the following technical scheme to solve the above technical problem:
The present invention provides an image-based relighting method, characterized by comprising the following specific steps:
Step 1: acquire a set of scene data, including the LigX, LigY coordinates of a point light source and the corresponding image set ImageSet output at a fixed viewpoint; compute the average values ImgAvg_R, ImgAvg_G, ImgAvg_B of the image set ImageSet over the R, G, B channels;
Step 2: randomly sample within the image set ImageSet to form an image subset ImageSubset whose number of image samples is ImageNum;
Step 3: randomly sample within the pixel space of the image subset ImageSubset to obtain the training sample set of the artificial neural network, specifically:
(1) randomly sample within the pixel space of the image subset ImageSubset to form a pixel set, the number of samples being PixNum and the pixel coordinates being [Px, Py];
(2) the training sample set comprises an input part and an output part corresponding to the input and output of the artificial neural network, wherein the input part consists of Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B, and the output part is the image RGB value at position [Px, Py] under light source [LigX, LigY];
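As an illustrative sketch (not part of the patented disclosure), Steps 2-3 can be written in NumPy as follows; the array layout and the function name build_training_set are assumptions introduced here:

```python
import numpy as np

def build_training_set(images, light_xy, img_avg, image_num, pix_num, rng):
    """Assemble the 7-input / 3-output training pairs of Steps 2-3.

    images   : (N, H, W, 3) array, one RGB image per light position
    light_xy : (N, 2) array of [LigX, LigY] per image
    img_avg  : (3,) array [ImgAvg_R, ImgAvg_G, ImgAvg_B]
    """
    n, h, w, _ = images.shape
    # Step 2: random image subset ImageSubset of size ImageNum
    img_idx = rng.choice(n, size=image_num, replace=False)
    # Step 3(1): PixNum random pixel coordinates [Px, Py]
    px = rng.integers(0, w, size=pix_num)
    py = rng.integers(0, h, size=pix_num)
    inputs, outputs = [], []
    for j in img_idx:
        for i in range(pix_num):
            # Step 3(2): input [Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B]
            inputs.append([px[i], py[i], light_xy[j, 0], light_xy[j, 1],
                           img_avg[0], img_avg[1], img_avg[2]])
            # output: the RGB value at [Px, Py] under light [LigX, LigY]
            outputs.append(images[j, py[i], px[i]])
    return np.asarray(inputs, dtype=float), np.asarray(outputs, dtype=float)
```

Each sampled pixel thus contributes one training pair per sampled image, for ImageNum x PixNum pairs in total.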
Step 4: train the artificial neural network with the training sample set of Step 3; after training is completed, mark the pixels whose relative squared error is less than or equal to the preset first threshold δ1 as handled by this trained artificial neural network;
Step 5: randomly sample again among the pixels left unmarked in Step 4 and train an artificial neural network again, until all pixels in the training sample set are marked or the unmarked pixels no longer satisfy the minimum-sample requirement of artificial-neural-network training; when the unmarked pixels do not satisfy the minimum-sample requirement, following the idea of Bagging ensemble learning, the output of an unmarked pixel is determined jointly by all the neural networks;
Step 6: test the image set ImageSet with the trained artificial neural networks; if the relative mean squared error reaches the preset second threshold δ2, save the trained artificial neural networks and go to Step 7; otherwise, increase the number of image samples ImageNum and return to Step 2;
Step 7: reconstruct the scene under a light source at an arbitrary position with the trained neural networks.
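The Step 4-5 control flow described above — train a network, mark the pixels it fits to within δ1, then retrain on the remainder until too few pixels are left — can be sketched as follows; all callables are caller-supplied stand-ins (the actual ANN training routine is not reproduced here):

```python
def train_until_marked(pixels, train_net, rse_of, delta1, min_pixels):
    """Sketch of the Step 4-5 loop.

    pixels     : ids of the sampled pixels
    train_net  : callable(unmarked) -> network  (stand-in for ANN training)
    rse_of     : callable(network, pixel) -> relative squared error
    delta1     : first threshold (δ1)
    min_pixels : smallest pixel count still meeting the ANN's
                 minimum-sample requirement
    Returns (networks, marked, leftovers); leftover pixels fall back to
    the Bagging average over all networks (Step 5).
    """
    networks, unmarked = [], list(pixels)
    while len(unmarked) >= min_pixels:
        net = train_net(unmarked)             # retrain on the remainder
        networks.append(net)
        still = [p for p in unmarked if rse_of(net, p) > delta1]
        if len(still) == len(unmarked):       # no pixel reached delta1: stop
            break
        unmarked = still                      # Step 4: mark the fitted pixels
    marked = [p for p in pixels if p not in unmarked]
    return networks, marked, unmarked
```

The outer Step 6 loop (increase ImageNum until the test error reaches δ2) would wrap this routine.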
As a further optimization of the present invention, the number of samples in Step 3 satisfies PixNum ≥ Pix_min, where Pix_min = a·T_min/ImageNum, T_min is the minimum number of samples needed to train the artificial neural network, and a is a coefficient with a ≥ 1.
As a further optimization of the present invention, in Step 4 the training sample set is normalized before the artificial neural network is trained with it.
As a further optimization of the present invention, the artificial neural network in Step 4 has 7 input nodes, 2 hidden layers, and 3 output nodes, the two hidden layers having the same number of nodes; the input nodes are Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B, and the output nodes are the image RGB values at position [Px, Py] under light source [LigX, LigY]; the number of hidden-layer nodes N_hide is determined by experiment.
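A minimal forward pass for the 7-input, two-hidden-layer, 3-output network described above might look as follows; the tanh activation and random initial weights are assumptions, since the patent does not specify a transfer function or initializer:

```python
import numpy as np

def init_net(n_hide, rng):
    """Weights of the 7 -> n_hide -> n_hide -> 3 network (random placeholders;
    training is not shown)."""
    return [(rng.standard_normal((7, n_hide)), np.zeros(n_hide)),
            (rng.standard_normal((n_hide, n_hide)), np.zeros(n_hide)),
            (rng.standard_normal((n_hide, 3)), np.zeros(3))]

def forward(net, x):
    """x: (..., 7) inputs [Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B];
    returns (..., 3) RGB predictions. tanh on the hidden layers is an
    assumption made here, not a detail from the patent."""
    for k, (w, b) in enumerate(net):
        x = x @ w + b
        if k < len(net) - 1:      # no activation on the output layer
            x = np.tanh(x)
    return x
```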
As a further optimization of the present invention, the minimum number of samples needed to train the artificial neural network is T_min = b[(7+1)×N_hide + (N_hide+1)×N_hide + (N_hide+1)×3], where b is a coefficient with b ≥ 10.
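The T_min formula above is b times the total number of weights and biases in the 7-N_hide-N_hide-3 network ((7+1)×N_hide for the input layer, (N_hide+1)×N_hide between the hidden layers, (N_hide+1)×3 for the output layer). A direct transcription:

```python
def t_min(n_hide, b=10):
    # b * (weights + biases) of the 7 -> n_hide -> n_hide -> 3 network
    return b * ((7 + 1) * n_hide + (n_hide + 1) * n_hide + (n_hide + 1) * 3)
```

For example, with N_hide = 10 and b = 10 this gives 10 × (80 + 110 + 33) = 2230 training samples.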
As a further optimization of the present invention, the relative squared error of a pixel in Step 4 is RSE(Pix_i) = Σ_j (Ĩ_j(Pix_i) − I_j(Pix_i))² / Σ_j Ĩ_j(Pix_i)², where Ĩ_j(Pix_i) denotes the actual RGB value of the i-th pixel of the j-th image and I_j(Pix_i) denotes the RGB value of the i-th pixel of the j-th image predicted by the neural network.
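The translated formula for the per-pixel relative squared error did not survive intact; a sketch using the usual definition (squared residual normalised by the squared magnitude of the actual values, summed over the test images) is:

```python
import numpy as np

def pixel_rse(actual, predicted):
    """Relative squared error of one pixel over the J test images.

    actual, predicted : (J, 3) arrays of RGB values. The normaliser
    (sum of squared actual values) is an assumption; the patent's own
    denominator is not recoverable from the translated text.
    """
    return np.sum((actual - predicted) ** 2) / np.sum(actual ** 2)
```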
As a further optimization of the present invention, when the unmarked pixels in Step 5 do not satisfy the minimum-sample requirement of artificial-neural-network training, following the idea of Bagging ensemble learning, the output of an unmarked pixel is obtained as the simple average of the outputs of all trained artificial neural networks.
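The Bagging fallback amounts to a simple average of the predictions of every trained network; a sketch, with forward as a stand-in prediction function:

```python
import numpy as np

def bagging_rgb(networks, x, forward):
    """Step 5 fallback: simple average of every trained network's prediction
    for a pixel that no single network was trained to handle."""
    return np.mean([forward(net, x) for net in networks], axis=0)
```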
As a further optimization of the present invention, the relative mean squared error in Step 6 is RMSE = Σ_i Σ_j (Ĩ_j(Pix_i) − I_j(Pix_i))² / Σ_i Σ_j Ĩ_j(Pix_i)², where Ĩ_j(Pix_i) denotes the actual RGB value of the i-th pixel of the j-th image and I_j(Pix_i) denotes the RGB value of the i-th pixel of the j-th image predicted by the neural network.
As a further optimization of the present invention, the increase in the number of image samples ImageNum in Step 6 is set according to actual needs.
As a further optimization of the present invention, the number of image samples ImageNum is increased by 20 each time.
Compared with the prior art, the above technical scheme has the following technical effects: the present invention was tested in two simulated three-dimensional scenes, and the results show that, compared with the prior art, the training time is short and the robustness strong; under the same relative-error precision, fewer image samples are required for relighting, and the PSNR of the reconstructed scene image is higher.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 compares the training error and training time of the present invention and the prior art on the Dragon and Mitsuba scenes, where (a) is the training error for the Dragon scene, (b) the training error for the Mitsuba scene, (c) the training time for the Dragon scene, and (d) the training time for the Mitsuba scene.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings:
An image-based relighting method of the present invention, as shown in Fig. 1, comprises:
Step 1: acquire a set of scene data (Dragon, Mitsuba), including the LigX, LigY coordinates of the point light source and the corresponding image set ImageSet output at a fixed viewpoint; compute the averages of ImageSet over the R, G, B channels to obtain ImgAvg_R, ImgAvg_G, ImgAvg_B. The scene data are shown in Table 1.
Table 1. Scene data
Scene Light-source distribution Image size
Dragon 31×31 64×48
Mitsuba 21×21 64×48
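For orientation (not part of the patent text), the sampling-space sizes implied by Table 1, assuming the light-source grids are exhaustive:

```python
# Sizes implied by Table 1, assuming one image per light-grid position
dragon_images = 31 * 31       # 961 candidate image samples (Dragon)
mitsuba_images = 21 * 21      # 441 candidate image samples (Mitsuba)
pixels_per_image = 64 * 48    # 3072 pixels in each image's sampling space
```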
Step 2: randomly sample within the image set ImageSet to form an image subset ImageSubset, the number of image samples being ImageNum.
Step 3: randomly sample within the pixel space of the image subset ImageSubset to obtain the training sample set that the artificial neural network needs:
(1) randomly sample within the pixel space of the image subset ImageSubset to form a pixel set, the number of samples being PixNum and the pixel coordinates being [Px, Py];
(2) the training sample set consists of an input part and an output part, wherein the input attributes are LigX, LigY, Px, Py, ImgAvg_R, ImgAvg_G, ImgAvg_B and the output attributes are the image RGB values at position [Px, Py] under light source [LigX, LigY].
Step 4: train the artificial neural network with the training sample set; after training is completed, mark the pixels whose relative squared error RSE is less than or equal to the preset threshold δ1 as handled by this trained artificial neural network.
Step 5: randomly sample again among the pixels left unmarked in Step 4 and train an artificial neural network again, until all pixels in the training sample set are marked or the unmarked pixels no longer satisfy the minimum-sample requirement of artificial-neural-network training; when the unmarked pixels do not satisfy the minimum-sample requirement, following the idea of Bagging ensemble learning, their output is determined jointly by all the neural networks.
Step 6: test the image set ImageSet with the trained artificial neural networks; if the relative mean squared error reaches the preset threshold δ2, save the trained artificial neural networks; otherwise, increase the number of image samples ImageNum and start again from Step 2.
Step 7: reconstruct the scene under any light-source position with the trained artificial neural networks. An image set ImageSetTest, randomly sampled with the same number of images as the training set ImageSet, is reconstructed with the trained networks.
As shown in Fig. 2, the method is compared with the technique of Ren et al. in "Image Based Relighting Using Neural Networks", ACM Transactions on Graphics, 2015, 34(4). In Fig. 2, (a) and (b) are the training-error curves for the Dragon and Mitsuba scenes respectively, and (c) and (d) the corresponding training times. Fig. 2 shows clearly that, as the number of image samples increases, the RMSE of the present method (dashed lines in the figure) falls noticeably faster than that of Ren's method, meaning fewer samples are needed to reach the same precision; likewise, the training time required by the present method is lower than that of Ren's method.
Table 2 shows the scene-reconstruction results on the test data of the two scenes Dragon and Mitsuba, demonstrating that lower RMSE values than Ren's method can be obtained using fewer images.
Table 2. Scene reconstruction results
The above is only a specific embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any transformation or replacement that a person familiar with the art can readily conceive of within the technical scope disclosed by the invention shall be covered within the scope of the invention; therefore, the scope of protection of the invention shall be subject to the scope of protection defined by the claims.

Claims (10)

1. An image-based relighting method, characterized by comprising the following specific steps:
Step 1: acquire a set of scene data, including the LigX, LigY coordinates of a point light source and the corresponding image set ImageSet output at a fixed viewpoint, and compute the average values ImgAvg_R, ImgAvg_G, ImgAvg_B of the image set ImageSet over the R, G, B channels;
Step 2: randomly sample within the image set ImageSet to form an image subset ImageSubset whose number of image samples is ImageNum;
Step 3: randomly sample within the pixel space of the image subset ImageSubset to obtain the training sample set of the artificial neural network, specifically:
(1) randomly sample within the pixel space of the image subset ImageSubset to form a pixel set, the number of samples being PixNum and the pixel coordinates being [Px, Py];
(2) the training sample set comprises an input part and an output part corresponding to the input and output of the artificial neural network, wherein the input part consists of Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B, and the output part is the image RGB value at position [Px, Py] under light source [LigX, LigY];
Step 4: train the artificial neural network with the training sample set of Step 3; after training is completed, mark the pixels whose relative squared error is less than or equal to the preset first threshold δ1 as handled by this trained artificial neural network;
Step 5: randomly sample again among the pixels left unmarked in Step 4 and train an artificial neural network again, until all pixels in the training sample set are marked or the unmarked pixels no longer satisfy the minimum-sample requirement of artificial-neural-network training; when the unmarked pixels do not satisfy the minimum-sample requirement, following the idea of Bagging ensemble learning, the output of an unmarked pixel is determined jointly by all the neural networks;
Step 6: test the image set ImageSet with the trained artificial neural networks; if the relative mean squared error reaches the preset second threshold δ2, save the trained artificial neural networks and go to Step 7; otherwise, increase the number of image samples ImageNum and return to Step 2;
Step 7: reconstruct the scene under a light source at an arbitrary position with the trained neural networks.
2. An image-based relighting method according to claim 1, characterized in that the number of samples in Step 3 satisfies PixNum ≥ Pix_min, where Pix_min = a·T_min/ImageNum, T_min is the minimum number of samples needed to train the artificial neural network, and a is a coefficient with a ≥ 1.
3. An image-based relighting method according to claim 1, characterized in that in Step 4 the training sample set is normalized before the artificial neural network is trained with it.
4. An image-based relighting method according to claim 1, characterized in that the artificial neural network in Step 4 has 7 input nodes, 2 hidden layers, and 3 output nodes, the two hidden layers having the same number of nodes; the input nodes are Px, Py, LigX, LigY, ImgAvg_R, ImgAvg_G, ImgAvg_B, and the output nodes are the image RGB values at position [Px, Py] under light source [LigX, LigY]; the number of hidden-layer nodes N_hide is determined by experiment.
5. An image-based relighting method according to claim 1, 2, or 4, characterized in that the minimum number of samples needed to train the artificial neural network is T_min = b[(7+1)×N_hide + (N_hide+1)×N_hide + (N_hide+1)×3], where b is a coefficient with b ≥ 10 and N_hide is the number of hidden-layer nodes.
6. An image-based relighting method according to claim 1, characterized in that the relative squared error of a pixel in Step 4 is RSE(Pix_i) = Σ_j (Ĩ_j(Pix_i) − I_j(Pix_i))² / Σ_j Ĩ_j(Pix_i)², where Ĩ_j(Pix_i) denotes the actual RGB value of the i-th pixel of the j-th image and I_j(Pix_i) denotes the RGB value of the i-th pixel of the j-th image predicted by the neural network.
7. An image-based relighting method according to claim 1, characterized in that when the unmarked pixels in Step 5 do not satisfy the minimum-sample requirement of artificial-neural-network training, following the idea of Bagging ensemble learning, the output of an unmarked pixel is obtained as the simple average of the outputs of all trained artificial neural networks.
8. An image-based relighting method according to claim 1, characterized in that the relative mean squared error in Step 6 is RMSE = Σ_i Σ_j (Ĩ_j(Pix_i) − I_j(Pix_i))² / Σ_i Σ_j Ĩ_j(Pix_i)², where Ĩ_j(Pix_i) denotes the actual RGB value of the i-th pixel of the j-th image and I_j(Pix_i) denotes the RGB value of the i-th pixel of the j-th image predicted by the neural network.
9. An image-based relighting method according to claim 1, characterized in that the increase in the number of image samples ImageNum in Step 6 is set according to actual needs.
10. An image-based relighting method according to claim 9, characterized in that the number of image samples ImageNum is increased by 20 each time.
CN201610998904.7A 2016-11-14 2016-11-14 An image-based relighting method Expired - Fee Related CN106570928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610998904.7A CN106570928B (en) 2016-11-14 2016-11-14 An image-based relighting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610998904.7A CN106570928B (en) 2016-11-14 2016-11-14 An image-based relighting method

Publications (2)

Publication Number Publication Date
CN106570928A CN106570928A (en) 2017-04-19
CN106570928B true CN106570928B (en) 2019-06-21

Family

ID=58541876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610998904.7A Expired - Fee Related CN106570928B (en) 2016-11-14 2016-11-14 An image-based relighting method

Country Status (1)

Country Link
CN (1) CN106570928B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909640B (en) * 2017-11-06 2020-07-28 清华大学 Face relighting method and device based on deep learning
CN108765540B (en) * 2018-04-26 2022-04-12 河海大学 Relighting method based on image and ensemble learning
CN110033055A * 2019-04-19 2019-07-19 中共中央办公厅电子科技学院(北京电子科技学院) A complex-object image relighting method based on semantic and material parsing and synthesis

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700109A (en) * 2015-03-24 2015-06-10 清华大学 Method and device for decomposing hyper-spectral intrinsic images
CN105447906A (en) * 2015-11-12 2016-03-30 浙江大学 Method for calculating lighting parameters and carrying out relighting rendering based on image and model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8643701B2 (en) * 2009-11-18 2014-02-04 University Of Illinois At Urbana-Champaign System for executing 3D propagation for depth image-based rendering

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700109A (en) * 2015-03-24 2015-06-10 清华大学 Method and device for decomposing hyper-spectral intrinsic images
CN105447906A (en) * 2015-11-12 2016-03-30 浙江大学 Method for calculating lighting parameters and carrying out relighting rendering based on image and model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image Based Relighting Using Neural Networks; Peiran Ren et al.; ACM Transactions on Graphics; 2015-08-31; Vol. 34, No. 4; pp. 111:1-111:12

Also Published As

Publication number Publication date
CN106570928A (en) 2017-04-19

Similar Documents

Publication Publication Date Title
CN109241982B (en) Target detection method based on deep and shallow layer convolutional neural network
CN105069746B (en) Video real-time face replacement method and its system based on local affine invariant and color transfer technology
CN110246181B (en) Anchor point-based attitude estimation model training method, attitude estimation method and system
CN113240691B (en) Medical image segmentation method based on U-shaped network
CN106780543A A dual-framework depth and motion estimation method based on convolutional neural networks
CN108764250B (en) Method for extracting essential image by using convolutional neural network
CN106780546B Identification method for motion-blurred coded points based on convolutional neural networks
CN106570928B An image-based relighting method
CN110992366A (en) Image semantic segmentation method and device and storage medium
CN113554032A (en) Remote sensing image segmentation method based on multi-path parallel network of high perception
Condorelli et al. A comparison between 3D reconstruction using nerf neural networks and mvs algorithms on cultural heritage images
CN116188402A (en) Insulator defect identification method based on improved SSD algorithm
CN112686830B (en) Super-resolution method of single depth map based on image decomposition
Zhai et al. Image real-time augmented reality technology based on spatial color and depth consistency
CN106023079B Two-stage face portrait generation method combining local and global attributes
CN115496859A (en) Three-dimensional scene motion trend estimation method based on scattered point cloud cross attention learning
CN114972937A (en) Feature point detection and descriptor generation method based on deep learning
Chang et al. Dangerous behaviors detection based on deep learning
CN111915533A (en) High-precision image information extraction method based on low dynamic range
CN114385883B (en) Contour enhancement method for approximately simulating chapping method in style conversion
Liu et al. Enhanced Small Object Detection Neural Network
CN112115949B (en) Optical character recognition method for tobacco certificate and order
Liu et al. Prediction with Visual Evidence: Sketch Classification Explanation via Stroke-Level Attributions
CN112700481B (en) Texture map automatic generation method and device based on deep learning, computer equipment and storage medium
Xu et al. Toward underwater image enhancement: new dataset and white balance priors-based fusion network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190621

Termination date: 20211114
