CN109829967A - Mobile terminal surface light field rendering method based on deep learning - Google Patents

Mobile terminal surface light field rendering method based on deep learning

Info

Publication number
CN109829967A
Authority
CN
China
Prior art keywords
mobile terminal
target object
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910167533.1A
Other languages
Chinese (zh)
Other versions
CN109829967B (en)
Inventor
张谷力
陈安沛
张迎梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Plex VR Digital Technology Shanghai Co Ltd
Original Assignee
Plex VR Digital Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Plex VR Digital Technology Shanghai Co Ltd
Priority to CN201910167533.1A
Publication of CN109829967A
Application granted
Publication of CN109829967B
Legal status: Active (Current)
Anticipated expiration


Abstract

The invention discloses a mobile terminal surface light field rendering method based on deep learning, comprising: Step S1: using a camera, shooting sample photos of the rendered target object from various angles; Step S2: synthesizing a 360-degree environment map of the environment in which the target object is currently located from the photos; Step S3: aligning the sample photos to a space coordinate system consistent with the model file of the rendered target object; Step S4: collecting the training data of a deep neural network, then building and training the deep neural network; Step S5: rendering in real time the appearance of the rendered target object from an arbitrary viewing angle, and switching among 360-degree environment maps to obtain different rendering results. The invention uses an advanced deep learning framework to fit one of the integrals in the rendering equation, reducing the amount of computation in the rendering process; at render time the environment map can be switched at will, achieving a relighting effect.

Description

Mobile terminal surface light field rendering method based on deep learning
Technical field
The present invention relates to the field of object surface rendering, and more particularly to a mobile terminal surface light field rendering method based on deep learning.
Background art
Research on reproducing the appearance of objects in the real world has always been an active direction in computer graphics and computer vision, because the technology has application scenarios of great practical value. In addition to virtual reality and augmented reality, it can be used in industry for the simulation and emulation of virtual materials, for example in garment material design, and the e-commerce field can likewise use it to show commodities to users realistically. In recent years, mobile devices such as smartphones and head-mounted devices have developed rapidly, and most of them are equipped with cameras and GPUs (graphics processing units), which makes object rendering on the mobile terminal more reliable and feasible. However, configuration limits of mobile devices, such as insufficient computing resources, battery capacity and heat dissipation problems, make high-frame-rate object rendering on mobile phones another major challenge.
Physically based rendering (PBR) can produce extremely accurate and plausible rendering results, but it requires obtaining in advance the physical material properties of the object being rendered, such as roughness, metallicity and the BRDF (bidirectional reflectance distribution function), whose acquisition processes are extremely laborious, and it also requires a huge amount of computation. Its counterpart is image-based rendering (IBR), which can produce realistic rendering results and reach 60 fps on a PC equipped with graphics hardware; however, when the algorithm is ported to current smart devices (such as the Google Pixel), it reaches a speed of at most 16 fps even when using 100% of the GPU's computing resources.
The work of Chen et al. (Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, and Jingyi Yu. 2018. Deep Surface Light Fields. Proc. ACM Comput. Graph. Interact. Tech. 1, 1, Article 14 (July 2018), 17 pages.) integrates the rapidly developing deep learning techniques into image-based rendering (IBR): from sparse samples of a three-dimensional point observed from different angles, it recovers the value of that point in all visible directions, and thereby recovers the appearance of the rendered object from any angle. However, that work does not assume that the environment light is known, so highlights and everything else are fitted as a whole by the deep network; although this gives visually good results, the fitted result still has a gap from the ground truth. In addition, the deep network structure used in that work is rather redundant, which leads to unnecessary computation, so the method reaches only 35 fps when rendering 1024*1024 images even on a PC equipped with a 1080 Ti GPU, let alone on mobile devices. On the basis of Chen's work, the present invention innovates and improves: it first assumes that the environment light is known, which is achieved by shooting an environment map of the environment in advance; the deep network then only needs to fit a certain small part of the rendering equation (James T. Kajiya. 1986. The rendering equation. In ACM Siggraph Computer Graphics, Vol. 20. ACM, 143-150), so the fitted values match the ground truth more closely, solving the first problem of Chen's work, and the environment light can later be changed to achieve a relighting effect. In addition, in order to run the algorithm on the mobile terminal, the present invention uses a completely new network structure that removes the redundancy of Chen's network, achieving real-time rendering of 1024*1024 images on the mobile terminal.
Therefore, those skilled in the art are devoted to developing a mobile terminal surface light field rendering method based on deep learning that can reproduce the surface light field of a real target object and render it in real time on a mobile device.
Summary of the invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the present invention is to overcome problems such as the large amount of computation of object surface rendering methods in the prior art, and to provide a method for reproducing the appearance of an object through a deep network.
To achieve the above object, the present invention provides a mobile terminal surface light field rendering method based on deep learning, comprising the following steps:
Step S1: using a camera, shooting sample photos of the rendered target object from various angles at a fixed position under a fixed illumination environment;
Step S2: synthesizing a 360-degree environment map of the rendered target object from the shot sample photos;
Step S3: aligning the N sample photos of the rendered target object to the space coordinate system consistent with the model file of the rendered target object;
Step S4: collecting the training data of a deep neural network from the N photos of the rendered target object, then building the deep neural network and training the deep neural network;
Step S5: after the network is trained, implementing the forward pass of the network on the mobile terminal, rendering in real time the appearance of the rendered target object from an arbitrary viewing angle, and switching among 360-degree environment maps to obtain different rendering results.
Further, the photos shot in step S1 serve as a sparse sampling of the appearance of each point on the target object; in addition, from the position of the target object, the camera shoots photos evenly over 360 degrees facing outward, the number of photos shot being 20-40.
Further, in the training data collection, the bidirectional reflectance distribution function BRDF describes the ratio of the radiance in the exit direction to the irradiance from the incident direction, and the bidirectional reflectance distribution function BRDF is divided into two parts:
f_r(v_i, v_r) = f_{r,d}(v_i, v_r) + f_{r,s}(v_i, v_r)
wherein v_i denotes the incident direction, v_r denotes the exit direction, f_r(v_i, v_r) denotes the BRDF as a whole, f_{r,d}(v_i, v_r) denotes the diffuse part, and f_{r,s}(v_i, v_r) denotes the specular part.
Further, the rendering formula of the sample photos is:
L(p, v_r) = ∫_Ω f_r(p, v_i, v_r) L_i(p, v_i) (n·v_i) dv_i
wherein L_i(p, v_i) is the radiance arriving at point p from direction v_i.
Further, according to f_{r,d}(v_i, v_r) = ρ/π, wherein ρ is the diffuse albedo of point p, the two integrals in the rendering formula of the sample photos simplify respectively to:
L_d(p, v_r) = (ρ/π) ∫_Ω L_i(p, v_i) (n·v_i) dv_i
L_s(p, v_r) = ∫_Ω L_i(p, v_i) dv_i · ∫_Ω f_{r,s}(n, v_i, v_r, α, η) (n·v_i) dv_i
The first integral in L_s(p, v_r) is precomputed from the 360-degree environment map and denoted SH(n); the integral in L_d(p, v_r) is likewise precomputed from the 360-degree environment map and denoted Pre(r); the deep neural network fits the second integral in L_s(p, v_r), denoted Φ(n, v_i, v_r), wherein n and r respectively denote the normal direction at point p and the reflection direction of the light.
Further, in the combined rendering formula of the sample photos, for each point p, v_r, n and the position p are obtained, together with L(p, v_r), the intensity value at the corresponding pixel of the sample photo; ρ is known, SH(n) and Pre(r) are obtained from the 360-degree environment map, and the value of Φ(v_r, n, p) can thereby be obtained.
Further, the training data of the deep neural network consists of the (n, v_r, p) → Φ(v_r, n, p) correspondences of the pixels in the sample photos.
Further, the deep neural network is built according to the multiple (n, v_r, p) → Φ(v_r, n, p) correspondences from the sample photos so as to fit the mapping function, and the deep neural network is a fully connected network.
Further, the deep neural network has four layers, and the input layer has 9 nodes, corresponding to the 9 dimensions of (n, v_r, p).
Further, in the four layers of the deep neural network, the numbers of nodes from the first to the fourth layer are 64, 256, 128 and 32 in turn, and the output layer has 3 nodes, corresponding to the three RGB channels.
The present invention uses an advanced deep learning framework to fit one of the integrals in the rendering equation, reducing the amount of computation in the rendering process, so that a mobile device equipped with a Qualcomm Snapdragon 845 processor can render 1024*1024 images at an average of 30 fps. At the same time, because the proposed method assumes that the environment light is known, what the deep network learns is the material information with the influence of the lighting stripped away, so at render time the environment map can be switched at will, achieving a relighting effect.
The concept, specific structure and technical effects of the present invention are further described below in conjunction with the accompanying drawings, so as to fully understand the purpose, features and effects of the present invention.
Brief description of the drawings
Fig. 1 is the method flow diagram of a preferred embodiment of the invention;
Fig. 2 is the algorithm flow schematic diagram of a preferred embodiment of the invention;
Fig. 3 is the schematic network structure of a preferred embodiment of the invention;
Fig. 4 is the rendering effect schematic diagram of a preferred embodiment of the invention.
Specific embodiment
Several preferred embodiments of the present invention are described below with reference to the accompanying drawings, so that the technical content is clearer and easier to understand. The present invention can be embodied in many different forms of embodiments, and the protection scope of the present invention is not limited to the embodiments mentioned herein.
In the drawings, components with the same structure are denoted by the same reference numeral, and components with similar structure or function are denoted by similar reference numerals. The size and thickness of each component shown in the drawings are drawn arbitrarily, and the present invention does not limit the size and thickness of each component. In order to make the illustration clearer, the thickness of components is appropriately exaggerated in some places in the drawings.
As shown in Fig. 1, the present invention provides a mobile terminal surface light field rendering method based on deep learning, comprising the following steps:
Step S1: using a camera, shooting sample photos of the rendered target object from various angles at a fixed position under a fixed illumination environment; the shot photos serve as a sparse sampling of the appearance of each point on the target object; in addition, from the position of the target object, the camera shoots photos evenly over 360 degrees facing outward, the number of photos shot being 20-40;
Step S2: synthesizing a 360-degree environment map of the rendered target object from the shot photos;
Step S3: aligning the N sample photos of the rendered target object to the space coordinate system consistent with the model file of the rendered target object;
Step S4: collecting the training data of a deep neural network from the N photos of the rendered target object, then building the deep neural network and training the deep neural network;
Step S5: after the network is trained, implementing the forward pass of the network on the mobile terminal, rendering in real time the appearance of the rendered target object from an arbitrary viewing angle, and switching among 360-degree environment maps to obtain different rendering results.
The inputs of this technical solution are:
(1) N (generally 200) uniformly distributed photos of the target object at a fixed position under a fixed illumination environment;
(2) 360° panoramic photos taken around the target object;
(3) the model file (.obj) of the target object;
(4) the diffuse texture map of the target object.
The outputs are:
(1) rendering results of the target object under different viewing angles, rendered in real time on the mobile terminal;
(2) when the environment map is switched, the rendering result of the target object changes accordingly in a reasonable way, giving a reasonable rendering result.
As shown in Fig. 2, the algorithm flow of the invention is as follows. First, a camera (the present invention uses a Canon 760D) is used to shoot photos of the rendered object from various angles at a fixed position under a fixed illumination environment; these photos serve as a sparse sampling of the appearance of each point on the target object. At the same time, with the target object's position as the center, the camera shoots photos evenly facing outward, and the 360° environment map of the target object is then synthesized from these photos. Next, ready-made software such as Agisoft can be used to align the N photos of the target object to the space coordinate system consistent with the model file of the target object. Then the training data for the network is collected from the N photos of the target object, the network is built, and the network is trained. After the network is trained, the forward (inference) pass of the network is implemented on the mobile terminal, the appearance of the target object is rendered in real time from an arbitrary viewing angle, and switching among 360° environment maps is supported to obtain different rendering results.
The derivation of the training data collection process of the invention is as follows:
The bidirectional reflectance distribution function (BRDF) describes the ratio of the radiance in the exit direction to the irradiance from the incident direction. In the Cook-Torrance model, the BRDF is divided into two parts:
f_r(v_i, v_r) = f_{r,d}(v_i, v_r) + f_{r,s}(v_i, v_r)        (1)
wherein v_i denotes the incident direction, v_r denotes the exit direction, f_r(v_i, v_r) denotes the BRDF as a whole, f_{r,d}(v_i, v_r) denotes the diffuse part, and f_{r,s}(v_i, v_r) denotes the specular part. According to the rendering formula:
L(p, v_r) = ∫_Ω f_r(p, v_i, v_r) L_i(p, v_i) (n·v_i) dv_i        (2)
wherein L_i(p, v_i) is the radiance arriving at point p from direction v_i. Combining equations (1) and (2), the present invention obtains:
L(p, v_r) = ∫_Ω f_{r,d}(v_i, v_r) L_i(p, v_i) (n·v_i) dv_i + ∫_Ω f_{r,s}(v_i, v_r) L_i(p, v_i) (n·v_i) dv_i
= L_d(p, v_r) + L_s(p, v_r)
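For intuition about the cost this decomposition is meant to address, the following sketch (not from the patent; a purely Lambertian illustration under a constant environment, with all names chosen here for illustration) estimates the hemispherical rendering integral for a single point by Monte Carlo sampling. Evaluating such an integral per pixel and per frame is exactly what precomputation and network fitting avoid on mobile hardware.

```python
import numpy as np

def sample_hemisphere(normal, num_samples, rng):
    # Uniformly sample directions on the hemisphere around `normal` (pdf = 1 / (2*pi)).
    z = rng.random(num_samples)                      # cos(theta), uniform in [0, 1]
    phi = 2.0 * np.pi * rng.random(num_samples)
    r = np.sqrt(1.0 - z * z)
    local_dirs = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    # Build an orthonormal basis (t, b, normal) and rotate the local directions into it.
    helper = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(helper, normal)
    t /= np.linalg.norm(t)
    b = np.cross(normal, t)
    return local_dirs @ np.stack([t, b, normal])

def radiance_lambertian(albedo, env_radiance, normal, num_samples=4096, seed=0):
    # Monte Carlo estimate of the hemisphere integral of f_r * L_i * (n . v_i)
    # for a purely Lambertian point (f_r = rho / pi) under a constant environment.
    # The analytic answer is albedo * env_radiance, which the estimate approaches.
    rng = np.random.default_rng(seed)
    n = np.asarray(normal, dtype=float)
    v_i = sample_hemisphere(n, num_samples, rng)
    cos_term = np.clip(v_i @ n, 0.0, None)
    integrand = (albedo / np.pi) * env_radiance * cos_term
    return 2.0 * np.pi * integrand.mean()            # divide by the pdf 1 / (2*pi)

if __name__ == "__main__":
    print(radiance_lambertian(albedo=0.8, env_radiance=1.0, normal=[0.0, 0.0, 1.0]))  # ~0.8
```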
The two integrals in the above decomposition are very time-consuming to compute directly. However, according to f_{r,d}(v_i, v_r) = ρ/π, wherein ρ is the diffuse albedo of point p, the two integrals simplify respectively to:
L_d(p, v_r) = (ρ/π) ∫_Ω L_i(p, v_i) (n·v_i) dv_i
L_s(p, v_r) = ∫_Ω L_i(p, v_i) dv_i · ∫_Ω f_{r,s}(n, v_i, v_r, α, η) (n·v_i) dv_i
The first integral in L_s(p, v_r) can be precomputed from the 360° environment map, and the present invention denotes it SH(n); the integral in L_d(p, v_r) can likewise be precomputed from the 360° environment map, and the present invention denotes it Pre(r); what the network needs to fit is the second integral in L_s(p, v_r), which the present invention denotes Φ(n, v_i, v_r), wherein n and r respectively denote the normal direction at point p and the reflection direction of the light. Combining the above formulas, the present invention obtains the overall rendering formula.
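The combined formula itself does not survive in the text as extracted; from the definitions just given it would presumably read as follows (a reconstruction from context, not a verbatim quotation of the patent):

L(p, v_r) = L_d(p, v_r) + L_s(p, v_r) ≈ (ρ/π) · Pre(r) + SH(n) · Φ(v_r, n, p)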
For each point p, the present invention can easily obtain v_r, n and the position p, as well as L(p, v_r), the intensity value at the corresponding pixel of the sample photo; ρ is known, and SH(n) and Pre(r) can be obtained from the 360° environment map, so the present invention can also obtain the value of Φ(v_r, n, p) very simply. In this way, for one pixel of a sample photo, the present invention obtains one (n, v_r, p) → Φ(v_r, n, p) correspondence, and such data can serve as one training sample for the deep neural network.
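As a concrete illustration of the training data collection just described, the sketch below computes one (n, v_r, p) → Φ(v_r, n, p) pair per pixel. It assumes the reconstructed composition formula given above (an assumption, not a verbatim formula from the patent), and all argument names (rho, sh_n, pre_r, etc.) are illustrative placeholders rather than names from the patent.

```python
import numpy as np

def phi_training_target(pixel_radiance, rho, sh_n, pre_r, eps=1e-6):
    # Per-pixel supervision value for the network, assuming the reconstructed composition
    #   L(p, v_r) ~= (rho / pi) * Pre(r) + SH(n) * Phi(v_r, n, p),
    # so that Phi = (L - (rho / pi) * Pre(r)) / SH(n).
    # pixel_radiance: RGB value L(p, v_r) read from the sample photo
    # rho:            diffuse albedo of the point, from the diffuse texture map
    # sh_n, pre_r:    values precomputed from the 360-degree environment map,
    #                 looked up with the normal n and the reflection direction r
    diffuse_part = (np.asarray(rho, dtype=float) / np.pi) * np.asarray(pre_r, dtype=float)
    return (np.asarray(pixel_radiance, dtype=float) - diffuse_part) / (np.asarray(sh_n, dtype=float) + eps)

def make_training_pair(normal, view_dir, position, pixel_radiance, rho, sh_n, pre_r):
    # One training sample: 9-D input (n, v_r, p) and 3-D RGB target Phi(v_r, n, p).
    x = np.concatenate([normal, view_dir, position]).astype(np.float32)
    y = phi_training_target(pixel_radiance, rho, sh_n, pre_r).astype(np.float32)
    return x, y
```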
As shown in Fig. 3, the deep learning network structure of the invention is as follows:
Having obtained a large number of (n, v_r, p) → Φ(v_r, n, p) correspondences, the present invention builds a specific network structure, as shown in Fig. 3, to fit this mapping function. For this purpose, the present invention uses a simple fully connected network with four layers: the input layer has 9 nodes, corresponding to the 9 dimensions of (n, v_r, p); the numbers of nodes from the first to the fourth layer are 64, 256, 128 and 32 in turn; and the output layer has 3 nodes, corresponding to the three RGB channels.
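A minimal sketch of the described fully connected network (9 inputs, hidden layers of 64, 256, 128 and 32 nodes, 3 RGB outputs), written here in PyTorch. The text does not specify activation functions, loss or optimizer, so the ReLU non-linearities, MSE loss and Adam optimizer below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SurfaceLightFieldMLP(nn.Module):
    # Fully connected network mapping the 9-D input (n, v_r, p) to the 3-channel value Phi,
    # with hidden layers of 64, 256, 128 and 32 nodes as described in the text.
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(9, 64), nn.ReLU(),      # input layer: 9 nodes for (n, v_r, p)
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 3),                 # output layer: 3 nodes, one per RGB channel
        )

    def forward(self, x):
        return self.layers(x)

if __name__ == "__main__":
    # Illustrative training step; the loss and optimizer are assumptions, not given in the text.
    model = SurfaceLightFieldMLP()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(1024, 9)   # stand-in batch of (n, v_r, p) inputs
    y = torch.rand(1024, 3)   # stand-in batch of Phi targets
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(float(loss))
```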
The present invention uses an advanced deep learning framework to fit one of the integrals in the rendering equation, reducing the amount of computation in the rendering process, so that a mobile device equipped with a Qualcomm Snapdragon 845 processor can render 1024*1024 images at an average of 30 fps. At the same time, because the proposed method assumes that the environment light is known, what the deep network learns is the material information with the influence of the lighting stripped away, so at render time the environment map can be switched at will, achieving a relighting effect, as shown in Fig. 4.
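The relighting effect follows directly from this factorization: at inference time only the precomputed environment-map terms change when the map is switched, while the trained network output stays fixed. A minimal per-pixel composition sketch under the same reconstructed formula and assumptions as the earlier sketches (the model is an instance of the network sketched above; all names are illustrative placeholders):

```python
import numpy as np
import torch

def shade_pixel(model, normal, view_dir, position, rho, sh_n, pre_r):
    # Compose one pixel under the reconstructed formula
    #   L(p, v_r) ~= (rho / pi) * Pre(r) + SH(n) * Phi(v_r, n, p).
    # sh_n and pre_r are looked up from whichever 360-degree environment map is
    # currently active, so switching the map relights the object without touching
    # the trained network, which encodes only the material term Phi.
    x = torch.tensor(np.concatenate([normal, view_dir, position]), dtype=torch.float32)
    with torch.no_grad():
        phi = model(x.unsqueeze(0)).squeeze(0).numpy()
    return (np.asarray(rho, dtype=float) / np.pi) * np.asarray(pre_r, dtype=float) \
        + np.asarray(sh_n, dtype=float) * phi
```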
The preferred embodiments of the present invention have been described in detail above. It should be understood that a person of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative effort. Therefore, any technical solution that a person skilled in the art can obtain on the basis of the prior art through logical analysis, reasoning or limited experimentation in accordance with the concept of the present invention shall fall within the scope of protection determined by the claims.

Claims (10)

1. A mobile terminal surface light field rendering method based on deep learning, characterized by comprising the following steps:
Step S1: using a camera, shooting sample photos of the rendered target object from various angles at a fixed position under a fixed illumination environment;
Step S2: synthesizing a 360-degree environment map of the rendered target object from the shot sample photos;
Step S3: aligning the N sample photos of the rendered target object to the space coordinate system consistent with the model file of the rendered target object;
Step S4: collecting the training data of a deep neural network from the N photos of the rendered target object, then building the deep neural network and training the deep neural network;
Step S5: after the network is trained, implementing the forward pass of the network on the mobile terminal, rendering in real time the appearance of the rendered target object from an arbitrary viewing angle, and switching among 360-degree environment maps to obtain different rendering results.
2. The mobile terminal surface light field rendering method based on deep learning according to claim 1, characterized in that the photos shot in step S1 serve as a sparse sampling of the appearance of each point on the target object, and in that, from the position of the target object, the camera shoots photos evenly over 360 degrees facing outward, the number of photos shot being 20-40.
3. The mobile terminal surface light field rendering method based on deep learning according to claim 1, characterized in that, in the training data collection, the bidirectional reflectance distribution function BRDF describes the ratio of the radiance in the exit direction to the irradiance from the incident direction, and the bidirectional reflectance distribution function BRDF is divided into two parts:
f_r(v_i, v_r) = f_{r,d}(v_i, v_r) + f_{r,s}(v_i, v_r)
wherein v_i denotes the incident direction, v_r denotes the exit direction, f_r(v_i, v_r) denotes the BRDF as a whole, f_{r,d}(v_i, v_r) denotes the diffuse part, and f_{r,s}(v_i, v_r) denotes the specular part.
4. The mobile terminal surface light field rendering method based on deep learning according to claim 3, characterized in that the rendering formula of the sample photos is:
L(p, v_r) = ∫_Ω f_r(p, v_i, v_r) L_i(p, v_i) (n·v_i) dv_i
wherein L_i(p, v_i) is the radiance arriving at point p from direction v_i.
5. The mobile terminal surface light field rendering method based on deep learning according to claim 4, characterized in that, according to f_{r,d}(v_i, v_r) = ρ/π, wherein ρ is the diffuse albedo of point p, the two integrals in the rendering formula of the sample photos simplify respectively to:
L_d(p, v_r) = (ρ/π) ∫_Ω L_i(p, v_i) (n·v_i) dv_i
L_s(p, v_r) = ∫_Ω L_i(p, v_i) dv_i · ∫_Ω f_{r,s}(n, v_i, v_r, α, η) (n·v_i) dv_i
wherein the first integral in L_s(p, v_r) is precomputed from the 360-degree environment map and denoted SH(n), the integral in L_d(p, v_r) is likewise precomputed from the 360-degree environment map and denoted Pre(r), and the deep neural network fits the second integral in L_s(p, v_r), denoted Φ(n, v_i, v_r), wherein n and r respectively denote the normal direction at point p and the reflection direction of the light.
6. The mobile terminal surface light field rendering method based on deep learning according to claim 5, characterized in that, in the combined rendering formula of the sample photos, for each point p, v_r, n and the position p are obtained, together with L(p, v_r), the intensity value at the corresponding pixel of the sample photo; ρ is known, SH(n) and Pre(r) are obtained from the 360-degree environment map, and the value of Φ(v_r, n, p) is thereby obtained.
7. The mobile terminal surface light field rendering method based on deep learning according to claim 6, characterized in that the training data of the deep neural network consists of the (n, v_r, p) → Φ(v_r, n, p) correspondences of the pixels in the sample photos.
8. The mobile terminal surface light field rendering method based on deep learning according to claim 7, characterized in that the deep neural network is built according to the multiple (n, v_r, p) → Φ(v_r, n, p) correspondences from the sample photos so as to fit the mapping function, and the deep neural network is a fully connected network.
9. The mobile terminal surface light field rendering method based on deep learning according to claim 8, characterized in that the deep neural network has four layers, and the input layer has 9 nodes, corresponding to the 9 dimensions of (n, v_r, p).
10. The mobile terminal surface light field rendering method based on deep learning according to claim 9, characterized in that, in the four layers of the deep neural network, the numbers of nodes from the first to the fourth layer are 64, 256, 128 and 32 in turn, and the output layer has 3 nodes, corresponding to the three RGB channels.
CN201910167533.1A 2019-03-06 2019-03-06 Mobile terminal surface light field rendering method based on deep learning Active CN109829967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910167533.1A CN109829967B (en) 2019-03-06 2019-03-06 Mobile terminal surface light field rendering method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910167533.1A CN109829967B (en) 2019-03-06 2019-03-06 Mobile terminal surface light field rendering method based on deep learning

Publications (2)

Publication Number Publication Date
CN109829967A true CN109829967A (en) 2019-05-31
CN109829967B CN109829967B (en) 2022-11-25

Family

ID=66865455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910167533.1A Active CN109829967B (en) 2019-03-06 2019-03-06 Mobile terminal surface light field rendering method based on deep learning

Country Status (1)

Country Link
CN (1) CN109829967B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180047208A1 (en) * 2016-08-15 2018-02-15 Aquifi, Inc. System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
CN108304357A (en) * 2018-01-31 2018-07-20 Peking University Automatic Chinese font library generation method based on font manifold
CN109410310A (en) * 2018-10-30 2019-03-01 安徽虚空位面信息科技有限公司 Real-time lighting rendering algorithm based on a deep learning network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113327299A (en) * 2021-07-07 2021-08-31 Beijing University of Posts and Telecommunications Neural network light field method based on joint sampling structure
CN113327299B (en) * 2021-07-07 2021-12-14 Beijing University of Posts and Telecommunications Neural network light field method based on joint sampling structure

Also Published As

Publication number Publication date
CN109829967B (en) 2022-11-25

Similar Documents

Publication Publication Date Title
US20200380333A1 (en) System and method for body scanning and avatar creation
Pavlakos et al. Texturepose: Supervising human mesh estimation with texture consistency
Black et al. Bedlam: A synthetic dataset of bodies exhibiting detailed lifelike animated motion
CN106255990B (en) Image for camera array is focused again
CN104392045B (en) A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal
CN104571887B (en) Static picture based dynamic interaction method and device
CN106897108A (en) A kind of implementation method of the virtual reality Panoramic Warping based on WebVR
US11321916B1 (en) System and method for virtual fitting
WO2023226454A1 (en) Product information processing method and apparatus, and terminal device and storage medium
Bi et al. Manipulating patterns of dynamic deformation elicits the impression of cloth with varying stiffness
Ruiz et al. Viewpoint information channel for illustrative volume rendering
CN109829967A (en) Mobile terminal surface light field rendering method based on deep learning
CN108230431A (en) A kind of the human action animation producing method and system of two-dimensional virtual image
Marques et al. Deep Light Source Estimation for Mixed Reality.
CN105678829A (en) Two-dimensional and three-dimensional combined digital building exhibition method
Dai et al. Interactive mixed reality rendering on holographic pyramid
CN112799507B (en) Human body virtual model display method and device, electronic equipment and storage medium
CN116266408A (en) Body type estimating method, body type estimating device, storage medium and electronic equipment
Erra et al. Ambient Occlusion Baking via a Feed-Forward Neural Network.
CN110610537B (en) Clothes image display method and device, storage medium and terminal equipment
Wu et al. Recovering geometric information with learned texture perturbations
Güssefeld et al. Are reflectance field renderings appropriate for optical flow evaluation?
CN117011446B (en) Real-time rendering method for dynamic environment illumination
RU2757563C1 (en) Method for visualizing a 3d portrait of a person with altered lighting and a computing device for it
Liang et al. Pencil drawing animation from a video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant