CN106056658A - Virtual object rendering method and virtual object rendering device - Google Patents


Info

Publication number
CN106056658A
Authority
CN
China
Prior art keywords
material map
pixel
parameter
sample image
rendering requirement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610349066.0A
Other languages
Chinese (zh)
Other versions
CN106056658B (en)
Inventor
崔树林
杨林
武韬
郭炜炜
吴云
黄衫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsoft Digital Network Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Online Game Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsoft Online Game Technology Co Ltd filed Critical Zhuhai Kingsoft Online Game Technology Co Ltd
Priority to CN201610349066.0A priority Critical patent/CN106056658B/en
Publication of CN106056658A publication Critical patent/CN106056658A/en
Application granted granted Critical
Publication of CN106056658B publication Critical patent/CN106056658B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Abstract

Embodiments of the invention provide a virtual object rendering method and a virtual object rendering device. The method comprises the following steps: obtaining the material of an object to be rendered and the rendering requirement of each model face; selecting, from a pre-established material map library and according to the material of the object to be rendered, the material maps corresponding to the rendering requirements of the model faces, wherein the material map library stores material maps corresponding to the different rendering requirements of each material; and rendering the model faces according to the selected material maps, thereby completing the rendering of the object to be rendered. When virtual objects are rendered with the method provided by the embodiments of the invention, rendering is carried out with material maps obtained in advance from the real material, without relying on hand drawing by artists. The rendering quality of virtual objects is therefore improved, and the sense of reality of the virtual scene is enhanced.

Description

Virtual object rendering method and device
Technical Field
The present invention relates to the technical field of graphics and image processing, and in particular to a virtual object rendering method and device.
Background Art
Virtual reality technology is a computer simulation system capable of creating a virtual world and letting users experience it. The digital simulated scene generated by the computer simulation system is called a virtual scene, and a user can interact with the virtual scene through a mouse, a keyboard and the like to obtain an immersive experience. At present, virtual reality technology is widely used in fields such as games and animation, for example in online games made with virtual reality technology.
A virtual scene usually includes multiple virtual objects (referred to as "objects" for short). For example, in an online game, a virtual scene typically includes objects such as virtual characters, virtual houses, a virtual sky and virtual weapons and equipment. Each object is usually composed of multiple model faces, where a model face may be a triangular model face, a quadrilateral model face, and so on. To make the virtual scene realistic, the objects included in the virtual scene usually need to be rendered, for example to render their lighting effects, and the quality of object rendering is often directly related to the sense of reality of the virtual scene. In the prior art, object rendering is usually completed by professional artists who draw the required artwork by hand according to their rendering experience.
It can be seen that, when virtual objects are rendered by means of hand-drawn artwork, the result is largely limited by the skill of the professional artists, so the sense of reality of the virtual scene is not high.
Summary of the Invention
The purpose of the embodiments of the present invention is to provide a virtual object rendering method and device, so as to improve the sense of reality of a virtual scene.
To achieve the above purpose, an embodiment of the invention discloses a virtual object rendering method, the method comprising:
obtaining the material of an object to be rendered and the rendering requirement of each model face;
according to the material of the object to be rendered, selecting from a pre-established material map library the material maps corresponding to the rendering requirements of the model faces, wherein the material map library stores the material maps corresponding to the different rendering requirements of each material;
rendering each model face according to the selected material maps, thereby completing the rendering of the object to be rendered.
Optionally, the material map library is established in the following manner:
using a display as an area light source, simulating illumination patterns of different shapes based on the Fourier transform, and using a single camera to acquire sample images of a preset material under different illumination conditions;
according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, calculating the material map parameters of the material for the different rendering requirements by a nonlinear optimization method;
according to the material map parameters, generating the material maps corresponding to the different rendering requirements of the material.
Optionally, calculating, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the material for the different rendering requirements by a nonlinear optimization method includes:
calculating the material map parameters of the material for the different rendering requirements through the following steps:
according to the pixel values of the pixels, clustering all the pixels in each sample image with a preset clustering algorithm to obtain the clustered classes;
according to a preset pixel traversal rule, determining the first pixel in each class;
according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, calculating the material map parameters of the first pixel in each class by the nonlinear optimization method;
according to the brightness ratios of the other pixels in each class to the first pixel, calculating the weight coefficients of the other pixels, and calculating the material map parameters of the other pixels from the material map parameters of the first pixel in the class.
Optionally, using a display as an area light source, simulating illumination patterns of different shapes based on the Fourier transform, and using a single camera to acquire sample images of the preset material under different illumination conditions includes:
using a display as an area light source, simulating illumination patterns of different shapes based on the Fourier transform, and using a single camera to collect sample images of the material under different illumination intensities of each preset light source color.
Optionally, the material map parameters include: a diffuse reflection coefficient, a specular reflection coefficient, a normal vector and a gloss exponent;
calculating, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the first pixel in each class by the nonlinear optimization method includes:
according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, calculating the material map parameters of the first pixel in each class according to the following formula:
argmin Σ_x ||V(x) − I(x)||²,
wherein V(x) = ρ_d(x)·Dot(N(x), L) + ρ_s(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L), I(x) is the pixel value of the pixel x in each sample image having the same coordinates as the first pixel, ρ_d(x) is the diffuse reflection coefficient of pixel x, ρ_s(x) is the specular reflection coefficient of pixel x, N(x) is the normal vector of pixel x, g(x) is the gloss exponent of pixel x, L is the light source direction of pixel x, and H is the half vector between the light source direction and the image capture device direction of pixel x.
Optionally, after generating, according to the material map parameters, the material maps corresponding to the different rendering requirements of the material, the method further includes:
correcting the calculated material map parameters of the material by using the standard material map parameters of a pre-generated standard material.
Optionally, the rendering requirement of each model face includes specular reflection information, diffuse reflection information, normal vector information and highlight information;
the material map library stores the specular reflection map, diffuse reflection map, normal vector map and highlight map corresponding to each material.
To achieve the above purpose, an embodiment of the invention discloses a virtual object rendering device, the device comprising:
an information obtaining module, configured to obtain the material of an object to be rendered and the rendering requirement of each model face;
a map selection module, configured to select, according to the material of the object to be rendered, the material maps corresponding to the rendering requirements of the model faces from a pre-established material map library, wherein the material map library stores the material maps corresponding to the different rendering requirements of each material;
a rendering module, configured to render each model face according to the selected material maps, thereby completing the rendering of the object to be rendered.
Optionally, the device further includes a material map library establishing module, which includes: an image obtaining submodule, a map parameter calculating submodule and a material map generating submodule; wherein
the image obtaining submodule is configured to use a display as an area light source, simulate illumination patterns of different shapes based on the Fourier transform, and use a single camera to acquire sample images of a preset material under different illumination conditions;
the map parameter calculating submodule is configured to calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the material for the different rendering requirements by a nonlinear optimization method;
the material map generating submodule is configured to generate, according to the material map parameters, the material maps corresponding to the different rendering requirements of the material.
Optionally, the map parameter calculating submodule includes: a clustering unit, a pixel determining unit, a first map parameter calculating unit and a second map parameter calculating unit; wherein
the clustering unit is configured to cluster, according to the pixel values of the pixels, all the pixels in each sample image with a preset clustering algorithm to obtain the clustered classes;
the pixel determining unit is configured to determine the first pixel in each class according to a preset pixel traversal rule;
the first map parameter calculating unit is configured to calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the first pixel in each class by the nonlinear optimization method;
the second map parameter calculating unit is configured to calculate the weight coefficients of the other pixels according to the brightness ratios of the other pixels in each class to the first pixel, and to calculate the material map parameters of the other pixels from the material map parameters of the first pixel in the class.
Optionally, the image obtaining submodule is specifically configured to:
use a display as an area light source, simulate illumination patterns of different shapes based on the Fourier transform, and use a single camera to collect sample images of the material under different illumination intensities of each preset light source color.
Optionally, the material map parameters include: a diffuse reflection coefficient, a specular reflection coefficient, a normal vector and a gloss exponent;
the first map parameter calculating unit is specifically configured to:
calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the first pixel in each class according to the following formula:
argmin Σ_x ||V(x) − I(x)||²,
wherein V(x) = ρ_d(x)·Dot(N(x), L) + ρ_s(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L), I(x) is the pixel value of the pixel x in each sample image having the same coordinates as the first pixel, ρ_d(x) is the diffuse reflection coefficient of pixel x, ρ_s(x) is the specular reflection coefficient of pixel x, N(x) is the normal vector of pixel x, g(x) is the gloss exponent of pixel x, L is the light source direction of pixel x, and H is the half vector between the light source direction and the image capture device direction of pixel x.
Optionally, the device further includes a map parameter correcting module, configured to:
after the material map generating submodule generates, according to the material map parameters, the material maps corresponding to the different rendering requirements of the material, correct the calculated material map parameters of the material by using the standard material map parameters of a pre-generated standard material.
Optionally, the rendering requirement of each model face includes specular reflection information, diffuse reflection information, normal vector information and highlight information;
the material map library stores the specular reflection map, diffuse reflection map, normal vector map and highlight map corresponding to each material.
The embodiments of the present invention provide a virtual object rendering method and device. When a virtual object is rendered, the material of the object to be rendered and the rendering requirement of each model face are first obtained; then, according to the material of the object to be rendered, the material maps corresponding to the rendering requirements of the model faces are selected from a pre-established material map library; finally, each model face is rendered according to the selected material maps, thereby completing the rendering of the object to be rendered. When virtual objects are rendered with the method provided by the embodiments of the invention, rendering is carried out with material maps obtained in advance from the real material, without relying on hand drawing by artists, which improves the rendering quality of virtual objects and enhances the sense of reality of the virtual scene.
Brief Description of the Drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a virtual object rendering method provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a virtual object rendering device provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
As shown in Fig. 1, which is a schematic flow chart of a virtual object rendering method provided by an embodiment of the present invention, the method may include the following steps:
Step S101: obtaining the material of an object to be rendered and the rendering requirement of each model face.
A material can be understood as the combination of substance and texture that gives a surface its specific visual attributes; in short, it is what an object appears to be made of. These visual attributes include the color, texture, smoothness, transparency, reflectivity, refractive index, luminosity and so on of the surface.
Different materials usually have different visual attributes, so the rendering requirements when rendering model faces of different materials also tend to differ.
The "rendering requirement" described herein is information used to indicate how the object to be rendered should be rendered.
It should be noted that those skilled in the art can reasonably set the rendering requirements of the object to be rendered according to the specific situation of the actual application.
In one implementation, the rendering requirement of each model face may specifically include: specular reflection information, diffuse reflection information, normal vector information and highlight information. For example, for a mirror-like rendering effect, the rendering requirement may include specular reflection information and a normal vector; for a diffuse rendering effect, the rendering requirement may include diffuse reflection information and a normal vector. It should be noted that only a few specific rendering requirements are listed here; there may of course be other rendering requirements, which are not enumerated one by one.
Of course, in practical applications, the rendering requirement of each model face may also include information other than the four kinds listed above; the embodiments of the present invention do not limit the specific content of the rendering requirement.
Step S102: according to the material of the object to be rendered, selecting from a pre-established material map library the material maps corresponding to the rendering requirements of the model faces.
The material map library stores the material maps corresponding to the different rendering requirements of each material.
It should be noted that the material map library stores the material maps of various materials, each material may in turn include multiple material maps, and each material map corresponds one-to-one with a specific rendering requirement of its material. For example, if the rendering requirement indicating how material A is to be rendered includes specular reflection information and a normal vector, then material A in the material map library should at least include a specular reflection map and a normal vector map.
In one implementation, the material map library stores the specular reflection map, diffuse reflection map, normal vector map and highlight map corresponding to each material. Of course, this is only an example: the maps stored for each material may also include maps other than the four kinds listed, but they at least need to include the maps corresponding to each specific rendering requirement mentioned in step S101. A minimal data-structure sketch of such a library is given below.
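The following Python sketch illustrates one possible way to organize and query such a material map library. The class names, field names and file names are illustrative assumptions for explanation only, not a data layout mandated by the embodiment.

```python
# Illustrative sketch only: names and structure are assumptions, not the data
# layout required by this embodiment.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MaterialMaps:
    # One entry per rendering requirement, e.g. "specular", "diffuse",
    # "normal", "highlight" -> path (or array) of the corresponding map.
    maps: Dict[str, str] = field(default_factory=dict)

class MaterialMapLibrary:
    def __init__(self):
        self._materials: Dict[str, MaterialMaps] = {}

    def add_material(self, material_name: str, maps: Dict[str, str]) -> None:
        self._materials[material_name] = MaterialMaps(maps=dict(maps))

    def select(self, material_name: str, rendering_requirements: List[str]) -> Dict[str, str]:
        """Return the maps of one material that match the given rendering requirements."""
        entry = self._materials[material_name]
        return {req: entry.maps[req] for req in rendering_requirements}

# Usage example (hypothetical material and file names):
library = MaterialMapLibrary()
library.add_material("brushed_metal", {
    "specular": "brushed_metal_specular.png",
    "diffuse": "brushed_metal_diffuse.png",
    "normal": "brushed_metal_normal.png",
    "highlight": "brushed_metal_highlight.png",
})
selected = library.select("brushed_metal", ["specular", "normal"])
```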
In a specific embodiment of the present invention, the material map library can be established in the following manner:
(11) using a display as an area light source, simulating illumination patterns of different shapes based on the Fourier transform, and acquiring sample images of a preset material under different illumination conditions with a single camera.
Specifically, the sample images of the preset material under different illumination conditions can be acquired in the following way: collecting sample images of the preset material under different illumination intensities of each preset light source color, where the shape of the light source is variable.
The different illumination intensities of the preset light source colors described herein mean that, during sample image collection, the color of the light source can differ, for example a white light source, a blue light source, a red light source and the like; in addition, even for a light source of the same color, for example a white light source, the illumination intensity can also differ, for example bright white light, dim white light, and so on.
It should be noted that sample images are collected under different illumination conditions in order to increase the diversity of samples of the preset material, so as to ensure that accurate material maps can be obtained in subsequent processing.
In one implementation, the preset light source may be an area light source. In practical applications, a liquid crystal display can be used to simulate light sources under different illumination conditions, which greatly reduces the cost of real light sources and is convenient to use. Of course, this is only an example; the embodiments of the present invention do not limit the color or illumination intensity of the preset light source, and those skilled in the art should set them reasonably according to the specific situation of the actual application. A small sketch of generating such display-based illumination patterns is given below.
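The embodiment only states that illumination patterns of different shapes are simulated on the display based on the Fourier transform; a common concrete choice is to show sinusoidal patterns, i.e. single-frequency Fourier basis functions. The sketch below assumes that choice, and its resolution, frequency list and intensity values are likewise assumptions for illustration.

```python
# Sketch of displaying Fourier-basis (sinusoidal) illumination patterns on a screen
# used as an area light source. The use of sinusoids, the resolution and the
# frequency list are assumptions, not fixed by the embodiment.
import numpy as np

def fourier_illumination_pattern(width, height, fx, fy, phase=0.0):
    """Return a grayscale pattern in [0, 1] for one spatial frequency (fx, fy)."""
    xs = np.linspace(0.0, 1.0, width)
    ys = np.linspace(0.0, 1.0, height)
    xx, yy = np.meshgrid(xs, ys)
    # Shift the sinusoid into [0, 1] so it can be shown on a display.
    return 0.5 + 0.5 * np.cos(2.0 * np.pi * (fx * xx + fy * yy) + phase)

# Generate a small set of patterns of different shapes (frequencies/orientations),
# each at a dim and a bright intensity of a white light source.
patterns = []
for fx, fy in [(0, 1), (1, 0), (1, 1), (2, 1)]:
    for intensity in (0.3, 1.0):
        gray = fourier_illumination_pattern(512, 512, fx, fy)
        rgb = np.stack([gray, gray, gray], axis=-1) * intensity
        patterns.append(rgb)
# Each pattern would be shown full-screen while the single camera captures one
# sample image of the material.
```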
(12) according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, calculating the material map parameters of this material for the different rendering requirements by a nonlinear optimization method.
The material map parameters are the concrete numerical values of the material maps used to render the object to be rendered. The material map parameters may include multiple specific parameters, and each specific parameter corresponds one-to-one with a specific rendering requirement. For example, the specular reflection information in the rendering requirement has a corresponding specular reflection map parameter among the material map parameters. More generally, the specific contents of the rendering requirements, the material maps and the material map parameters are all in one-to-one correspondence; for example, specular reflection information (a rendering requirement), the specular reflection map (a material map) and the specular reflection map parameter (a material map parameter) correspond to one another.
Specifically, the material map parameters of the material for the different rendering requirements are obtained through the following steps:
(12-1) according to the pixel values of the pixels, clustering all the pixels in each sample image with a preset clustering algorithm to obtain the clustered classes.
The preset clustering algorithm described herein can be a clustering algorithm commonly used in the prior art, for example the K-means clustering algorithm; clustering algorithms of the prior art are not described again here.
In addition, the number of classes after clustering can be set reasonably by those skilled in the art according to the specific situation of the actual application, and the present invention does not limit this. A minimal clustering sketch is given below.
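The following sketch clusters the pixels of the sample images by pixel value with K-means. Using scikit-learn and 16 clusters is an illustrative assumption; the embodiment only requires "a preset clustering algorithm" with a reasonable number of classes.

```python
# Sketch of clustering all pixels of the sample images by pixel value with K-means.
# scikit-learn and the cluster count of 16 are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans

def cluster_pixels(sample_images, n_classes=16):
    """sample_images: list of HxWx3 arrays. Returns one HxW label map per image."""
    h, w = sample_images[0].shape[:2]
    # Stack every pixel of every sample image into one (N, 3) feature matrix.
    features = np.concatenate([img.reshape(-1, 3) for img in sample_images], axis=0)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(features)
    # Split the flat label vector back into one label map per sample image.
    return [lab.reshape(h, w) for lab in np.split(labels, len(sample_images))]
```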
(12-2) according to a preset pixel traversal rule, determining the first pixel in each class.
In one implementation, the sample images can be traversed in the order in which they were collected, scanning each sample image row by row; when a pixel belonging to the current clustered class is scanned, that pixel is determined to be the first pixel of the class, and the scan then proceeds to the next class.
For example, suppose three sample images I1, I2 and I3 were acquired in that order, and the classes after clustering are C1 and C2. First, C1 is taken as the current class and the scan starts from I1; when the pixel at coordinates (2, 3) in I1 is scanned and found to belong to class C1, that pixel P1 is determined to be the first pixel of class C1. Then C2 is taken as the current class and scanned in the same way; when the pixel at coordinates (5, 6) in I2 is scanned and found to belong to class C2, that pixel P2 is determined to be the first pixel of class C2, and the traversal ends.
In another implementation, the traversal can be carried out class by class, scanning the pixels within each class in their clustering order; when a pixel belonging to the current class is scanned, that pixel is determined to be the first pixel of the class, and the scan then proceeds to the next class.
For example, suppose again that the classes after clustering are C1 and C2, where C1 includes the four pixels P11, P12, P13 and P14, and C2 includes the four pixels P21, P22, P23 and P24. First, C1 is taken as the current class and the scan starts within C1; when pixel P12 is scanned and found to belong to class C1, pixel P12 is determined to be the first pixel of class C1. Then C2 is taken as the current class and scanned in the same way; when pixel P23 is scanned and found to belong to class C2, pixel P23 is determined to be the first pixel of class C2, and the traversal ends.
It should be noted that the two implementations listed above are only examples; the embodiments of the present invention do not limit the specific pixel traversal rule, and any feasible implementation can be applied to the present invention. A sketch of the first (row-scan) traversal rule follows.
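The sketch below implements the first traversal rule described above: scan the sample images in acquisition order, row by row, and record the first pixel found for each class. The label-map input format and the (image_index, row, col) result layout are assumptions carried over from the clustering sketch.

```python
# Sketch of the row-by-row traversal rule: for each class, find the first pixel
# encountered when scanning the sample images in acquisition order.
import numpy as np

def first_pixel_per_class(label_maps, n_classes):
    """label_maps: list of HxW integer label maps, one per sample image (acquisition order)."""
    first = {}
    for class_id in range(n_classes):
        for image_index, labels in enumerate(label_maps):
            hits = np.argwhere(labels == class_id)   # row-major order = row-by-row scan
            if hits.size > 0:
                row, col = hits[0]
                first[class_id] = (image_index, int(row), int(col))
                break                                # move on to the next class
    return first
```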
(12-3) according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, calculating the material map parameters of the first pixel in each class by the nonlinear optimization method.
In one implementation, the material map parameters may include: a diffuse reflection coefficient, a specular reflection coefficient, a normal vector and a gloss exponent;
calculating, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the first pixel in each class by the nonlinear optimization method may include:
according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, calculating the material map parameters of the first pixel in each class according to the following formula:
argmin Σ_x ||V(x) − I(x)||²,
wherein V(x) = ρ_d(x)·Dot(N(x), L) + ρ_s(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L), I(x) is the pixel value of the pixel x in each sample image having the same coordinates as the first pixel, ρ_d(x) is the diffuse reflection coefficient of pixel x, ρ_s(x) is the specular reflection coefficient of pixel x, N(x) is the normal vector of pixel x, g(x) is the gloss exponent of pixel x, L is the light source direction of pixel x, and H is the half vector between the light source direction and the image capture device direction of pixel x.
In one implementation, the light source direction L of a pixel and the half vector H between the light source direction and the image capture device direction of the pixel can be obtained in the following way:
performing image rectification on each acquired sample image according to a preset image rectification rule;
calculating the light source direction L and the half vector H from the rectified image data.
It should be noted that the light source direction L and the half vector H mentioned here are related to the positions of the hardware when the sample images are collected. In practical applications, besides obtaining L and H through image rectification as mentioned above, other ways can also be used, for example manual measurement with measuring instruments. Of course, the embodiments of the present invention do not limit the way in which L and H are obtained, and any feasible implementation can be applied to the present invention. A small optimization sketch based on the above formula is given below.
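The following sketch fits the material map parameters of one first-in-class pixel from its observations across the sample images by nonlinear least squares, using the model V(x) above. The use of scipy's least_squares, the spherical parameterization of the normal, the initial guess and the bounds are assumptions; the embodiment only requires "a nonlinear optimization method".

```python
# Sketch of fitting (rho_d, rho_s, N, g) for one pixel with nonlinear least squares,
# using V = rho_d*Dot(N, L) + rho_s*pow(Dot(N, H), g)*Dot(N, L).
# Solver choice, normal parameterization and bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

def fit_pixel_parameters(intensities, light_dirs, half_vectors):
    """intensities: (K,) observed values of this pixel over K sample images.
    light_dirs, half_vectors: (K, 3) unit vectors L and H for each observation."""
    def normal_from_angles(theta, phi):
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    def residuals(params):
        rho_d, rho_s, g, theta, phi = params
        n = normal_from_angles(theta, phi)
        n_dot_l = np.clip(light_dirs @ n, 0.0, None)
        n_dot_h = np.clip(half_vectors @ n, 0.0, None)
        v = rho_d * n_dot_l + rho_s * np.power(n_dot_h, g) * n_dot_l
        return v - intensities

    x0 = np.array([0.5, 0.5, 10.0, 0.1, 0.0])            # rho_d, rho_s, g, theta, phi
    bounds = ([0.0, 0.0, 1.0, 0.0, -np.pi], [1.0, 1.0, 500.0, np.pi, np.pi])
    result = least_squares(residuals, x0, bounds=bounds)
    rho_d, rho_s, g, theta, phi = result.x
    return rho_d, rho_s, normal_from_angles(theta, phi), g
```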
(12-4) according to the brightness ratios of the other pixels in each class to the first pixel, calculating the weight coefficients of the other pixels, and calculating the material map parameters of the other pixels from the material map parameters of the first pixel in the class.
It can be seen that, in the process of obtaining the material map parameters, the full calculation is actually performed only for the first pixel of each class determined after clustering, while the material map parameters of the other pixels in each class are derived from the material map parameters of the first pixel of the same class. Since the pixels in image data usually number in the tens of thousands or more, the method provided by steps (12-1) to (12-4) greatly reduces the amount of computation needed to calculate the material map parameters. A sketch of this propagation step follows.
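The embodiment states only that a weight coefficient is computed from the brightness ratio of each other pixel to the first pixel of its class, and that the other pixels' parameters are then derived from the first pixel's parameters. The sketch below assumes the simplest reading, scaling the reflectance coefficients by that ratio while reusing the fitted normal and gloss exponent; that interpretation is an assumption, not the embodiment's definitive rule.

```python
# Sketch of propagating each class's fitted first-pixel parameters to the other
# pixels of the class. Scaling rho_d and rho_s by the brightness ratio and copying
# N and g unchanged is an assumed, simplest-case interpretation.
import numpy as np

def propagate_parameters(brightness, labels, first_pixels, first_params):
    """brightness: HxW brightness map; labels: HxW class labels;
    first_pixels: {class_id: (row, col)}; first_params: {class_id: (rho_d, rho_s, N, g)}."""
    h, w = labels.shape
    rho_d_map = np.zeros((h, w))
    rho_s_map = np.zeros((h, w))
    normal_map = np.zeros((h, w, 3))
    gloss_map = np.zeros((h, w))
    for class_id, (r0, c0) in first_pixels.items():
        rho_d, rho_s, normal, gloss = first_params[class_id]
        mask = labels == class_id
        # Weight coefficient: brightness ratio of each pixel to the class's first pixel.
        weight = brightness[mask] / max(brightness[r0, c0], 1e-6)
        rho_d_map[mask] = weight * rho_d
        rho_s_map[mask] = weight * rho_s
        normal_map[mask] = normal
        gloss_map[mask] = gloss
    return rho_d_map, rho_s_map, normal_map, gloss_map
```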
(13) according to the material map parameters, generating the material maps corresponding to the different rendering requirements of this material.
In one implementation, after step (13) generates, according to the material map parameters, the material maps corresponding to the different rendering requirements of the material, the method may further include:
(14) correcting the calculated material map parameters by using the standard material map parameters of a pre-generated standard material.
By using the standard material map parameters of the standard material to correct the calculated material map parameters, distorted material map parameters can be corrected as far as possible, which further ensures the accuracy and realism of the material map parameters. A minimal correction sketch is given below.
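The embodiment does not specify how the standard material map parameters are used for correction. The sketch below assumes a simple per-parameter gain correction derived from a standard material measured with the same setup; this is an illustrative assumption only.

```python
# Sketch of correcting calculated material map parameters against a standard material.
# The per-parameter gain correction is an assumption; the embodiment only states that
# standard material map parameters are used for correction.
import numpy as np

def correct_with_standard(calculated, measured_standard, reference_standard, eps=1e-6):
    """calculated: dict of parameter maps for the captured material;
    measured_standard: the same parameters measured for the standard material;
    reference_standard: the known (pre-generated) parameters of the standard material."""
    corrected = {}
    for name, values in calculated.items():
        # Gain that maps the measured standard onto its known reference values.
        gain = reference_standard[name] / np.maximum(measured_standard[name], eps)
        corrected[name] = values * gain
    return corrected
```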
Step S103: rendering each model face according to the selected material maps, thereby completing the rendering of the object to be rendered. A high-level sketch of the complete flow of steps S101 to S103 is given below.
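The sketch below ties steps S101 to S103 together on top of the library sketch given earlier. The ModelFace fields and the render_face() call are hypothetical placeholders; the embodiment does not prescribe a particular scene representation or rendering back end.

```python
# High-level sketch of steps S101-S103. ModelFace, the earlier library sketch and
# render_face() are illustrative assumptions, not the embodiment's actual renderer.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ModelFace:
    material: str                      # e.g. "brushed_metal"
    rendering_requirements: List[str]  # e.g. ["specular", "normal"]

def render_face(face: ModelFace, maps: Dict[str, str]) -> None:
    # Placeholder for the actual rendering back end (hypothetical).
    print(f"rendering face of {face.material} with maps {sorted(maps)}")

def render_object(faces: List[ModelFace], library) -> None:
    for face in faces:                                    # S101: material + requirements
        maps = library.select(face.material,              # S102: look up material maps
                              face.rendering_requirements)
        render_face(face, maps)                           # S103: render the model face
```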
When virtual objects are rendered with the method provided by the embodiments of the present invention, rendering is carried out with material maps obtained in advance from the real material, without relying on hand drawing by artists, which improves the rendering quality of virtual objects and enhances the sense of reality of the virtual scene.
Furthermore, when material maps are obtained by hand drawing as in the prior art, even identical maps of the same material still need to be drawn again by an artist each time. In contrast, in the solution provided by the embodiments of the present invention, the material maps used to render each model face are selected from the pre-established material map library; once the library has been established, rendering only requires selecting the appropriate material maps, without drawing them again or rebuilding the library. This greatly reduces the workload of virtual object rendering and improves its efficiency.
As shown in Fig. 2, which is a schematic structural diagram of a virtual object rendering device provided by an embodiment of the present invention, the device may include the following modules:
an information obtaining module 210, configured to obtain the material of an object to be rendered and the rendering requirement of each model face;
a map selection module 220, configured to select, according to the material of the object to be rendered, the material maps corresponding to the rendering requirements of the model faces from a pre-established material map library,
wherein the material map library stores the material maps corresponding to the different rendering requirements of each material;
a rendering module 230, configured to render each model face according to the selected material maps, thereby completing the rendering of the object to be rendered.
Specifically, the device may further include a material map library establishing module, which may include: an image obtaining submodule, a map parameter calculating submodule and a material map generating submodule.
The image obtaining submodule is configured to use a display as an area light source, simulate illumination patterns of different shapes based on the Fourier transform, and acquire sample images of a preset material under different illumination conditions with a single camera.
The map parameter calculating submodule is configured to calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of this material for the different rendering requirements by a nonlinear optimization method.
The material map generating submodule is configured to generate, according to the material map parameters, the material maps corresponding to the different rendering requirements of this material.
Specifically, the map parameter calculating submodule may include: a clustering unit, a pixel determining unit, a first map parameter calculating unit and a second map parameter calculating unit.
The clustering unit is configured to cluster, according to the pixel values of the pixels, all the pixels in each sample image with a preset clustering algorithm to obtain the clustered classes.
The pixel determining unit is configured to determine the first pixel in each class according to a preset pixel traversal rule.
The first map parameter calculating unit is configured to calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the first pixel in each class by the nonlinear optimization method.
The second map parameter calculating unit is configured to calculate the weight coefficients of the other pixels according to the brightness ratios of the other pixels in each class to the first pixel, and to calculate the material map parameters of the other pixels from the material map parameters of the first pixel in the class.
Specifically, the image obtaining submodule is specifically configured to:
use a display as an area light source, simulate illumination patterns of different shapes based on the Fourier transform, and use a single camera to collect sample images of this material under different illumination intensities of each preset light source color.
Specifically, the material map parameters may include: a diffuse reflection coefficient, a specular reflection coefficient, a normal vector and a gloss exponent.
The first map parameter calculating unit is specifically configured to:
calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the first pixel in each class according to the following formula:
argmin Σ_x ||V(x) − I(x)||²,
wherein V(x) = ρ_d(x)·Dot(N(x), L) + ρ_s(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L), I(x) is the pixel value of the pixel x in each sample image having the same coordinates as the first pixel, ρ_d(x) is the diffuse reflection coefficient of pixel x, ρ_s(x) is the specular reflection coefficient of pixel x, N(x) is the normal vector of pixel x, g(x) is the gloss exponent of pixel x, L is the light source direction of pixel x, and H is the half vector between the light source direction and the image capture device direction of pixel x.
Specifically, the device may further include a map parameter correcting module, configured to:
after the material map generating submodule generates, according to the material map parameters, the material maps corresponding to the different rendering requirements of the material, correct the calculated material map parameters of the material by using the standard material map parameters of a pre-generated standard material.
Specifically, the rendering requirement of each model face may include specular reflection information, diffuse reflection information, normal vector information and highlight information.
The material map library stores the specular reflection map, diffuse reflection map, normal vector map and highlight map corresponding to each material.
When virtual objects are rendered with the method provided by the embodiments of the present invention, rendering is carried out with material maps obtained in advance from the real material, without relying on hand drawing by artists, which improves the rendering quality of virtual objects and enhances the sense of reality of the virtual scene.
Furthermore, when material maps are obtained by hand drawing as in the prior art, even identical maps of the same material still need to be drawn again by an artist each time. In contrast, in the solution provided by the embodiments of the present invention, the material maps used to render each model face are selected from the pre-established material map library; once the library has been established, rendering only requires selecting the appropriate material maps, without drawing them again or rebuilding the library. This greatly reduces the workload of virtual object rendering and improves its efficiency.
As for the device embodiments, since they are basically similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
It should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article or device. Without further limitation, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
Those of ordinary skill in the art will appreciate that all or part of the steps in the above method embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, the storage medium referred to herein being, for example, a ROM/RAM, a magnetic disk, an optical disc, or the like.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the present invention are all included in the protection scope of the present invention.

Claims (14)

1. A virtual object rendering method, characterized in that the method comprises:
obtaining the material of an object to be rendered and the rendering requirement of each model face;
according to the material of the object to be rendered, selecting from a pre-established material map library the material maps corresponding to the rendering requirements of the model faces, wherein the material map library stores the material maps corresponding to the different rendering requirements of each material;
rendering each model face according to the selected material maps, thereby completing the rendering of the object to be rendered.
2. The method according to claim 1, characterized in that the material map library is established in the following manner:
using a display as an area light source, simulating illumination patterns of different shapes based on the Fourier transform, and using a single camera to acquire sample images of a preset material under different illumination conditions;
according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, calculating the material map parameters of the material for the different rendering requirements by a nonlinear optimization method;
according to the material map parameters, generating the material maps corresponding to the different rendering requirements of the material.
3. The method according to claim 2, characterized in that calculating, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the material for the different rendering requirements by a nonlinear optimization method comprises:
calculating the material map parameters of the material for the different rendering requirements through the following steps:
according to the pixel values of the pixels, clustering all the pixels in each sample image with a preset clustering algorithm to obtain the clustered classes;
according to a preset pixel traversal rule, determining the first pixel in each class;
according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, calculating the material map parameters of the first pixel in each class by the nonlinear optimization method;
according to the brightness ratios of the other pixels in each class to the first pixel, calculating the weight coefficients of the other pixels, and calculating the material map parameters of the other pixels from the material map parameters of the first pixel in the class.
4. The method according to claim 2 or 3, characterized in that using a display as an area light source, simulating illumination patterns of different shapes based on the Fourier transform, and using a single camera to acquire sample images of the preset material under different illumination conditions comprises:
using a display as an area light source, simulating illumination patterns of different shapes based on the Fourier transform, and using a single camera to collect sample images of the material under different illumination intensities of each preset light source color.
5. The method according to claim 3, characterized in that the material map parameters comprise: a diffuse reflection coefficient, a specular reflection coefficient, a normal vector and a gloss exponent;
calculating, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the first pixel in each class by the nonlinear optimization method comprises:
according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, calculating the material map parameters of the first pixel in each class according to the following formula:
argmin Σ_x ||V(x) − I(x)||²,
wherein V(x) = ρ_d(x)·Dot(N(x), L) + ρ_s(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L), I(x) is the pixel value of the pixel x in each sample image having the same coordinates as the first pixel, ρ_d(x) is the diffuse reflection coefficient of pixel x, ρ_s(x) is the specular reflection coefficient of pixel x, N(x) is the normal vector of pixel x, g(x) is the gloss exponent of pixel x, L is the light source direction of pixel x, and H is the half vector between the light source direction and the image capture device direction of pixel x.
6. The method according to claim 2 or 3, characterized in that after generating, according to the material map parameters, the material maps corresponding to the different rendering requirements of the material, the method further comprises:
correcting the calculated material map parameters of the material by using the standard material map parameters of a pre-generated standard material.
7. The method according to claim 1, characterized in that
the rendering requirement of each model face comprises specular reflection information, diffuse reflection information, normal vector information and highlight information;
the material map library stores the specular reflection map, diffuse reflection map, normal vector map and highlight map corresponding to each material.
8. A virtual object rendering device, characterized in that the device comprises:
an information obtaining module, configured to obtain the material of an object to be rendered and the rendering requirement of each model face;
a map selection module, configured to select, according to the material of the object to be rendered, the material maps corresponding to the rendering requirements of the model faces from a pre-established material map library, wherein the material map library stores the material maps corresponding to the different rendering requirements of each material;
a rendering module, configured to render each model face according to the selected material maps, thereby completing the rendering of the object to be rendered.
9. The device according to claim 8, characterized in that the device further comprises a material map library establishing module, the material map library establishing module comprising: an image obtaining submodule, a map parameter calculating submodule and a material map generating submodule; wherein
the image obtaining submodule is configured to use a display as an area light source, simulate illumination patterns of different shapes based on the Fourier transform, and use a single camera to acquire sample images of a preset material under different illumination conditions;
the map parameter calculating submodule is configured to calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the material for the different rendering requirements by a nonlinear optimization method;
the material map generating submodule is configured to generate, according to the material map parameters, the material maps corresponding to the different rendering requirements of the material.
10. The device according to claim 9, characterized in that the map parameter calculating submodule comprises: a clustering unit, a pixel determining unit, a first map parameter calculating unit and a second map parameter calculating unit; wherein
the clustering unit is configured to cluster, according to the pixel values of the pixels, all the pixels in each sample image with a preset clustering algorithm to obtain the clustered classes;
the pixel determining unit is configured to determine the first pixel in each class according to a preset pixel traversal rule;
the first map parameter calculating unit is configured to calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the first pixel in each class by the nonlinear optimization method;
the second map parameter calculating unit is configured to calculate the weight coefficients of the other pixels according to the brightness ratios of the other pixels in each class to the first pixel, and to calculate the material map parameters of the other pixels from the material map parameters of the first pixel in the class.
11. The device according to claim 9 or 10, characterized in that the image obtaining submodule is specifically configured to:
use a display as an area light source, simulate illumination patterns of different shapes based on the Fourier transform, and use a single camera to collect sample images of the material under different illumination intensities of each preset light source color.
12. The device according to claim 10, characterized in that the material map parameters comprise: a diffuse reflection coefficient, a specular reflection coefficient, a normal vector and a gloss exponent;
the first map parameter calculating unit is specifically configured to:
calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image and the camera position, the material map parameters of the first pixel in each class according to the following formula:
argmin Σ_x ||V(x) − I(x)||²,
wherein V(x) = ρ_d(x)·Dot(N(x), L) + ρ_s(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L), I(x) is the pixel value of the pixel x in each sample image having the same coordinates as the first pixel, ρ_d(x) is the diffuse reflection coefficient of pixel x, ρ_s(x) is the specular reflection coefficient of pixel x, N(x) is the normal vector of pixel x, g(x) is the gloss exponent of pixel x, L is the light source direction of pixel x, and H is the half vector between the light source direction and the image capture device direction of pixel x.
13. The device according to claim 9 or 10, characterized in that the device further comprises a map parameter correcting module, configured to:
after the material map generating submodule generates, according to the material map parameters, the material maps corresponding to the different rendering requirements of the material, correct the calculated material map parameters of the material by using the standard material map parameters of a pre-generated standard material.
14. The device according to claim 8, characterized in that
the rendering requirement of each model face comprises specular reflection information, diffuse reflection information, normal vector information and highlight information;
the material map library stores the specular reflection map, diffuse reflection map, normal vector map and highlight map corresponding to each material.
CN201610349066.0A 2016-05-23 2016-05-23 Virtual object rendering method and device Active CN106056658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610349066.0A CN106056658B (en) 2016-05-23 2016-05-23 Virtual object rendering method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610349066.0A CN106056658B (en) 2016-05-23 2016-05-23 Virtual object rendering method and device

Publications (2)

Publication Number Publication Date
CN106056658A true CN106056658A (en) 2016-10-26
CN106056658B CN106056658B (en) 2019-01-25

Family

ID=57175260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610349066.0A Active CN106056658B (en) 2016-05-23 2016-05-23 Virtual object rendering method and device

Country Status (1)

Country Link
CN (1) CN106056658B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600712A (en) * 2016-12-20 2017-04-26 南京大学 Texture replacing method based on two dimension image
CN106898040A (en) * 2017-03-06 2017-06-27 网易(杭州)网络有限公司 Virtual resource object rendering method and device
CN107103638A (en) * 2017-05-27 2017-08-29 杭州万维镜像科技有限公司 A kind of Fast rendering method of virtual scene and model
CN107909638A (en) * 2017-11-15 2018-04-13 网易(杭州)网络有限公司 Rendering method, medium, system and electronic device of virtual object
WO2018094307A1 (en) * 2016-11-18 2018-05-24 Robert Bosch Start-Up Platform North America, Llc, Sensing system and method
CN108933954A (en) * 2017-05-22 2018-12-04 中兴通讯股份有限公司 Method of video image processing, set-top box and computer readable storage medium
CN109377543A (en) * 2018-10-29 2019-02-22 广东明星创意动画有限公司 Method for quickly creating material connections
CN109603155A (en) * 2018-11-29 2019-04-12 网易(杭州)网络有限公司 Merge acquisition methods, device, storage medium, processor and the terminal of textures
CN109934903A (en) * 2017-12-19 2019-06-25 北大方正集团有限公司 Bloom information extracting method, system, computer equipment and storage medium
CN110021071A (en) * 2018-12-25 2019-07-16 阿里巴巴集团控股有限公司 Rendering method, device and equipment in a kind of application of augmented reality
CN110390713A (en) * 2019-06-11 2019-10-29 中新软件(上海)有限公司 Implementation method, device and the computer equipment of the mutual minus effect of mirror surface in entity decoration
CN111068314A (en) * 2019-12-06 2020-04-28 珠海金山网络游戏科技有限公司 Unity-based NGUI resource rendering processing method and device
CN111127623A (en) * 2019-12-25 2020-05-08 上海米哈游天命科技有限公司 Model rendering method and device, storage medium and terminal
CN112132213A (en) * 2020-09-23 2020-12-25 创新奇智(南京)科技有限公司 Sample image processing method and device, electronic equipment and storage medium
CN112619154A (en) * 2020-12-28 2021-04-09 网易(杭州)网络有限公司 Processing method and device of virtual model and electronic device
CN112915536A (en) * 2021-04-02 2021-06-08 网易(杭州)网络有限公司 Rendering method and device of virtual model
CN113069762A (en) * 2021-03-15 2021-07-06 广州三七互娱科技有限公司 Eye image generation method and device of virtual character and electronic equipment
CN113350786A (en) * 2021-05-08 2021-09-07 广州三七极创网络科技有限公司 Skin rendering method and device for virtual character and electronic equipment
CN113648652A (en) * 2021-08-20 2021-11-16 腾讯科技(深圳)有限公司 Object rendering method and device, storage medium and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001184518A (en) * 1999-12-27 2001-07-06 Nec Corp Device and method for plotting three-dimensional image
US20070046665A1 (en) * 2005-08-31 2007-03-01 Yoshihiko Nakagawa Apparatus and program for image generation
CN102346921A (en) * 2011-09-19 2012-02-08 广州市凡拓数码科技有限公司 Renderer-baked light mapping method for three-dimensional software
CN102722904A (en) * 2012-05-30 2012-10-10 北京尔宜居科技有限责任公司 Local rendering method
CN103310488A (en) * 2013-03-20 2013-09-18 常州依丽雅斯纺织品有限公司 mental ray rendering-based virtual reality rendering method
CN104866861A (en) * 2015-04-08 2015-08-26 浙江大学 Material appearance acquisition method based on Kinect equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
PAUL DEBEVEC: "Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography", ACM SIGGRAPH 2008 Classes, ACM *
ZHANG YING: "Development, Design and Implementation of a Web-based Virtual Simulation Campus", China Master's Theses Full-text Database, Information Science and Technology Series *
WANG YIXUAN: "Texture Baking Technology in Virtual Exhibition Hall Construction", Information & Computer *
XUE YUANYUAN: "Implementation of Reflective Materials in 3D MAX Virtual Reality", Friend of Science *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018094307A1 (en) * 2016-11-18 2018-05-24 Robert Bosch Start-Up Platform North America, Llc, Sensing system and method
CN106600712A (en) * 2016-12-20 2017-04-26 南京大学 Texture replacement method based on two-dimensional images
CN106600712B (en) * 2016-12-20 2019-05-31 南京大学 Texture replacement method based on two-dimensional images
CN106898040A (en) * 2017-03-06 2017-06-27 网易(杭州)网络有限公司 Virtual resource object rendering method and device
CN106898040B (en) * 2017-03-06 2020-08-04 网易(杭州)网络有限公司 Virtual resource object rendering method and device
CN108933954A (en) * 2017-05-22 2018-12-04 中兴通讯股份有限公司 Video image processing method, set-top box and computer-readable storage medium
CN107103638A (en) * 2017-05-27 2017-08-29 杭州万维镜像科技有限公司 Fast rendering method for virtual scenes and models
CN107103638B (en) * 2017-05-27 2020-10-16 杭州万维镜像科技有限公司 Rapid rendering method of virtual scene and model
CN107909638B (en) * 2017-11-15 2021-05-14 杭州易现先进科技有限公司 Rendering method, medium, system and electronic device of virtual object
CN107909638A (en) * 2017-11-15 2018-04-13 网易(杭州)网络有限公司 Rendering method, medium, system and electronic device of virtual object
CN109934903A (en) * 2017-12-19 2019-06-25 北大方正集团有限公司 Highlight information extraction method, system, computer equipment and storage medium
CN109934903B (en) * 2017-12-19 2020-10-16 北大方正集团有限公司 Highlight information extraction method, system, computer equipment and storage medium
CN109377543B (en) * 2018-10-29 2022-04-15 广东明星创意动画有限公司 Method for quickly establishing material connection
CN109377543A (en) * 2018-10-29 2019-02-22 广东明星创意动画有限公司 Method for quickly establishing material connection
CN109603155A (en) * 2018-11-29 2019-04-12 网易(杭州)网络有限公司 Method and apparatus for acquiring merged map, storage medium, processor, and terminal
US11325045B2 (en) 2018-11-29 2022-05-10 Netease (Hangzhou) Network Co., Ltd. Method and apparatus for acquiring merged map, storage medium, processor, and terminal
CN110021071A (en) * 2018-12-25 2019-07-16 阿里巴巴集团控股有限公司 Rendering method, device and equipment in an augmented reality application
CN110390713B (en) * 2019-06-11 2023-05-12 中新软件(上海)有限公司 Method and device for realizing mirror surface reciprocal effect in entity decoration and computer equipment
CN110390713A (en) * 2019-06-11 2019-10-29 中新软件(上海)有限公司 Method, device and computer equipment for realizing the mirror surface reciprocal effect in entity decoration
CN111068314A (en) * 2019-12-06 2020-04-28 珠海金山网络游戏科技有限公司 Unity-based NGUI resource rendering processing method and device
CN111068314B (en) * 2019-12-06 2023-09-05 珠海金山数字网络科技有限公司 NGUI resource rendering processing method and device based on Unity
CN111127623B (en) * 2019-12-25 2023-08-29 上海米哈游天命科技有限公司 Model rendering method and device, storage medium and terminal
CN111127623A (en) * 2019-12-25 2020-05-08 上海米哈游天命科技有限公司 Model rendering method and device, storage medium and terminal
CN112132213A (en) * 2020-09-23 2020-12-25 创新奇智(南京)科技有限公司 Sample image processing method and device, electronic equipment and storage medium
CN112619154A (en) * 2020-12-28 2021-04-09 网易(杭州)网络有限公司 Processing method and device of virtual model and electronic device
CN113069762A (en) * 2021-03-15 2021-07-06 广州三七互娱科技有限公司 Eye image generation method and device of virtual character and electronic equipment
CN112915536A (en) * 2021-04-02 2021-06-08 网易(杭州)网络有限公司 Rendering method and device of virtual model
CN112915536B (en) * 2021-04-02 2024-03-22 网易(杭州)网络有限公司 Virtual model rendering method and device
CN113350786A (en) * 2021-05-08 2021-09-07 广州三七极创网络科技有限公司 Skin rendering method and device for virtual character and electronic equipment
CN113648652A (en) * 2021-08-20 2021-11-16 腾讯科技(深圳)有限公司 Object rendering method and device, storage medium and electronic equipment
CN113648652B (en) * 2021-08-20 2023-11-14 腾讯科技(深圳)有限公司 Object rendering method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN106056658B (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN106056658A (en) Virtual object rendering method and virtual object rendering device
Deussen et al. Computer-generated pen-and-ink illustration of trees
Bergen et al. The validity of computer-generated graphic images of forest landscape
CN106652007B (en) Virtual sea surface rendering method and system
CN107644453A (en) Rendering method and system based on physically based shading
Goodsell et al. Rendering volumetric data in molecular systems
CN113436308B (en) Three-dimensional environment air quality dynamic rendering method
US20030038811A1 (en) System and method of line sampling object scene information
CN101710429A (en) Illumination algorithm of augmented reality system based on dynamic light map
CN102096941A (en) Consistent lighting method in a virtual-real fused environment
KR20210095921A (en) Point cloud colorization system using real-time 3D visualization
CN106023300B (en) Volume rendering method and system for translucent materials
CN106898040A (en) Virtual resource object rendering method and device
CN107273622A (en) Fiber-based digital yarn simulation method
US6583790B1 (en) Apparatus for and method of converting height fields into parametric texture maps
CN103679818B (en) Real-time scene rendering method based on virtual area light sources
CN103617593B (en) Implementation method and device of a three-dimensional fluid physics animation engine
Bruder et al. Voronoi-Based Foveated Volume Rendering.
CN112580213A (en) Method and apparatus for generating display image of electric field lines, and storage medium
US6753875B2 (en) System and method for rendering a texture map utilizing an illumination modulation value
Guzek et al. Efficient rendering of caustics with streamed photon mapping
Kakimoto et al. Glare generation based on wave optics
US9514566B2 (en) Image-generated system using beta distribution to provide accurate shadow mapping
CN110298910A (en) Illumination calculation method and apparatus, computing device, and storage medium
Dias et al. Teaching 3D modelling and visualization using VTK

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 102, 202, 302 and 402, No. 325; Room 102 and 202, No. 327; and Room 302, No. 329, Qiandao Ring Road, Tangjiawan Town, High-tech Zone, Zhuhai City, Guangdong Province, 519000

Patentee after: Zhuhai Kingsoft Digital Network Technology Co., Ltd.

Address before: No. 8 Lianshan Lane, Jingshan Road, Jida, Zhuhai, Guangdong, 519000

Patentee before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd.
