CN106056658B - A kind of virtual objects rendering method and device - Google Patents
- Publication number: CN106056658B
- Application number: CN201610349066.0A
- Authority
- CN
- China
- Prior art keywords
- texturing
- pixel
- parameter
- rendering
- sample image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Abstract
Embodiments of the present invention provide a virtual object rendering method and device. The method comprises: obtaining the material of an object to be rendered and the rendering requirement of each model face; according to the material of the object to be rendered, selecting material texture maps corresponding to the rendering requirement of each model face from a pre-established material texture map library, wherein the library stores the texture maps corresponding to the different rendering requirements of each material; and rendering each model face according to the selected texture maps, thereby completing the rendering of the object to be rendered. When virtual objects are rendered with the method provided by the embodiments of the present invention, texture maps obtained in advance from real materials are used for rendering, without relying on hand-drawing by artists, which improves the rendering effect of virtual objects and the realism of the virtual scene.
Description
Technical field
The present invention relates to the technical field of graphics and image processing, and in particular to a virtual object rendering method and device.
Background art

Virtual reality technology is a computer simulation technique that can create virtual worlds for users to experience. The digitized simulated scene generated by such a computer simulation system is called a virtual scene; users can interact with the virtual scene through a mouse, a keyboard, and similar devices, and thereby gain an immersive experience. At present, virtual reality technology is widely used in fields such as games and animation, for example in online games produced with virtual reality technology.

A virtual scene often includes multiple virtual objects (referred to simply as "objects"); for example, the virtual scene of an online game generally includes objects such as virtual characters, virtual houses, virtual sky, and virtual weapons and equipment. Each object is usually composed of multiple model faces, where a model face may be triangular, quadrilateral, and so on. To make a virtual scene realistic, the objects it contains are usually rendered, for example with lighting effects, and the quality of this rendering directly determines the realism of the virtual scene. In the prior art, the rendering of objects is often completed by professional artists who draw by hand according to their rendering experience.

It can be seen that rendering virtual objects by hand-drawing is largely limited by the drawing skill of the professional, which leads to a low sense of realism in the virtual scene.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a virtual object rendering method and device, so as to improve the realism of virtual scenes.
To achieve the above purpose, an embodiment of the present invention discloses a virtual object rendering method, the method comprising:

obtaining the material of an object to be rendered and the rendering requirement of each model face;

according to the material of the object to be rendered, selecting, from a pre-established material texture map library, the texture maps corresponding to the rendering requirement of each model face, wherein the library stores the texture maps corresponding to the different rendering requirements of each material;

rendering each model face according to the selected texture maps, thereby completing the rendering of the object to be rendered.
Optionally, the material texture map library is established in the following manner:

using a display as an area light source, simulating illumination patterns of different shapes based on the Fourier transform, and obtaining, with a single camera, sample images of a preset material under different illumination conditions;

according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, calculating, by a nonlinear optimization method, the texture map parameters of the material for the different rendering requirements;

according to the texture map parameters, generating the texture maps corresponding to the different rendering requirements of the material.
Optionally, calculating, according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, the texture map parameters of the material for the different rendering requirements by a nonlinear optimization method comprises the following steps:

according to the pixel values of the pixels, performing clustering on all pixels in each sample image using a preset clustering algorithm, to obtain the clustered classes;

determining the first pixel in each class according to a preset pixel traversal rule;

according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, calculating the texture map parameters of the first pixel in each class by a nonlinear optimization method;

according to the brightness ratio between each other pixel in a class and the first pixel of that class, calculating the weight coefficients of the other pixels, and calculating the texture map parameters of the other pixels from the texture map parameters of the first pixel in that class.
Optionally, using a display as an area light source, simulating illumination patterns of different shapes based on the Fourier transform, and obtaining, with a single camera, sample images of the preset material under different illumination conditions comprises:

using a display as an area light source, simulating illumination patterns of different shapes based on the Fourier transform, and acquiring, with a single camera, sample images of the material under different illumination intensities of each preset light source color.
Optionally, the texture map parameters include a diffuse reflection coefficient, a specular reflection coefficient, a normal vector, and a shininess (highlight) coefficient;

calculating, according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, the texture map parameters of the first pixel in each class by a nonlinear optimization method comprises:

according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, calculating the texture map parameters of the first pixel in each class by minimizing, over all sample images, the squared difference between the observed and predicted pixel values,

argmin over (ρ_d(x), ρ_s(x), N(x), g(x)) of Σ (I(x) − V(x))²,

where V(x) = ρ_d(x)·Dot(N(x), L) + ρ_s(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L); I(x) is the pixel value of the pixel x at the same coordinates as the first pixel in each sample image; ρ_d(x) is the diffuse reflection coefficient of pixel x, ρ_s(x) is its specular reflection coefficient, N(x) is its normal vector, and g(x) is its shininess coefficient; L is the light source direction at pixel x, and H is the middle (half) vector between the light source direction and the image-capture-device direction at pixel x.
Optionally, after generating, according to the texture map parameters, the texture maps corresponding to the different rendering requirements of the material, the method further comprises:

correcting the calculated texture map parameters of the material using the pre-generated texture map parameters of a standard material.
Optionally, the rendering requirement of each model face includes specular reflection information, diffuse reflection information, normal vector information, and highlight information;

the material texture map library stores, for each material, a corresponding specular reflection texture map, diffuse reflection texture map, normal vector texture map, and highlight texture map.
To achieve the above purpose, an embodiment of the present invention discloses a virtual object rendering device, the device comprising:

an information acquisition module, configured to obtain the material of an object to be rendered and the rendering requirement of each model face;

a texture map selection module, configured to select, according to the material of the object to be rendered, the texture maps corresponding to the rendering requirement of each model face from a pre-established material texture map library, wherein the library stores the texture maps corresponding to the different rendering requirements of each material;

a rendering module, configured to render each model face according to the selected texture maps, thereby completing the rendering of the object to be rendered.
Optionally, the device further includes a texture map library establishment module, which comprises an image acquisition submodule, a texture map parameter calculation module, and a texture map generation submodule, wherein:

the image acquisition submodule is configured to use a display as an area light source, simulate illumination patterns of different shapes based on the Fourier transform, and obtain, with a single camera, sample images of a preset material under different illumination conditions;

the texture map parameter calculation module is configured to calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, the texture map parameters of the material for the different rendering requirements by a nonlinear optimization method;

the texture map generation submodule is configured to generate, according to the texture map parameters, the texture maps corresponding to the different rendering requirements of the material.
Optionally, the texture map parameter calculation module comprises a clustering unit, a pixel determination unit, a first map parameter calculation unit, and a second map parameter calculation unit, wherein:

the clustering unit is configured to perform clustering, according to the pixel values of the pixels and using a preset clustering algorithm, on all pixels in each sample image, to obtain the clustered classes;

the pixel determination unit is configured to determine the first pixel in each class according to a preset pixel traversal rule;

the first map parameter calculation unit is configured to calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, the texture map parameters of the first pixel in each class by a nonlinear optimization method;

the second map parameter calculation unit is configured to calculate, according to the brightness ratio between each other pixel in a class and the first pixel of that class, the weight coefficients of the other pixels, and to calculate the texture map parameters of the other pixels from the texture map parameters of the first pixel in that class.
Optionally, the image acquisition submodule is specifically configured to:

use a display as an area light source, simulate illumination patterns of different shapes based on the Fourier transform, and acquire, with a single camera, sample images of the material under different illumination intensities of each preset light source color.
Optionally, the texture map parameters include a diffuse reflection coefficient, a specular reflection coefficient, a normal vector, and a shininess coefficient;

the first map parameter calculation unit is specifically configured to:

calculate, according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, the texture map parameters of the first pixel in each class by minimizing, over all sample images, the squared difference between the observed and predicted pixel values,

argmin over (ρ_d(x), ρ_s(x), N(x), g(x)) of Σ (I(x) − V(x))²,

where V(x) = ρ_d(x)·Dot(N(x), L) + ρ_s(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L); I(x) is the pixel value of the pixel x at the same coordinates as the first pixel in each sample image; ρ_d(x) is the diffuse reflection coefficient of pixel x, ρ_s(x) is its specular reflection coefficient, N(x) is its normal vector, and g(x) is its shininess coefficient; L is the light source direction at pixel x, and H is the middle (half) vector between the light source direction and the image-capture-device direction at pixel x.
Optionally, the device further includes a map parameter correction module, configured to:

after the texture map generation submodule generates, according to the texture map parameters, the texture maps corresponding to the different rendering requirements of the material, correct the calculated texture map parameters of the material using the pre-generated texture map parameters of a standard material.
Optionally, the rendering requirement of each model face includes specular reflection information, diffuse reflection information, normal vector information, and highlight information;

the material texture map library stores, for each material, a corresponding specular reflection texture map, diffuse reflection texture map, normal vector texture map, and highlight texture map.
The embodiments of the present invention provide a virtual object rendering method and device. When a virtual object is rendered, the material of the object to be rendered and the rendering requirement of each model face are first obtained; then, according to the material of the object to be rendered, the texture maps corresponding to the rendering requirement of each model face are selected from a pre-established material texture map library; finally, each model face is rendered according to the selected texture maps, thereby completing the rendering of the object to be rendered. When virtual objects are rendered with the method provided by the embodiments of the present invention, texture maps obtained in advance from real materials are used, without relying on hand-drawing by artists, which improves the rendering effect of virtual objects and the realism of the virtual scene.
Brief description of the drawings

To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

Fig. 1 is a schematic flowchart of a virtual object rendering method provided by an embodiment of the present invention;

Fig. 2 is a schematic structural diagram of a virtual object rendering device provided by an embodiment of the present invention.
Specific embodiments

The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, which is a schematic flowchart of a virtual object rendering method provided by an embodiment of the present invention, the method may include the following steps:

Step S101: obtain the material of the object to be rendered and the rendering requirement of each model face.
A material can be understood as the combination of substance and texture that gives a surface its specific visual attributes; in short, it is what a surface appears to be made of. These visual attributes include the color, texture, smoothness, transparency, reflectivity, refractive index, and luminosity of the surface.

Different materials usually have different visual attributes; therefore, the rendering requirements when rendering model faces of different materials also tend to differ.
The "rendering requirement" described herein is information indicating how the object to be rendered should be rendered.

It should be noted that those skilled in the art can reasonably set the rendering requirement of the object to be rendered according to the specific conditions of the practical application.

In one implementation, the rendering requirement of each model face may specifically include specular reflection information, diffuse reflection information, normal vector information, and highlight information. For example, for a mirror rendering effect the rendering requirement may include specular reflection information and a normal vector; for a diffuse rendering effect it may include diffuse reflection information and a normal vector. It should be noted that only a few specific rendering requirements are listed here; of course there may be others, which are not enumerated one by one.

Of course, in practical applications the rendering requirement of each model face may also include information other than the four kinds listed above; the embodiment of the present invention does not need to limit the specific content of the rendering requirement.
Step S102: according to the material of the object to be rendered, select the texture maps corresponding to the rendering requirement of each model face from the pre-established material texture map library.

Here, the library stores the texture maps corresponding to the different rendering requirements of each material.

It should be noted that the library stores texture maps of various materials; each material may in turn have multiple texture maps, and each texture map corresponds one-to-one to a specific rendering requirement of the material. For example, if the rendering requirement indicating how material A is to be rendered includes specular reflection information and a normal vector, then the library should contain, for material A, at least a specular reflection texture map and a normal vector texture map.

In one implementation, the library stores, for each material, a corresponding specular reflection texture map, diffuse reflection texture map, normal vector texture map, and highlight texture map. Of course, this is merely an example; the texture maps stored for each material may also include maps other than the four kinds cited. However, for each material the library needs to contain at least the texture maps corresponding to each specific rendering requirement mentioned in step S101.
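The selection step above can be sketched as a simple keyed store. All names here (`TextureLibrary`, the material and requirement strings) are illustrative assumptions, not identifiers from the patent.

```python
class TextureLibrary:
    """Stores, per material, one texture map per rendering requirement."""

    def __init__(self):
        # {material: {requirement: texture_map}}
        self._maps = {}

    def add(self, material, requirement, texture_map):
        self._maps.setdefault(material, {})[requirement] = texture_map

    def select(self, material, requirements):
        """Return the map matching each rendering requirement of a model face."""
        per_material = self._maps[material]
        return {req: per_material[req] for req in requirements}

library = TextureLibrary()
library.add("metal", "specular", "metal_specular.png")
library.add("metal", "normal", "metal_normal.png")

# Step S102 for a model face requiring specular and normal-vector maps:
selected = library.select("metal", ["specular", "normal"])
```

Once such a library is established, rendering a face reduces to a lookup by material and requirement, which is the workload saving the patent claims.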
In a specific embodiment of the present invention, the material texture map library can be established in the following manner:

(11) Using a display as an area light source, simulate illumination patterns of different shapes based on the Fourier transform, and obtain, with a single camera, sample images of a preset material under different illumination conditions.

Specifically, obtaining the sample images of the preset material under different illumination conditions can be carried out as follows: acquire sample images of the preset material under different illumination intensities of each preset light source color, where the shape of the light source is variable.

The "different illumination intensities of preset light source colors" described herein means that, during the sample image acquisition process, the color of the light source may differ, for example a white light source, a blue light source, a red light source, and so on; in addition, even for a light source of the same color, such as a white light source, the illumination intensity may also differ, for example strong white light, dim white light, and so on.

It should be noted that the reason for acquiring sample images under different illumination conditions is to increase the diversity of the samples of the preset material, so as to guarantee that accurate texture maps can be obtained in the subsequent processing.

In one implementation, the preset light source may be an area light source; in practical applications, a liquid crystal display can be used to simulate light sources under different illumination conditions, which can substantially reduce the cost of real light sources and is relatively convenient to apply. Of course, this is merely an example; the embodiment of the present invention does not need to limit the color and illumination intensity of the preset light source, and those skilled in the art can set them reasonably according to the specific conditions of the practical application.
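The patent does not specify the exact illumination patterns; sinusoidal (Fourier-basis) patterns shown on the display are one natural reading of "simulated based on the Fourier transform". The sketch below, an assumption for illustration, generates one such pattern as a display intensity image in [0, 1].

```python
import numpy as np

def fourier_pattern(height, width, fx, fy, phase=0.0):
    """One sinusoidal (Fourier-basis) illumination pattern for the display,
    with spatial frequencies (fx, fy) in cycles per image width/height."""
    y, x = np.mgrid[0:height, 0:width]
    wave = np.cos(2 * np.pi * (fx * x / width + fy * y / height) + phase)
    return 0.5 * (wave + 1.0)  # map [-1, 1] -> [0, 1] display intensity

# A horizontal pattern with 3 cycles across a 640x480 display:
pattern = fourier_pattern(480, 640, fx=3, fy=0)
```

Varying (fx, fy) and the phase yields illumination patterns of different shapes; each pattern shown on the display corresponds to one illumination condition under which a sample image is captured.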
(12) According to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, calculate the texture map parameters of the material for the different rendering requirements using a nonlinear optimization method.

Texture map parameters are the specific values of the texture maps used to render the object to be rendered; the texture map parameters may include multiple specific parameters, and the specific parameters of each texture map correspond one-to-one to the specific items in each rendering requirement. For example, for the specular reflection information in a rendering requirement, there is a corresponding specular reflection map parameter among the texture map parameters. More precisely, the specific contents of the rendering requirement, the texture maps, and the texture map parameters correspond one-to-one pairwise; for example, specular reflection information (belonging to the rendering requirement), the specular reflection texture map (belonging to the texture maps), and the specular reflection map parameter (belonging to the texture map parameters) are in one-to-one correspondence with one another.
Specifically, the texture map parameters of the material for each rendering requirement are obtained through the following steps:

(12-1) According to the pixel values of the pixels, perform clustering on all pixels in each sample image using a preset clustering algorithm, to obtain the clustered classes.

The preset clustering algorithm described herein may be any clustering algorithm commonly used in the prior art, for example the K-means clustering algorithm; prior-art clustering algorithms are not described further here.

In addition, the number of classes after clustering can be set reasonably by those skilled in the art according to the specific conditions of the practical application; the present invention does not need to limit this.
(12-2) Determine the first pixel in each class according to a preset pixel traversal rule.

In one implementation, the sample images can be traversed in acquisition order, scanning each sample image row by row; when a pixel belonging to the current clustered class is reached, that pixel is determined to be the first pixel of the class, and the scan then proceeds to the next class.

For example, suppose three sample images I1, I2 and I3 are acquired in that order, and the clustered classes are C1 and C2. First, C1 is taken as the current class and the scan starts from I1; when the scan reaches the pixel at coordinates (2, 3) in I1, a pixel P1 belonging to class C1 has been found, so P1 is determined to be the first pixel of class C1. Then C2 is taken as the current class and scanned in the same way; when the scan reaches the pixel at coordinates (5, 6) in I2, a pixel P2 belonging to class C2 has been found, so P2 is determined to be the first pixel of class C2, and the traversal ends.

In another implementation, the traversal can proceed class by class, following the clustering order of the pixels within each class; when a pixel belonging to the current class is reached, that pixel is determined to be the first pixel of the class, and the scan then proceeds to the next class.

For example, again suppose the clustered classes are C1 and C2, where C1 contains the four pixels P11, P12, P13 and P14, and C2 contains the four pixels P21, P22, P23 and P24. First, C1 is taken as the current class and the scan starts within C1; when the scan reaches pixel P12, a pixel belonging to C1 has been found, so P12 is determined to be the first pixel of class C1. Then C2 is taken as the current class and scanned in the same way; when the scan reaches pixel P23, P23 is determined to be the first pixel of class C2, and the traversal ends.

It should be noted that the two implementations listed above are merely examples; the embodiment of the present invention does not need to limit the specific pixel traversal rule, and any possible implementation can be applied to the present invention.
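The first (row-by-row) traversal rule can be sketched in a few lines; the function name and the toy label image are illustrative.

```python
def first_pixels_per_class(label_image):
    """Row-by-row scan of a cluster-label image; records the first pixel
    coordinate seen for each cluster label."""
    first = {}
    for row, line in enumerate(label_image):
        for col, label in enumerate(line):
            if label not in first:
                first[label] = (row, col)
    return first

# A 2x3 label image with three clusters (0, 1, 2):
label_image = [
    [0, 0, 1],
    [1, 2, 0],
]
firsts = first_pixels_per_class(label_image)
```

A single pass suffices because every class's first pixel is fixed the moment it is encountered.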
(12-3) According to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, calculate the texture map parameters of the first pixel in each class using a nonlinear optimization method.

In one implementation, the texture map parameters may include a diffuse reflection coefficient, a specular reflection coefficient, a normal vector, and a shininess coefficient.

Calculating the texture map parameters of the first pixel in each class according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position using a nonlinear optimization method may include:

according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, calculating the texture map parameters of the first pixel in each class by minimizing, over all sample images, the squared difference between the observed and predicted pixel values,

argmin over (ρ_d(x), ρ_s(x), N(x), g(x)) of Σ (I(x) − V(x))²,

where V(x) = ρ_d(x)·Dot(N(x), L) + ρ_s(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L); I(x) is the pixel value of the pixel x at the same coordinates as the first pixel in each sample image; ρ_d(x) is the diffuse reflection coefficient of pixel x, ρ_s(x) is its specular reflection coefficient, N(x) is its normal vector, and g(x) is its shininess coefficient; L is the light source direction at pixel x, and H is the middle (half) vector between the light source direction and the image-capture-device direction at pixel x.
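The model V(x) above can be evaluated directly; the patent does not name a specific solver, so the residual form below (suitable for a generic least-squares optimizer such as `scipy.optimize.least_squares`) and the fixed normal vector are assumptions for illustration.

```python
import numpy as np

def blinn_phong_value(rho_d, rho_s, n, g, light_dir, half_dir):
    """Predicted pixel value V(x) from the formula above:
    V = rho_d * dot(N, L) + rho_s * dot(N, H)**g * dot(N, L).
    Direction arguments are unit vectors."""
    ndl = float(np.dot(n, light_dir))
    ndh = float(np.dot(n, half_dir))
    return rho_d * ndl + rho_s * (ndh ** g) * ndl

def residuals(params, observations):
    """Residuals I(x) - V(x) over all sample images, one per observation
    (pixel value, light direction, half vector), for a least-squares solver.
    The normal is held fixed here only to keep the sketch short."""
    rho_d, rho_s, g = params
    n = np.array([0.0, 0.0, 1.0])  # assumed fixed normal for this sketch
    return [i - blinn_phong_value(rho_d, rho_s, n, g, l, h)
            for i, l, h in observations]

# With N = L = H = +z, V reduces to rho_d + rho_s:
z = np.array([0.0, 0.0, 1.0])
v = blinn_phong_value(0.6, 0.3, z, 10.0, z, z)
```

Each sample image contributes one observation per pixel, so capturing under many illumination patterns overdetermines the four parameters and makes the nonlinear fit well posed.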
In one implementation, the light source direction L of a pixel and the middle vector H between the light source direction and the image-capture-device direction can be obtained in the following way:

perform image calibration on each acquired sample image according to a preset image calibration rule;

calculate the light source direction L and the middle vector H from the calibration data.

It should be noted that the light source direction L of a pixel and the middle vector H are related to the positional relationship of the hardware during sample image acquisition; in practical applications, besides the image-calibration approach mentioned above, other ways of obtaining L and H can also be used, such as manual measurement with a measuring instrument. Of course, the embodiment of the present invention does not need to limit the way in which L and H are obtained, and any possible implementation can be applied to the present invention.
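Once the light direction and the camera (view) direction are known from calibration, the middle vector H is simply their normalized sum, the standard half vector; the sketch below assumes both inputs are already unit vectors.

```python
import numpy as np

def half_vector(light_dir, view_dir):
    """Middle (half) vector between the light source direction and the
    image-capture-device (view) direction, both given as unit vectors."""
    h = np.asarray(light_dir, dtype=float) + np.asarray(view_dir, dtype=float)
    return h / np.linalg.norm(h)

# Light along +x, camera along +z: H bisects the two at 45 degrees.
h = half_vector([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```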
(12-4) According to the brightness ratio between each other pixel in a class and the first pixel of that class, calculate the weight coefficients of the other pixels, and calculate the texture map parameters of the other pixels from the texture map parameters of the first pixel in that class.

It can be seen that, during the process of obtaining the texture map parameters, the full optimization is actually performed only for the first pixel of each class determined after clustering; the texture map parameters of all other pixels in a class are derived from the texture map parameters of the first pixel of the same class via their weight coefficients. Since an image usually contains many thousands of pixels, the method provided by steps (12-1) to (12-4) greatly reduces the amount of computation needed to calculate the texture map parameters.
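Step (12-4) can be sketched as follows. The patent does not spell out exactly which parameters the weight coefficient is applied to; the multiplicative scaling of the reflectance coefficients below is an assumption for illustration, as are the parameter names.

```python
def scale_parameters(first_params, first_brightness, other_brightness):
    """Derive another pixel's texture map parameters from the first pixel
    of its cluster: the weight coefficient is the brightness ratio, and
    the first pixel's parameters are scaled by it (assumed rule)."""
    weight = other_brightness / first_brightness
    return {name: value * weight for name, value in first_params.items()}

# A pixel half as bright as its cluster's first pixel gets half-scaled
# reflection coefficients:
params = scale_parameters({"rho_d": 0.6, "rho_s": 0.3},
                          first_brightness=0.8, other_brightness=0.4)
```

One division and a handful of multiplications per pixel replace a full nonlinear fit per pixel, which is where the computational saving comes from.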
(13) According to the texture map parameters, generate the texture maps corresponding to the different rendering requirements of the material.

In one implementation, after step (13) generates the texture maps corresponding to the different rendering requirements of the material from the texture map parameters, the method may further include:

(14) Correct the calculated texture map parameters using the pre-generated texture map parameters of a standard material.

By correcting the calculated texture map parameters against the texture map parameters of a standard material, distorted calculated parameters can be corrected as far as possible, which further guarantees the accuracy and realism of the texture map parameters.
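The patent does not give the correction rule in step (14); one plausible reading, sketched here purely as an assumption, is to rescale each computed parameter by the ratio between the standard material's known reference value and its value as measured by the same pipeline.

```python
def correct_parameters(measured, standard_measured, standard_reference):
    """Correct computed texture map parameters using a standard material:
    each parameter is scaled by reference/measured for the standard.
    This ratio rule is an illustrative assumption, not the patent's."""
    return {name: value * (standard_reference[name] / standard_measured[name])
            for name, value in measured.items()}

# The pipeline under-measures the standard's diffuse coefficient (0.8 vs
# a known 1.0), so the target material's value is scaled up accordingly:
corrected = correct_parameters({"rho_d": 0.5},
                               standard_measured={"rho_d": 0.8},
                               standard_reference={"rho_d": 1.0})
```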
Step S103: rendering each model face according to selected each texturing, and then completes to treat wash with watercolours
Contaminate the rendering of object.
When carrying out virtual objects rendering using method provided in an embodiment of the present invention, obtained using previously according to true material
Texturing rendered, without using art designing draw by the way of, improve virtual objects rendering effect, improve void
The sense of reality of quasi- scene.
Further, in the prior art texture maps are obtained by manual drawing, so even identical maps for the same material must each be drawn by an artist. In the scheme provided by this embodiment of the present invention, the texture maps used to render the model faces are selected from a pre-established texture map library; once the library has been built, rendering only requires selecting suitable texture maps, with no need to draw maps again or rebuild the library. The workload of virtual object rendering is therefore greatly reduced, and its efficiency improved.
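The selection step can be pictured as a lookup keyed by material and by rendering demand. The structure below is a minimal sketch; the material name, key names, and file names are illustrative assumptions, not the patent's data format:

```python
# Hypothetical library layout: material -> rendering demand -> texture map file.
texture_library = {
    "brushed_metal": {
        "specular": "brushed_metal_specular.png",
        "diffuse": "brushed_metal_diffuse.png",
        "normal": "brushed_metal_normal.png",
        "highlight": "brushed_metal_highlight.png",
    },
}

def select_texture_maps(material, demands):
    """Pick, for one model face, the stored maps matching its rendering demands."""
    maps = texture_library[material]
    return {demand: maps[demand] for demand in demands}
```

A face whose rendering demand covers only diffuse and normal information would then be served by `select_texture_maps("brushed_metal", ["diffuse", "normal"])`.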
As shown in Fig. 2, a structural schematic diagram of a virtual object rendering device provided by an embodiment of the present invention, the device may include the following modules:
An information acquisition module 210, configured to obtain the material of the object to be rendered and the rendering demand of each model face.
A map selection module 220, configured to select, according to the material of the object to be rendered, the texture maps corresponding to the rendering demands of the model faces from a pre-established texture map library, wherein the library stores the texture maps corresponding to the different rendering demands of each material.
A rendering module 230, configured to render each model face according to the selected texture maps, thereby completing the rendering of the object to be rendered.
Specifically, the device may further include a texture map library establishing module, which may include: an image acquisition submodule, a map parameter calculation submodule, and a texture map generation submodule.
The image acquisition submodule is configured to use a display as an area light source, simulate illumination patterns of different shapes based on Fourier transforms, and capture, with a single camera, sample images of a preset material under the different illumination conditions.
The map parameter calculation submodule is configured to calculate, by a nonlinear optimization method, the material's texture map parameters for the different rendering demands according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position.
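One way to sketch this fit exploits the fact that the per-pixel reflectance model (a diffuse term plus a specular lobe) is linear in the diffuse and specular coefficients once the specular exponent g is fixed: solve a linear least-squares problem for each candidate g and keep the best. Treating the surface normal as known and using a grid search over g are simplifying assumptions, not the patent's actual optimization:

```python
import numpy as np

def fit_pixel(I, L_dirs, H_dirs, N, g_grid=np.linspace(1, 128, 64)):
    """Fit (rho_d, rho_s, g) for one pixel from observed intensities I
    under known light directions L_dirs and half vectors H_dirs.
    For each candidate exponent g the model is linear in (rho_d, rho_s),
    so an ordinary least-squares solve suffices per candidate."""
    ndl = np.clip(L_dirs @ N, 0.0, None)   # Dot(N, L) per observation
    ndh = np.clip(H_dirs @ N, 0.0, None)   # Dot(N, H) per observation
    best = None
    for g in g_grid:
        A = np.stack([ndl, (ndh ** g) * ndl], axis=1)
        coef, _, _, _ = np.linalg.lstsq(A, I, rcond=None)
        err = np.sum((A @ coef - I) ** 2)
        if best is None or err < best[0]:
            best = (err, coef[0], coef[1], g)
    _, rho_d, rho_s, g = best
    return rho_d, rho_s, g
```

A full solver would jointly optimize the normal and exponent with a nonlinear method; this sketch only shows the shape of the data-fitting problem.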
The texture map generation submodule is configured to generate, according to the texture map parameters, the texture maps corresponding to the different rendering demands of the material.
Specifically, the map parameter calculation submodule may include: a clustering unit, a pixel determination unit, a first map parameter calculation unit, and a second map parameter calculation unit.
The clustering unit is configured to cluster all pixels in each sample image by pixel value using a preset clustering algorithm, obtaining the post-clustering classes.
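The patent leaves the "preset clustering algorithm" unspecified; plain k-means over pixel values is one plausible choice, sketched below with a deterministic initialization for reproducibility (both choices are assumptions):

```python
import numpy as np

def cluster_pixels(pixels, k=4, iters=20):
    """Group pixels by pixel value with plain k-means (one possible
    'preset clustering algorithm'). pixels: (N, C) array of values;
    returns one class label per pixel."""
    pixels = np.asarray(pixels, dtype=float)
    # Deterministic init: spread the initial centers across the pixel list.
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels
```

Pixels of similar value then share a class, so expensive per-pixel fitting can be done once per class rather than once per pixel.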
The pixel determination unit is configured to determine the first pixel of each class according to a preset pixel traversal rule.
The first map parameter calculation unit is configured to calculate, by the nonlinear optimization method, the texture map parameters of the first pixel of each class according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position.
The second map parameter calculation unit is configured to calculate weight coefficients for the other pixels of each class from their brightness ratios to the class's first pixel, and to calculate the texture map parameters of those pixels from the texture map parameters of the class's first pixel.
Specifically, the image acquisition submodule is configured to: use the display as an area light source, simulate illumination patterns of different shapes based on Fourier transforms, and acquire with the single camera the sample images of the material under different illumination intensities of each preset light source colour.
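Illumination patterns built from Fourier basis functions can be pictured as sinusoidal gradients shown full-screen on the display. The parameterization below (frequency, phase, orientation) is an assumption, as the patent does not give the pattern formula:

```python
import numpy as np

def sinusoidal_pattern(width, height, freq=1.0, phase=0.0, horizontal=True):
    """One Fourier-basis illumination pattern for an area-light display:
    intensity varies sinusoidally across the screen, normalized to [0, 1]."""
    axis = np.linspace(0.0, 1.0, width if horizontal else height)
    wave = 0.5 + 0.5 * np.cos(2.0 * np.pi * freq * axis + phase)
    if horizontal:
        return np.tile(wave, (height, 1))
    return np.tile(wave[:, None], (1, width))
```

Sweeping frequency, phase, and orientation yields the family of differently shaped illumination conditions under which the sample images are captured.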
Specifically, the texture map parameters may include: a diffuse reflection coefficient, a specular reflection coefficient, a normal vector, and a specular exponent.
The first map parameter calculation unit is specifically configured to:
calculate the texture map parameters of the first pixel of each class according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, using the following reflectance model, in which the computed value V(x) is fitted to the observed pixel value I(x):
V(x) = ρd(x)·Dot(N(x), L) + ρs(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L),
where I(x) is the pixel value of the pixel x at the same coordinates as the first pixel in each sample image, ρd(x) is the diffuse reflection coefficient of pixel x, ρs(x) is its specular reflection coefficient, N(x) is its normal vector, g(x) is its specular exponent, L is the light source direction of pixel x, and H is the half vector between the light source direction and the image capture device direction of pixel x.
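A direct evaluation of this formula, for one pixel under one light, can be sketched as follows; the function name and the clamping of negative dot products are illustrative choices:

```python
import numpy as np

def shade(rho_d, rho_s, g, N, L, view):
    """Evaluate V(x) = rho_d*Dot(N,L) + rho_s*pow(Dot(N,H), g)*Dot(N,L),
    where H is the half vector between light direction L and view direction."""
    N = np.asarray(N, float); N /= np.linalg.norm(N)
    L = np.asarray(L, float); L /= np.linalg.norm(L)
    view = np.asarray(view, float); view /= np.linalg.norm(view)
    H = L + view
    H /= np.linalg.norm(H)
    ndl = max(float(N @ L), 0.0)   # Dot(N, L), clamped to front-facing
    ndh = max(float(N @ H), 0.0)   # Dot(N, H), clamped to front-facing
    return rho_d * ndl + rho_s * (ndh ** g) * ndl
```

With light and camera both on the surface normal, both dot products are 1 and the result reduces to ρd + ρs, the model's maximum response.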
Specifically, the device may further include a map parameter correction module, configured to:
after the texture map generation submodule generates the texture maps corresponding to the material's different rendering demands from the texture map parameters, correct the computed texture map parameters of the material using the standard texture map parameters of a pre-generated standard material.
Specifically, the rendering demand of each model face may include specular reflection information, diffuse reflection information, normal vector information, and highlight information; correspondingly, the texture map library stores, for each material, a specular reflection map, a diffuse reflection map, a normal map, and a highlight map.
When virtual object rendering is performed with the method provided by this embodiment of the present invention, texture maps obtained in advance from real materials are used for rendering, instead of maps drawn manually by artists; this improves the rendering quality of virtual objects and enhances the realism of the virtual scene.
Further, in the prior art texture maps are obtained by manual drawing, so even identical maps for the same material must each be drawn by an artist. In the scheme provided by this embodiment of the present invention, the texture maps used to render the model faces are selected from a pre-established texture map library; once the library has been built, rendering only requires selecting suitable texture maps, with no need to draw maps again or rebuild the library. The workload of virtual object rendering is therefore greatly reduced, and its efficiency improved.
As the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for the relevant parts, refer to the description of the method embodiments.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises it.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be carried out by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc.
The above are merely preferred embodiments of the present invention and are not intended to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention falls within the protection scope of the present invention.
Claims (12)
1. A virtual object rendering method, characterized in that the method comprises:
obtaining the material of an object to be rendered and the rendering demand of each model face;
selecting, according to the material of the object to be rendered, texture maps corresponding to the rendering demands of the model faces from a pre-established texture map library, wherein the library stores the texture maps corresponding to the different rendering demands of each material;
rendering each model face according to the selected texture maps, thereby completing the rendering of the object to be rendered;
wherein the texture map library is established as follows:
using a display as an area light source, simulating illumination patterns of different shapes based on Fourier transforms, and capturing, with a single camera, sample images of a preset material under different illumination conditions;
calculating, by a nonlinear optimization method, texture map parameters of the material for the different rendering demands according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position;
generating, according to the texture map parameters, the texture maps corresponding to the different rendering demands of the material.
2. The method according to claim 1, characterized in that calculating, by the nonlinear optimization method, the texture map parameters of the material for the different rendering demands according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position comprises:
clustering all pixels in each sample image by pixel value using a preset clustering algorithm, to obtain post-clustering classes;
determining the first pixel of each class according to a preset pixel traversal rule;
calculating, by the nonlinear optimization method, the texture map parameters of the first pixel of each class according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position;
calculating weight coefficients for the other pixels of each class from their brightness ratios to the class's first pixel, and calculating the texture map parameters of the other pixels from the texture map parameters of the class's first pixel.
3. The method according to claim 1 or 2, characterized in that using the display as an area light source, simulating illumination patterns of different shapes based on Fourier transforms, and capturing with the single camera the sample images of the preset material under different illumination conditions comprises:
using the display as an area light source, simulating illumination patterns of different shapes based on Fourier transforms, and acquiring with the single camera the sample images of the material under different illumination intensities of each preset light source colour.
4. The method according to claim 2, characterized in that the texture map parameters include: a diffuse reflection coefficient, a specular reflection coefficient, a normal vector, and a specular exponent;
and that calculating, by the nonlinear optimization method, the texture map parameters of the first pixel of each class according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position comprises:
calculating the texture map parameters of the first pixel of each class according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, using the following reflectance model, in which the computed value V(x) is fitted to the observed pixel value I(x):
V(x) = ρd(x)·Dot(N(x), L) + ρs(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L),
where I(x) is the pixel value of the pixel x at the same coordinates as the first pixel in each sample image, ρd(x) is the diffuse reflection coefficient of pixel x, ρs(x) is its specular reflection coefficient, N(x) is its normal vector, g(x) is its specular exponent, L is the light source direction of pixel x, and H is the half vector between the light source direction and the image capture device direction of pixel x.
5. The method according to claim 1 or 2, characterized in that, after generating the texture maps corresponding to the different rendering demands of the material according to the texture map parameters, the method further comprises:
correcting the computed texture map parameters of the material using the standard texture map parameters of a pre-generated standard material.
6. The method according to claim 1, characterized in that the rendering demand of each model face includes specular reflection information, diffuse reflection information, normal vector information, and highlight information; and the texture map library stores, for each material, a corresponding specular reflection map, diffuse reflection map, normal map, and highlight map.
7. A virtual object rendering device, characterized in that the device comprises:
an information acquisition module, configured to obtain the material of an object to be rendered and the rendering demand of each model face;
a map selection module, configured to select, according to the material of the object to be rendered, texture maps corresponding to the rendering demands of the model faces from a pre-established texture map library, wherein the library stores the texture maps corresponding to the different rendering demands of each material;
a rendering module, configured to render each model face according to the selected texture maps, thereby completing the rendering of the object to be rendered; and
a texture map library establishing module, configured to establish the texture map library, comprising:
an image acquisition submodule, configured to use a display as an area light source, simulate illumination patterns of different shapes based on Fourier transforms, and capture, with a single camera, sample images of a preset material under different illumination conditions;
a map parameter calculation submodule, configured to calculate, by a nonlinear optimization method, texture map parameters of the material for the different rendering demands according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position; and
a texture map generation submodule, configured to generate, according to the texture map parameters, the texture maps corresponding to the different rendering demands of the material.
8. The device according to claim 7, characterized in that the map parameter calculation submodule comprises:
a clustering unit, configured to cluster all pixels in each sample image by pixel value using a preset clustering algorithm, to obtain post-clustering classes;
a pixel determination unit, configured to determine the first pixel of each class according to a preset pixel traversal rule;
a first map parameter calculation unit, configured to calculate, by the nonlinear optimization method, the texture map parameters of the first pixel of each class according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position; and
a second map parameter calculation unit, configured to calculate weight coefficients for the other pixels of each class from their brightness ratios to the class's first pixel, and to calculate the texture map parameters of the other pixels from the texture map parameters of the class's first pixel.
9. The device according to claim 7 or 8, characterized in that the image acquisition submodule is specifically configured to:
use the display as an area light source, simulate illumination patterns of different shapes based on Fourier transforms, and acquire with the single camera the sample images of the material under different illumination intensities of each preset light source colour.
10. The device according to claim 8, characterized in that the texture map parameters include: a diffuse reflection coefficient, a specular reflection coefficient, a normal vector, and a specular exponent;
and that the first map parameter calculation unit is specifically configured to:
calculate the texture map parameters of the first pixel of each class according to the image information of each sample image, the illumination pattern corresponding to each sample image, and the camera position, using the following reflectance model, in which the computed value V(x) is fitted to the observed pixel value I(x):
V(x) = ρd(x)·Dot(N(x), L) + ρs(x)·pow(Dot(N(x), H), g(x))·Dot(N(x), L),
where I(x) is the pixel value of the pixel x at the same coordinates as the first pixel in each sample image, ρd(x) is the diffuse reflection coefficient of pixel x, ρs(x) is its specular reflection coefficient, N(x) is its normal vector, g(x) is its specular exponent, L is the light source direction of pixel x, and H is the half vector between the light source direction and the image capture device direction of pixel x.
11. The device according to claim 7 or 8, characterized by further comprising a map parameter correction module, configured to:
after the texture map generation submodule generates the texture maps corresponding to the material's different rendering demands from the texture map parameters, correct the computed texture map parameters of the material using the standard texture map parameters of a pre-generated standard material.
12. The device according to claim 7, characterized in that the rendering demand of each model face includes specular reflection information, diffuse reflection information, normal vector information, and highlight information; and the texture map library stores, for each material, a corresponding specular reflection map, diffuse reflection map, normal map, and highlight map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610349066.0A CN106056658B (en) | 2016-05-23 | 2016-05-23 | A kind of virtual objects rendering method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106056658A CN106056658A (en) | 2016-10-26 |
CN106056658B true CN106056658B (en) | 2019-01-25 |
Family
ID=57175260
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610349066.0A Active CN106056658B (en) | 2016-05-23 | 2016-05-23 | A kind of virtual objects rendering method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106056658B (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10288734B2 (en) * | 2016-11-18 | 2019-05-14 | Robert Bosch Start-Up Platform North America, LLC, Series 1 | Sensing system and method |
CN106600712B (en) * | 2016-12-20 | 2019-05-31 | 南京大学 | A kind of texture replacement method based on two dimensional image |
CN106898040B (en) * | 2017-03-06 | 2020-08-04 | 网易(杭州)网络有限公司 | Virtual resource object rendering method and device |
CN108933954A (en) * | 2017-05-22 | 2018-12-04 | 中兴通讯股份有限公司 | Method of video image processing, set-top box and computer readable storage medium |
CN107103638B (en) * | 2017-05-27 | 2020-10-16 | 杭州万维镜像科技有限公司 | Rapid rendering method of virtual scene and model |
CN107909638B (en) * | 2017-11-15 | 2021-05-14 | 杭州易现先进科技有限公司 | Rendering method, medium, system and electronic device of virtual object |
CN109934903B (en) * | 2017-12-19 | 2020-10-16 | 北大方正集团有限公司 | Highlight information extraction method, system, computer equipment and storage medium |
CN109377543B (en) * | 2018-10-29 | 2022-04-15 | 广东明星创意动画有限公司 | Method for quickly establishing material connection |
CN109603155B (en) | 2018-11-29 | 2019-12-27 | 网易(杭州)网络有限公司 | Method and device for acquiring merged map, storage medium, processor and terminal |
CN110021071B (en) * | 2018-12-25 | 2024-03-12 | 创新先进技术有限公司 | Rendering method, device and equipment in augmented reality application |
CN110390713B (en) * | 2019-06-11 | 2023-05-12 | 中新软件(上海)有限公司 | Method and device for realizing mirror surface reciprocal effect in entity decoration and computer equipment |
CN111068314B (en) * | 2019-12-06 | 2023-09-05 | 珠海金山数字网络科技有限公司 | NGUI resource rendering processing method and device based on Unity |
CN111127623B (en) * | 2019-12-25 | 2023-08-29 | 上海米哈游天命科技有限公司 | Model rendering method and device, storage medium and terminal |
CN112132213A (en) * | 2020-09-23 | 2020-12-25 | 创新奇智(南京)科技有限公司 | Sample image processing method and device, electronic equipment and storage medium |
CN112619154A (en) * | 2020-12-28 | 2021-04-09 | 网易(杭州)网络有限公司 | Processing method and device of virtual model and electronic device |
CN113069762A (en) * | 2021-03-15 | 2021-07-06 | 广州三七互娱科技有限公司 | Eye image generation method and device of virtual character and electronic equipment |
CN112915536B (en) * | 2021-04-02 | 2024-03-22 | 网易(杭州)网络有限公司 | Virtual model rendering method and device |
CN113350786A (en) * | 2021-05-08 | 2021-09-07 | 广州三七极创网络科技有限公司 | Skin rendering method and device for virtual character and electronic equipment |
CN113648652B (en) * | 2021-08-20 | 2023-11-14 | 腾讯科技(深圳)有限公司 | Object rendering method and device, storage medium and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001184518A (en) * | 1999-12-27 | 2001-07-06 | Nec Corp | Device and method for plotting three-dimensional image |
CN102346921A (en) * | 2011-09-19 | 2012-02-08 | 广州市凡拓数码科技有限公司 | Renderer-baking light mapping method of three-dimensional software |
CN102722904A (en) * | 2012-05-30 | 2012-10-10 | 北京尔宜居科技有限责任公司 | Local rendering method |
CN103310488A (en) * | 2013-03-20 | 2013-09-18 | 常州依丽雅斯纺织品有限公司 | mental ray rendering-based virtual reality rendering method |
CN104866861A (en) * | 2015-04-08 | 2015-08-26 | 浙江大学 | Material appearance acquisition method based on Kinect equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007066064A (en) * | 2005-08-31 | 2007-03-15 | Sega Corp | Image generating device and image generating program |
2016-05-23: application CN201610349066.0A filed in China; granted as CN106056658B (active)
Non-Patent Citations (4)
Title |
---|
Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography; Paul Debevec; ACM SIGGRAPH 2008 classes; 2008-08-15 (No. 32); pp. 1-10 |
Implementation of reflective materials in 3D MAX virtual reality; Xue Yuanyuan; 《科学之友》; 2010-02; pp. 98-99 |
Design and implementation of a Web-based virtual simulation campus; Zhang Ying; 《中国优秀硕士学位论文全文数据库_信息科技辑》; 2014-08-15 (No. 8); pp. I138-1486 |
Texture baking technology in virtual exhibition hall construction; Wang Yixuan; 《信息与电脑》; 2015-12-23 (No. 24); pp. 13-14, Section 2 |
Also Published As
Publication number | Publication date |
---|---|
CN106056658A (en) | 2016-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106056658B (en) | A kind of virtual objects rendering method and device | |
Vangorp et al. | The influence of shape on the perception of material reflectance | |
US7953275B1 (en) | Image shader for digital image modification | |
US8019182B1 (en) | Digital image modification using pyramid vignettes | |
JP7235875B2 (en) | Point cloud colorization system with real-time 3D visualization | |
CN108986195A (en) | A kind of single-lens mixed reality implementation method of combining environmental mapping and global illumination rendering | |
Zhou et al. | Accurate depth of field simulation in real time | |
CN102136156B (en) | System and method for mesoscopic geometry modulation | |
CN107644453A (en) | A kind of rendering intent and system based on physical colored | |
CN109993838A (en) | Method and system is sent out in virtual examination based on WebGL and human face rebuilding | |
Ritschel et al. | Interactive reflection editing | |
Bruder et al. | Voronoi-Based Foveated Volume Rendering. | |
US6753875B2 (en) | System and method for rendering a texture map utilizing an illumination modulation value | |
US20200312034A1 (en) | Method of construction of a computer-generated image and a virtual environment | |
CN110298910A (en) | A kind of illumination calculation method, apparatus calculates equipment and storage medium | |
Liu et al. | Research on Optimization Algorithms for Visual Communication Systems Based on VR Technology | |
Bratuž et al. | Defining Optimal Conditions of Colors in 3D Space in Dependence on Gamma Values, Illumination, and Background Color. | |
CN117274466B (en) | Realistic water surface rendering method and system integrating multisource tile map service | |
US20180005432A1 (en) | Shading Using Multiple Texture Maps | |
CN117058301B (en) | Knitted fabric real-time rendering method based on delayed coloring | |
Atié et al. | Towards a Calibrated 360° Stereoscopic HDR Image Dataset for Architectural Lighting Studies | |
Kips et al. | Hair Color Digitization through Imaging and Deep Inverse Graphics | |
Ha et al. | Lighting-by-example with wavelets | |
Law et al. | Projecting Restorations in Real-Time for Real-World Objects | |
Tada et al. | BTF Image Recovery based on U-Net and Texture Interpolation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address | ||
Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329 Patentee after: Zhuhai Jinshan Digital Network Technology Co.,Ltd. Address before: No. 8 Lianshan Lane, Jingshan Road, Jida, Zhuhai, Guangdong, 519000 Patentee before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd. |