WO2023066121A1 - Rendering of three-dimensional model

Rendering of three-dimensional model

Info

Publication number: WO2023066121A1
Application number: PCT/CN2022/125043
Authority: WO (WIPO PCT)
Prior art keywords: model, rendering, color, shadow, fragment
Other languages: French (fr), Chinese (zh)
Inventors: 陶然, 杨瑞健, 赵代平
Original assignee: 上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Publication of WO2023066121A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • G06T 15/60 Shadow generation
    • G06T 15/80 Shading

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular to a three-dimensional model rendering method and device, a storage medium, and computer equipment.
  • The rendering of three-dimensional (3D) models can generally be divided into photorealistic rendering (PR) and non-photorealistic rendering (NPR). The main purpose of such rendering is to simulate a specific art style: photorealistic rendering aims to obtain a realistic rendering effect, while the purposes of non-photorealistic rendering are more diverse and mainly lie in simulating an artistic drawing style to present a hand-painted effect.
  • However, the rendering methods in the related art need to first draw the entire 3D model completely onto a 2D image and then post-process the drawn image, which results in low rendering efficiency.
  • An embodiment of the present disclosure provides a method for rendering a 3D model. The method includes: determining the light-dark distribution of the 3D model under the viewing angle of a rendering camera, where the light-dark distribution characterizes the brightness value of each fragment on the 3D model; dividing the 3D model into a bright area and a shadow area based on the light-dark distribution; and rendering the bright area based on the color of the 3D model and rendering the shadow area based on a predetermined shadow color.
  • An embodiment of the present disclosure provides a method for rendering a 3D model. The method includes: in a first rendering pass, rendering a bright area of the 3D model based on the color of the 3D model and rendering a shadow area of the 3D model based on a predetermined shadow color; and in a second rendering pass, enlarging the 3D model to obtain an enlarged model and rendering the enlarged model based on a predetermined stroke color, where the middle area of the enlarged model is occluded by the 3D model.
  • An embodiment of the present disclosure provides a method for rendering a 3D model. The method includes: enlarging the 3D model to obtain an enlarged model, where the middle area of the enlarged model is occluded by the 3D model; and rendering the enlarged model based on a predetermined stroke color.
  • the embodiments of the present disclosure directly use the stroke color to render the enlarged model, which can effectively improve the rendering efficiency compared with the traditional rendering method in which the entire 3D model is first rendered and then edge segmentation is performed through post-processing.
  • An embodiment of the present disclosure provides an apparatus for rendering a 3D model. The apparatus includes: a determination module configured to determine the light-dark distribution of the 3D model under the viewing angle of a rendering camera, where the light-dark distribution characterizes the brightness value of each fragment on the 3D model; a division module configured to divide the 3D model into a bright area and a shadow area based on the light-dark distribution; and a rendering module configured to render the bright area based on the color of the 3D model and render the shadow area based on a predetermined shadow color.
  • An embodiment of the present disclosure provides an apparatus for rendering a 3D model. The apparatus includes: a first rendering module configured to, in a first rendering pass, render a bright area of the 3D model based on the color of the 3D model and render a shadow area of the 3D model based on a predetermined shadow color; and a second rendering module configured to, in a second rendering pass, enlarge the 3D model to obtain an enlarged model and render the enlarged model based on a predetermined stroke color, where the middle area of the enlarged model is occluded by the 3D model.
  • An embodiment of the present disclosure provides a three-dimensional model rendering apparatus. The apparatus includes: an enlargement module configured to enlarge the three-dimensional model to obtain an enlarged model, where the middle area of the enlarged model is occluded by the three-dimensional model; and a rendering module configured to render the enlarged model based on a predetermined stroke color.
  • the embodiments of the present disclosure directly use the stroke color to render the enlarged model. Compared with the traditional rendering method in which the entire 3D model is rendered first, and then edge segmentation is performed through post-processing, the embodiments of the present disclosure can effectively improve rendering efficiency.
  • an embodiment of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the method described in any one of the first aspect to the third aspect is implemented.
  • An embodiment of the present disclosure provides a computer device including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the method described in any one of the first aspect to the third aspect.
  • an embodiment of the present disclosure provides a computer program product, the product includes a computer program, and when the computer program is executed by a processor, the method described in any one of the first aspect to the third aspect is implemented.
  • FIG. 1A and FIG. 1B are schematic diagrams of different rendering styles of some embodiments, respectively.
  • Figure 2 is a schematic diagram of a rendering method in the related art.
  • Fig. 3 is a flow chart of a rendering method of a 3D model according to an embodiment of the present disclosure.
  • FIG. 4A and FIG. 4B are schematic diagrams of different rendering camera perspectives, respectively.
  • FIG. 5 is a schematic diagram of an original three-dimensional model and an enlarged model of an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a stroke effect according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of rendering methods of culling front and culling back surfaces according to an embodiment of the present disclosure.
  • Fig. 8 is a schematic diagram of a moving manner of an enlarged model.
  • FIG. 9A is a flowchart of a three-dimensional model rendering method according to another embodiment of the present disclosure.
  • FIG. 9B is an overall flowchart of an embodiment of the present disclosure.
  • Fig. 10 is a flowchart of a rendering method of a 3D model according to yet another embodiment of the present disclosure.
  • Fig. 11 is a block diagram of a three-dimensional model rendering device according to an embodiment of the present disclosure.
  • Fig. 12 is a block diagram of a three-dimensional model rendering device according to another embodiment of the present disclosure.
  • Fig. 13 is a block diagram of an apparatus for rendering a 3D model according to yet another embodiment of the present disclosure.
  • FIG. 14 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
  • Although the terms first, second, third, etc. may be used in the present disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • the rendering of 3D models can generally be divided into realistic rendering and non-realistic rendering.
  • the purpose of the realistic rendering is to obtain a realistic rendering effect.
  • the rendering effect of continuously changing colors can be obtained through the realistic rendering.
  • Non-realistic rendering can simulate an artistic drawing style, and the two-dimensional rendering style is one of the effects that has been widely used in recent years.
  • The two-dimensional rendering style can simulate the effect of blocked (banded) color and brightness seen in cel animation.
  • In the related art, this two-dimensional-style rendering of the 3D model is achieved by post-processing. Figure 2 shows such a rendering process.
  • model information of a 3D model illuminated by a light source may be acquired, and the model information may include position information, color information, etc. of each vertex in the 3D model.
  • the three-dimensional model can be completely drawn on the two-dimensional image based on the model information acquired in step 201.
  • color correction is performed on the drawn two-dimensional image.
  • Specifically, the color of each pixel in the two-dimensional image can be obtained, and multiple different colors whose color difference values are within a preset range are corrected to the same color, thereby realizing color partitioning of the two-dimensional image and achieving the abrupt color-change effect of the two-dimensional style. It can be seen that the above rendering process needs to draw the 3D model completely first and then read the color of each pixel in the 2D image one by one, so the rendering takes a long time and the rendering efficiency is low.
  • In view of this, the present disclosure provides a three-dimensional model rendering method. Referring to FIG. 3, the method includes steps 301 to 303.
  • Step 301: Determine the light-dark distribution of the 3D model under the viewing angle of the rendering camera, where the light-dark distribution is used to characterize the brightness value of each fragment on the 3D model.
  • Step 302: Divide the three-dimensional model into a bright area and a shadow area based on the light-dark distribution.
  • Step 303: Render the bright area based on the color of the 3D model, and render the shadow area based on a predetermined shadow color.
  • the three-dimensional model can be rendered in a two-dimensional rendering style through the methods of the embodiments of the present disclosure.
  • the rendering method of the embodiment of the present disclosure has higher rendering efficiency and is more suitable for real-time rendering scenarios.
  • The three-dimensional model may be a model corresponding to a target object in the scene to be rendered, and the scene to be rendered may include one or more target objects. The target object may include, but is not limited to, people, animals, tables, bags, houses, etc., and may also include local areas on objects, e.g., faces or roofs of houses.
  • the rendering manner of the embodiment of the present disclosure may be used to render the three-dimensional models corresponding to all target objects in the scene to be rendered.
  • It is also possible to render only the 3D models corresponding to a part of the target objects in the scene to be rendered using the rendering method of the embodiments of the present disclosure, while the 3D models corresponding to another part of the target objects are rendered using other rendering methods, or the 3D models corresponding to different target objects are rendered in different styles. For example, if the scene to be rendered includes target object A and target object B, the 3D model corresponding to target object A can be rendered in the two-dimensional rendering style and the 3D model corresponding to target object B can be rendered in a realistic rendering style, so as to realize a mix of the two-dimensional rendering style and other rendering styles in the same scene.
  • the rendering camera may be a virtual camera.
  • a rendering camera may be created at a preset position of the 3D model, and the 3D model is under the perspective of the rendering camera.
  • When the rendering camera is at different positions, the parts of the 3D model that can be photographed under the perspective of the rendering camera are often different, and thus the final rendering effects are also different.
  • Taking the 3D model of a person as an example, as shown in Figure 4A, when the perspective of the rendering camera faces the face, the part of the 3D model under the perspective of the rendering camera is the front of the person (including the side of the face), and the final rendered 2D image also includes the front of the person.
  • As shown in Figure 4B, when the perspective of the rendering camera faces the back of the person, the part of the 3D model under the perspective of the rendering camera is the back of the character (including the back of the head), and the final rendered 2D image also includes the back of the character.
  • the light and dark distribution of the 3D model is used to represent the brightness value of each fragment on the 3D model.
  • a fragment refers to a surface formed by connecting vertices.
  • a real number between 0 and 1 may be used to represent the brightness value of the fragment. The closer the brightness value of the fragment is to 1, the brighter the fragment is; the closer the brightness value is to 0, the darker the fragment is.
  • Those skilled in the art can understand that the way of expressing the brightness is not limited thereto, and other numerical ranges (for example, 0 to 100) can also be used to represent the brightness value.
  • Letters and symbols can also be used to indicate the brightness level; for example, the letters "A", "B", and "C" may indicate brightness levels from low to high in turn, where the higher the brightness level of a fragment, the brighter the fragment, and the lower the brightness level of a fragment, the darker the fragment.
  • It should be noted that this step may obtain only the light-dark distribution of the model area of the 3D model under the rendering camera perspective, that is, the brightness value of each fragment in that model area. Still taking the 3D model of a person as an example, when the perspective of the rendering camera faces the face, only the brightness values of the fragments on the front of the person may be obtained.
  • A first light-dark distribution of the 3D model under the viewing angle of the rendering camera may be determined based on the illumination direction, the normal map of the 3D model, and the position of the rendering camera; a second light-dark distribution of the 3D model under the rendering camera viewing angle may be determined based on a shadow map; and the light-dark distribution of the 3D model under the rendering camera viewing angle is then determined based on the first light-dark distribution and the second light-dark distribution.
  • the normal map is used to represent the normal of each vertex on the three-dimensional model, and can determine the unevenness of each vertex.
  • the shadow map is used to characterize the distance between each vertex of the 3D model and the light source, so as to determine whether each vertex is occluded. Using the normal map and the shadow map to determine the light and dark distribution together can obtain higher accuracy.
  • For each fragment, a first brightness value of the fragment may be determined based on the first light-dark distribution, and a second brightness value of the fragment may be determined based on the second light-dark distribution; the brightness value of the fragment is then determined according to the first brightness value and the second brightness value.
  • the product of the first brightness value and the second brightness value may be determined as the brightness value of the fragment.
  • the smaller one of the first brightness value and the second brightness value may also be determined as the brightness value of the fragment.
  • Alternatively, the first luminance value of the fragment may be determined first; if the first luminance value of the fragment is less than or equal to a preset luminance threshold, the first luminance value is directly determined as the luminance value of the fragment. If the first luminance value is greater than the luminance threshold, the second luminance value of the fragment is determined; if the second luminance value is less than or equal to the luminance threshold, the second luminance value is determined as the luminance value of the fragment, and if the second luminance value is greater than the luminance threshold, the first luminance value is determined as the luminance value of the fragment.
  • Alternatively, the first light-dark distribution or the second light-dark distribution may be directly determined as the light-dark distribution of the three-dimensional model under the viewing angle of the rendering camera, or at least one of the first light-dark distribution and the second light-dark distribution may be combined with a third light-dark distribution determined by other means to determine the light-dark distribution of the three-dimensional model under the viewing angle of the rendering camera.
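  • For illustration, the sketch below shows the three ways of combining the first and second brightness values described above (product, minimum, and threshold-based selection). The function names, the 0-to-1 brightness range, and the example threshold are assumptions for illustration only, not requirements of the disclosure.

```python
# Hypothetical sketch of combining the two per-fragment brightness values.
# Brightness is assumed to lie in [0, 1]; names and the threshold are illustrative.

def combine_by_product(b1: float, b2: float) -> float:
    """Brightness = product of the normal/light value and the shadow-map value."""
    return b1 * b2

def combine_by_minimum(b1: float, b2: float) -> float:
    """Brightness = the smaller of the two values."""
    return min(b1, b2)

def combine_by_threshold(b1: float, b2: float, threshold: float = 0.5) -> float:
    """If the first value is already at or below the threshold, use it directly;
    otherwise try the second value, and finally fall back to the first again."""
    if b1 <= threshold:
        return b1
    if b2 <= threshold:
        return b2
    return b1

print(combine_by_product(0.9, 0.4))    # 0.36
print(combine_by_minimum(0.9, 0.4))    # 0.4
print(combine_by_threshold(0.9, 0.4))  # 0.4
```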
  • the three-dimensional model may be divided into bright areas and shadow areas based on the light and dark distribution determined in step 301 .
  • the model area of the 3D model under the perspective of the rendering camera can be divided into a bright area and a shadow area.
  • the shadow area refers to the area on the 3D model that is not irradiated by the light source or can only be irradiated by a small amount of light, so that it is in the shadow, and the brightness value of the fragments in the shadow area is relatively small.
  • the bright area refers to the area outside the shadow area on the 3D model. These areas can be illuminated by a large amount of light, and the brightness value of the fragments in the bright area is relatively large.
  • an area whose brightness value satisfies a first preset brightness condition on the three-dimensional model may be determined as the bright area; an area whose brightness value satisfies a second preset brightness condition on the three-dimensional model may be determined as the shaded area.
  • the first preset brightness condition may be that the brightness value is greater than or equal to the lower brightness threshold
  • the second preset brightness condition may be that the brightness value is smaller than the upper brightness threshold.
  • the brightness lower threshold and the brightness upper threshold may be equal.
  • the brightness lower threshold may be greater than the brightness upper threshold.
  • the area whose luminance value is between the upper luminance threshold and the lower luminance threshold may be randomly determined as a bright area or a shadow area, or may be further determined as a bright area or a shadow area based on other conditions.
  • the first preset brightness condition may be that the brightness value is within a preset range of brightness values [L1, L2], and the second preset brightness condition may be that the brightness value is within the brightness value Out of the interval [L1, L2].
  • Alternatively, the first preset brightness condition may be that the brightness value is equal to any one of a preset group of brightness values, and the second preset brightness condition may be that the brightness value is not equal to any of the values in the preset group.
  • the first preset brightness condition and the second preset brightness condition can also be set as other conditions according to actual needs, so as to obtain different rendering effects.
  • The brightness threshold can be set as needed: to divide more model areas of the 3D model into shadow areas, a larger brightness threshold can be set; conversely, to divide more model areas into bright areas, a smaller brightness threshold can be set.
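  • As a minimal sketch of this division step, assuming a single brightness threshold and brightness values in [0, 1] (the data layout and the threshold value are illustrative assumptions):

```python
# Illustrative sketch of dividing fragments into a bright area and a shadow area
# with a single brightness threshold, the simplest of the conditions above.

def classify_fragments(brightness: dict, threshold: float = 0.5):
    bright_area, shadow_area = [], []
    for fragment_id, value in brightness.items():
        if value >= threshold:          # first preset brightness condition
            bright_area.append(fragment_id)
        else:                           # second preset brightness condition
            shadow_area.append(fragment_id)
    return bright_area, shadow_area

light_dark_distribution = {"f0": 0.92, "f1": 0.31, "f2": 0.55, "f3": 0.08}
bright, shadow = classify_fragments(light_dark_distribution)
print(bright)  # ['f0', 'f2']
print(shadow)  # ['f1', 'f3']
```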
  • the colors for rendering the bright area and the shadow area may be respectively determined in different ways.
  • the color used to render the bright area is the color corresponding to the bright area in the 3D model. For example, if the color of the fragments in the bright area in the 3D model is red, the fragments in the bright area are rendered as red.
  • the color used for rendering the shadow area is a shadow color, or is determined based on the shadow color and the color corresponding to the shadow area in the three-dimensional model.
  • the shadow color can be pre-specified by the user, or a default color can be used. It is also possible to randomly select one of multiple predefined candidate colors as the shadow color, or determine the shadow color based on other methods.
  • the color of the shadow area can be directly rendered as a shadow color, that is, the shadow color is used to replace the color corresponding to the shadow area in the 3D model. For example, if the color corresponding to the shaded area in the 3D model is red, and the shaded color is black, then the color of the shaded area can be directly rendered as black.
  • Alternatively, the color corresponding to the shadow area in the 3D model (referred to as the model color, which may be determined based on the diffuse reflection map that characterizes the reflection and surface color of the 3D model surface) is corrected by the shadow color, and the shadow area is rendered with the corrected color.
  • For example, if the model color is (R1, G1, B1) and the shadow color is (R2, G2, B2), the R channel value, G channel value, and B channel value of the model color can be summed with the corresponding channel values of the shadow color, so that the corrected color is (R1+R2, G1+G2, B1+B2).
  • the correction method is not limited to simple superposition of each channel component of the color.
  • It is also possible to render the shadow area with the model color and the shadow color on different layers and combine the two rendered layers to obtain the rendering result of the shadow area. For example, if the model color is red and the shadow color is black, red can be rendered on a first layer and black on a second layer, and the two layers are superimposed to obtain the rendering result of the shadow area. The transparency of the first layer and that of the second layer can also be adjusted separately to present different rendering effects.
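  • A minimal sketch of the channel-wise color correction described above is shown below; the example colors and the clamping to 8-bit channel values are illustrative assumptions.

```python
# Hypothetical sketch of correcting the model color with the shadow color by
# summing RGB channels; clamping to 255 keeps the result a valid 8-bit color.

def correct_color(model_color, shadow_color):
    r1, g1, b1 = model_color
    r2, g2, b2 = shadow_color
    # Per-channel sum, clamped to the 8-bit range (clamping is an assumption).
    return (min(r1 + r2, 255), min(g1 + g2, 255), min(b1 + b2, 255))

model_color = (200, 40, 40)   # red-ish model color of the shadow area
shadow_color = (30, 30, 60)   # predetermined shadow color
print(correct_color(model_color, shadow_color))  # (230, 70, 100)
```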
  • the color of the 3D model corresponding to the bright area and the color of the 3D model corresponding to the shadow area may be determined based on the diffuse reflection map corresponding to the 3D model.
  • When the brightness threshold is a single threshold, the original diffuse-reflection-map color of the 3D model is adjusted according to the predefined shadow color and then rendered, so as to achieve the two-tone coloring effect of the two-dimensional style. A multi-level shading effect can also be achieved by specifying multiple thresholds.
  • In some implementations, the second preset brightness condition includes that the brightness value is less than or equal to a first brightness threshold; the predetermined shadow color includes a plurality of shadow colors, and the plurality of shadow colors correspond to a plurality of different brightness subintervals, which are obtained by dividing a brightness interval, where the maximum brightness value in the plurality of different brightness subintervals is less than or equal to the first brightness threshold.
  • the luminance subinterval to which the luminance value of each fragment in the shadow area belongs may be determined; and each fragment in the shadow area is rendered according to the shadow colors corresponding to the luminance subintervals to which the luminance value of each fragment belongs.
  • This way of determining the shadow color is called the subinterval-based way. Assuming that a real number between 0 and 1 is used to represent the brightness value of a fragment and the first brightness threshold is 0.8, the brightness interval [0, 0.8] can be divided into two brightness subintervals [0, 0.3] and [0.3, 0.8], where the shadow color corresponding to the brightness subinterval [0, 0.3] is black and the shadow color corresponding to the brightness subinterval [0.3, 0.8] is gray. If the luminance value of a fragment A in the shadow area is within the subinterval [0, 0.3], the shadow color for rendering fragment A is black; if the luminance value of fragment A is within the subinterval [0.3, 0.8], the shadow color for rendering fragment A is gray.
  • The larger the values of the brightness subinterval to which the brightness value of a fragment belongs (that is, the larger the brightness value of the fragment), the lighter the shadow color used for that fragment; conversely, the smaller the values of the subinterval (that is, the smaller the brightness value of the fragment), the darker the shadow color. In this way, the shadow color becomes darker in discrete levels as the brightness value of the fragment decreases. Of course, this trend is not mandatory, and the shadow color corresponding to each subinterval can also be set in other ways to obtain other shadow rendering effects.
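  • The subinterval-based selection can be sketched as follows, using the example values from the text ([0, 0.3] rendered black, [0.3, 0.8] rendered gray); the boundaries and colors are only the example values, not requirements.

```python
# Sketch of subinterval-based shadow color selection with the example values above.

SUBINTERVALS = [
    ((0.0, 0.3), (0, 0, 0)),        # darker subinterval -> black
    ((0.3, 0.8), (128, 128, 128)),  # lighter subinterval -> gray
]

def shadow_color_for(brightness: float):
    for (low, high), color in SUBINTERVALS:
        if low <= brightness <= high:
            return color
    return None  # above the first brightness threshold: fragment is not in shadow

print(shadow_color_for(0.12))  # (0, 0, 0)       -> rendered black
print(shadow_color_for(0.45))  # (128, 128, 128) -> rendered gray
```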
  • Alternatively, the second preset brightness condition includes that the brightness value is less than or equal to a second brightness threshold; the predetermined shadow color includes a plurality of shadow colors, and the plurality of shadow colors respectively correspond to a plurality of reference brightness values in a lookup table, where each reference brightness value in the lookup table is less than or equal to the second brightness threshold.
  • The shadow color corresponding to the reference brightness value matching the brightness value of each fragment in the shadow area can be looked up in the lookup table, and each fragment in the shadow area is rendered according to the shadow color corresponding to its matching reference brightness value. This way of determining the shadow color is called the lookup-table-based way. If a reference luminance value identical to the luminance value of the fragment is found in the lookup table, that reference luminance value can be directly determined as the reference luminance value matching the luminance value of the fragment.
  • Otherwise, the reference luminance value in the lookup table that is closest to the luminance value of the fragment is determined as the reference luminance value matching the luminance value of the fragment.
  • the average value of multiple reference brightness values in the lookup table whose difference from the brightness value of the fragment is within a preset range may be determined as the reference brightness value matching the brightness value of the fragment.
  • The larger a reference brightness value stored in the lookup table, the lighter the shadow color corresponding to it; conversely, the smaller a reference brightness value, the darker the corresponding shadow color. In this way, the shadow color becomes darker in discrete levels as the brightness value of the fragment decreases.
  • the above-mentioned change trend is not necessary, and other methods can also be used to set the shadow color corresponding to each brightness value in the lookup table, so as to obtain other shadow rendering effects.
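  • A minimal sketch of the lookup-table-based selection, in which a fragment's brightness is matched to the closest reference brightness value, is shown below; the table contents are illustrative assumptions.

```python
# Sketch of lookup-table-based shadow color selection (illustrative table values).

LOOKUP_TABLE = {
    0.1: (0, 0, 0),        # smaller reference value -> darker shadow color
    0.4: (80, 80, 80),
    0.7: (160, 160, 160),  # larger reference value -> lighter shadow color
}

def shadow_color_from_lut(brightness: float):
    # Exact match if present, otherwise the closest reference brightness value.
    if brightness in LOOKUP_TABLE:
        return LOOKUP_TABLE[brightness]
    closest = min(LOOKUP_TABLE, key=lambda ref: abs(ref - brightness))
    return LOOKUP_TABLE[closest]

print(shadow_color_from_lut(0.4))  # (80, 80, 80), exact match
print(shadow_color_from_lut(0.6))  # (160, 160, 160), closest reference is 0.7
```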
  • the first brightness threshold and the second brightness threshold in the above embodiments may be the same or different.
  • the shadow color used for correcting the color of the shadow area may be determined based on any one of brightness subinterval-based manners or look-up table-based manners, or may be determined in combination of the two manners.
  • the shadow color determined based on the brightness sub-interval is called the first shadow color
  • the shadow color determined based on the look-up table is called the second shadow color
  • When the two manners are combined, the shadow color may be the darker or the lighter of the first shadow color and the second shadow color, or a shadow color obtained by weighting the first shadow color and the second shadow color.
  • the rendering process described above may be referred to as flat shading.
  • the rendering result obtained in step 303 may also be stroked to form a line outline in a two-dimensional rendering effect.
  • the so-called stroke refers to rendering the edge outline of the 3D model with a predetermined stroke color.
  • The traditional stroke method generally renders the entire three-dimensional model into a two-dimensional image first, then performs edge detection on the two-dimensional image to determine the edge pixels, and finally renders the edge pixels with the stroke color.
  • This stroke method also needs to render the entire 3D model into a 2D image first, and then perform post-processing, and the rendering efficiency is low.
  • the three-dimensional model may be enlarged to obtain an enlarged model, and then the enlarged model may be rendered based on a predetermined stroke color.
  • the stroke effect shown in FIG. 6 can be realized.
  • each vertex on the three-dimensional model may be displaced along a direction of a projection vector of a normal of the vertex on a projection plane to obtain an enlarged model.
  • The enlarged model obtained in this way does not need to be further displaced: its middle part will be blocked by the 3D model before enlargement, and the enlarged part around the 3D model will extend beyond the 3D model before enlargement, forming the line outline of the two-dimensional rendering effect.
  • the three-dimensional model may also be enlarged in other ways, which is not limited in the present disclosure.
  • In FIG. 5, the circle shown by the solid line represents the three-dimensional model before enlargement; A, B, and C are three vertices on the three-dimensional model before enlargement; and fA, fB, and fC respectively represent the directions of the projection vectors, on the projection plane, of the normals corresponding to vertices A, B, and C.
  • the whole three-dimensional model can be uniformly enlarged, that is, the enlargement degree of each part of the three-dimensional model is the same.
  • When the three-dimensional model is not enlarged uniformly, the magnification degree corresponding to a vertex with a larger displacement amount is larger than that corresponding to a vertex with a smaller displacement amount.
  • The amount of displacement of the vertices is related to the desired width of the edge outline: to render a wider edge outline, a larger displacement can be set, and to render a narrower edge outline, a smaller displacement can be set. That is, the amount of displacement is positively correlated with the width of the rendered edge outline. The desired edge outline width can also be input in advance, and the displacement of each vertex is then automatically determined based on the input width.
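  • The enlargement by displacing vertices along the projections of their normals can be sketched as follows; the choice of projection plane and the mapping from outline width to displacement amount are illustrative assumptions.

```python
# Minimal sketch of enlarging the model for the stroke pass: each vertex is
# displaced along the projection of its normal onto the projection plane.

import numpy as np

def enlarge(vertices: np.ndarray, normals: np.ndarray,
            plane_normal: np.ndarray, outline_width: float) -> np.ndarray:
    """vertices, normals: (N, 3) arrays; plane_normal: normal of the assumed
    projection plane; outline_width: displacement, positively correlated with
    the rendered outline width."""
    plane_normal = plane_normal / np.linalg.norm(plane_normal)
    # Project each vertex normal onto the projection plane.
    projected = normals - np.outer(normals @ plane_normal, plane_normal)
    lengths = np.linalg.norm(projected, axis=1, keepdims=True)
    directions = np.divide(projected, lengths, out=np.zeros_like(projected),
                           where=lengths > 1e-8)
    return vertices + outline_width * directions

verts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
norms = np.array([[1.0, 0.0, 0.2], [0.0, 1.0, 0.2]])
print(enlarge(verts, norms, plane_normal=np.array([0.0, 0.0, 1.0]),
              outline_width=0.05))
```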
  • After the enlarged model is obtained, it can be rendered with the stroke color. Either the entire enlarged 3D model can be rendered with the stroke color, or only the part of the enlarged 3D model outside the original 3D model can be rendered. The former rendering method can effectively avoid gaps between the stroke rendering result and the flat-shading rendering result, which would otherwise affect the rendering effect.
  • the aforementioned stroke color may be a single color or a gradient color.
  • The stroke color can be pre-specified by the user, randomly selected from multiple colors, or set to a default color. In order to highlight the difference between the edge outline and the bright area, a color whose difference from the color of the bright area is greater than a preset difference value can be selected as the stroke color.
  • The front face of each fragment in the bright area may be rendered based on the color of the three-dimensional model, and the front face of each fragment in the shadow area may be rendered based on a predetermined shadow color. In other words, only the front faces of the fragments in the bright area and the front faces of the fragments in the shadow area are rendered, while the back faces of the fragments in the bright area and in the shadow area are not rendered.
  • This rendering method is called back-face-culling rendering.
  • Correspondingly, the back face of each fragment in the enlarged model may be rendered based on the predetermined stroke color; that is, only the back face of each fragment in the enlarged model is rendered, while the front face of each fragment in the enlarged model is not rendered.
  • This rendering method is called front-face-culling rendering.
  • the front face of a fragment is the face obtained by connecting multiple vertices on the fragment along the first direction
  • the back face of a fragment is the face obtained by connecting multiple vertices on the fragment along the second direction
  • the first direction is opposite to the second direction.
  • the front face of a fragment can be defined as a face formed by connecting multiple vertices in a clockwise manner
  • the back face of a fragment can be defined as a face formed by connecting multiple vertices in a counterclockwise manner.
  • the front and the back can also be defined in other ways according to actual needs.
  • As shown in FIG. 7, a, b, and c represent the three vertices of a fragment, and the direction of the arrow represents the connection order of the vertices. Only the surface (7-1) formed by connecting the three vertices a, b, and c in the clockwise direction is rendered, while the surface (7-2) formed by connecting the three vertices a, b, and c in the counterclockwise direction is not rendered.
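  • The facing of a fragment can be determined from the screen-space winding order of its vertices, as sketched below; treating clockwise as the front face is the convention assumed here, and the opposite convention could be used instead.

```python
# Illustrative check of a fragment's facing from its 2D vertex winding order.

def signed_area_2d(a, b, c) -> float:
    # Positive for counterclockwise winding, negative for clockwise (y up).
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def is_front_face(a, b, c) -> bool:
    return signed_area_2d(a, b, c) < 0  # clockwise -> front, by this convention

a, b, c = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
print(is_front_face(a, b, c))  # False: a->b->c is counterclockwise (back face)
print(is_front_face(a, c, b))  # True: reversed order is clockwise (front face)
```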
  • Before the enlarged three-dimensional model is rendered based on the predetermined stroke color, the enlarged model may be moved in a direction away from the rendering camera. This step can prevent the enlarged model and the 3D model from intersecting where the model is thin, which would otherwise produce incorrect rendering effects. As shown in FIG. 8, assuming that the distance between the enlarged model and the rendering camera before moving is d1, the enlarged model can be moved to a position at a distance d2 from the rendering camera, where d2 is greater than d1.
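  • A sketch of moving the enlarged model away from the rendering camera (from distance d1 to a larger distance d2) is shown below; the offset value and the use of the model center to define the direction are illustrative assumptions.

```python
# Hypothetical sketch of pushing the enlarged model away from the camera so
# that thin parts of the original model do not intersect it.

import numpy as np

def push_away_from_camera(vertices: np.ndarray, camera_pos: np.ndarray,
                          offset: float) -> np.ndarray:
    """Move every vertex by `offset` along the direction from the camera to the
    model center, increasing its distance d1 to a larger distance d2."""
    center = vertices.mean(axis=0)
    direction = center - camera_pos
    direction = direction / np.linalg.norm(direction)
    return vertices + offset * direction

verts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
print(push_away_from_camera(verts, camera_pos=np.zeros(3), offset=0.1))
```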
  • the above two processes of flat shading and stroke can be realized by using different rendering passes (Pass).
  • The processes in the two rendering passes can be executed in parallel, or the flat shading can be performed first through the first rendering pass and, after the flat shading is completed, the stroke can be performed through the second rendering pass.
  • The embodiments of the present disclosure can be applied to scenarios such as two-dimensional-style 3D games, two-dimensional-style 3D stickers for live streamers (anchors), or two-dimensional-style rendering of 3D virtual anchors.
  • the 3D model is the 3D model of the anchor.
  • the image of the anchor can be obtained in real time, and the three-dimensional modeling of the anchor can be performed based on the image of the anchor to obtain the three-dimensional model of the anchor.
  • the three-dimensional model of the anchor is rendered in real time by using the rendering method of the embodiment of the present disclosure, so as to obtain a virtual anchor image in a two-dimensional style. Since the rendering method of the embodiment of the present disclosure does not require post-processing and has high real-time performance, it can be applied to the above-mentioned scenarios.
  • The embodiment of the present disclosure also provides another method for rendering a three-dimensional model, and the method includes steps 901 to 902.
  • Step 901: In a first rendering pass, render the bright area of the 3D model based on the color of the 3D model, and render the shadow area of the 3D model based on a predetermined shadow color.
  • Step 902: In the second rendering pass, enlarge the 3D model to obtain an enlarged model, and render the enlarged model based on a predetermined stroke color, where the middle area of the enlarged model is blocked by the 3D model.
  • the embodiments of the present disclosure respectively perform flat shading and stroke through two rendering passes, wherein the flat shading process is implemented by rendering a 3D model, and the stroke process is implemented by rendering an enlarged model. Since both the flat shading process and the stroke process are directly rendered with the determined color, there is no need to render the 3D model into a 2D image before post-processing, thus improving the rendering efficiency.
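  • The two-pass structure can be sketched at a high level as follows; the draw-command tuples and all names are hypothetical stand-ins that only illustrate the pass structure, not a real graphics API.

```python
# High-level sketch of the two rendering passes described above. The "draw"
# tuples just record what would be submitted; names are illustrative only.

def flat_shading_pass(bright_fragments, shadow_fragments, model_color, shadow_color):
    # First pass: original model, back faces culled.
    return ([("draw", f, model_color, "cull_back") for f in bright_fragments] +
            [("draw", f, shadow_color, "cull_back") for f in shadow_fragments])

def stroke_pass(enlarged_fragments, stroke_color):
    # Second pass: enlarged model, front faces culled, stroke color only. Its
    # middle region ends up occluded by the original model, leaving an outline.
    return [("draw", f, stroke_color, "cull_front") for f in enlarged_fragments]

commands = (flat_shading_pass(["f0", "f2"], ["f1"], (200, 40, 40), (30, 30, 60))
            + stroke_pass(["f0e", "f1e", "f2e"], (0, 0, 0)))
for cmd in commands:
    print(cmd)
```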
  • FIG. 9B shows a specific process of performing flat shading through the first rendering pass and a specific process of performing model strokes through the second rendering pass.
  • (1-2) Determine the first light-dark distribution of the 3D model under the perspective of the rendering camera (the light and dark caused by the illumination direction) from the illumination direction, the normal map, and the position of the rendering camera.
  • (2-1) Obtain the inputs for rendering, such as the vertex positions of the 3D model, the projection matrix of the rendering camera, the model matrix, bone data, and the vertex normals of the 3D model. The projection matrix is the transformation matrix between the rendering camera coordinate system and the world coordinate system; the model matrix is used to describe the displacement and rotation of the 3D model; the bone data is used to represent the deformation of a 3D model with bones (for example, a 3D model of a character), and for a 3D model without bones (for example, a table) the input may not include bone data.
  • (2-2) Calculate the position and normal information of each vertex of the 3D model in the world coordinate system from the vertex positions, vertex normals, and model matrix of the 3D model. Since the position of the light source is generally represented in the world coordinate system, the purpose of this step is to project the 3D model into the world coordinate system so that the 3D model and the light source are in the same coordinate system. In addition to converting the 3D model to the world coordinate system, it is also possible to convert the light source to the rendering camera coordinate system, or to convert both the 3D model and the rendering camera to other coordinate systems, as long as they are in the same coordinate system.
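  • Step (2-2) can be sketched as follows; the example model matrix (a pure translation) and the use of the inverse-transpose for transforming normals are illustrative assumptions.

```python
# Sketch of transforming vertex positions and normals into world coordinates
# with the model matrix, so the model and the light source share one space.

import numpy as np

def to_world(vertices: np.ndarray, normals: np.ndarray, model_matrix: np.ndarray):
    ones = np.ones((vertices.shape[0], 1))
    world_pos = (np.hstack([vertices, ones]) @ model_matrix.T)[:, :3]
    # Normals transform with the inverse-transpose of the upper-left 3x3 block.
    normal_matrix = np.linalg.inv(model_matrix[:3, :3]).T
    world_nrm = normals @ normal_matrix.T
    world_nrm /= np.linalg.norm(world_nrm, axis=1, keepdims=True)
    return world_pos, world_nrm

M = np.eye(4); M[:3, 3] = [1.0, 0.0, 2.0]            # translate by (1, 0, 2)
verts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
norms = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
print(to_world(verts, norms, M))
```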
  • (2-5) Render the enlarged model with the pre-specified stroke color while culling front faces (the shadow area and bright area of the original 3D model are rendered while culling back faces). In this way, the middle part of the enlarged model is blocked by the original 3D model, and the enlarged peripheral part extends beyond the original 3D model, forming the line outline of the 2D rendering effect.
  • The result is a celluloid-style two-dimensional rendering effect with edge outlines and two-tone coloring.
  • The stroke effect in this process can also use tangents to indirectly calculate smoothly transitioning normals before enlarging, so as to achieve smoother strokes and avoid the problem of strokes breaking at sharp corners; the core, however, is to draw an inverted (back-face) model larger than the original model to simulate the stroke effect.
  • the rendering result of the method of the embodiment of the present disclosure is a two-dimensional drawing style, which saves the steps and time of post-processing.
  • The embodiment of the present disclosure also provides another method for rendering a 3D model, and the method includes steps 1001 to 1002.
  • Step 1001: Enlarge the three-dimensional model to obtain an enlarged model, where the middle area of the enlarged model is blocked by the three-dimensional model.
  • Step 1002: Render the enlarged model based on a predetermined stroke color.
  • The embodiment of the present disclosure directly uses the stroke color to render the enlarged model. Compared with the traditional rendering method, in which the entire 3D model is first rendered into a 2D image and edge segmentation is then performed through post-processing, the embodiment of the present disclosure can effectively improve rendering efficiency.
  • Enlarging the three-dimensional model to obtain the enlarged model includes: displacing each vertex on the three-dimensional model along the direction of the projection vector of the normal of the vertex on the projection plane, to obtain the enlarged model.
  • the middle area of the enlarged model obtained in this way can be directly blocked by the three-dimensional model before the enlargement without displacement, and the processing complexity is low.
  • Before the enlarged model is rendered based on the predetermined stroke color, the method may further include: moving the enlarged model in a direction away from the rendering camera. This step can prevent the enlarged model and the 3D model from intersecting where the model is thin, which would otherwise produce incorrect rendering effects.
  • the method further includes: rendering a bright area of the three-dimensional model based on a color of the three-dimensional model, and rendering a shaded area of the three-dimensional model based on a predetermined shade color.
  • it is not necessary to first render the entire 3D model into a 2D image, but can directly obtain the final rendering effect.
  • the rendering method of the embodiment of the present disclosure has higher rendering efficiency and is more suitable for real-time rendering scenarios.
  • Rendering the bright area of the 3D model based on the color of the 3D model and rendering the shaded area of the 3D model based on a predetermined shadow color includes: rendering the front face of each fragment in the bright area based on the color of the 3D model, and rendering the front face of each fragment in the shadow area based on the predetermined shadow color. Rendering the enlarged model based on a predetermined stroke color includes: rendering the back face of each fragment in the enlarged model based on the predetermined stroke color. Here, the front face of a fragment is the surface obtained by connecting multiple vertices on the fragment along a first direction, the back face of a fragment is the surface obtained by connecting multiple vertices on the fragment along a second direction, and the first direction is opposite to the second direction.
  • the influence of the stroke color on the color of the bright area and the shadow area of the 3D model can be reduced, and the rendering effect can be improved.
  • This disclosure relates to the field of augmented reality.
  • By acquiring image information of a target object in the real environment and then using various vision-related algorithms to detect or identify the relevant features, states, and attributes of the target object, an AR effect combining the virtual and the real that matches the specific application can be obtained.
  • the target object may involve faces, limbs, gestures, actions, etc. related to the human body, or markers and markers related to objects, or sand tables, display areas or display items related to venues or places.
  • Vision-related algorithms can involve visual positioning, SLAM, 3D reconstruction, image registration, background segmentation, object key point extraction and tracking, object pose or depth detection, etc.
  • Specific applications can involve not only interactive scenes related to real scenes or objects, such as guided tours, navigation, explanation, reconstruction, virtual effect overlay, and display, but also special-effect processing related to people, such as interactive scenarios including makeup beautification, body beautification, special-effect display, and virtual model display.
  • the relevant features, states and attributes of the target object can be detected or identified through the convolutional neural network.
  • the above-mentioned convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
  • The order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
  • an embodiment of the present disclosure also provides a rendering device for a 3D model
  • The device includes: a determining module 1101, configured to determine the light-dark distribution of the 3D model under the viewing angle of the rendering camera, where the light-dark distribution characterizes the brightness value of each fragment on the 3D model; a division module 1102, configured to divide the 3D model into bright areas and shadow areas based on the light-dark distribution; and a rendering module 1103, configured to render the bright area based on the color of the 3D model and render the shadow area based on a predetermined shadow color.
  • An embodiment of the present disclosure also provides a three-dimensional model rendering device, which includes: a first rendering module 1201, configured to, in a first rendering pass, render the bright area of the three-dimensional model based on the color of the three-dimensional model and render the shadow area of the three-dimensional model based on a predetermined shadow color; and a second rendering module 1202, configured to, in a second rendering pass, enlarge the three-dimensional model to obtain an enlarged model and render the enlarged model based on a predetermined stroke color, where the middle area of the enlarged model is blocked by the three-dimensional model.
  • The embodiment of the present disclosure also provides a rendering device for a three-dimensional model, the device including: an enlargement module 1301, configured to enlarge the three-dimensional model to obtain an enlarged model, where the middle area of the enlarged model is blocked by the three-dimensional model; and a rendering module 1302, configured to render the enlarged model based on a predetermined stroke color.
  • The functions or modules included in the devices provided by the embodiments of the present disclosure can be used to execute the methods described in the method embodiments above; for their specific implementation, reference may be made to the description of the method embodiments above, and details are not repeated here for brevity.
  • An embodiment of this specification also provides a computer device, which at least includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the method described in any of the preceding embodiments.
  • FIG. 14 shows a schematic diagram of a more specific hardware structure of a computing device provided by the embodiment of this specification.
  • the device may include: a processor 1401 , a memory 1402 , an input/output interface 1403 , a communication interface 1404 and a bus 1405 .
  • the processor 1401 , the memory 1402 , the input/output interface 1403 and the communication interface 1404 are connected to each other within the device through the bus 1405 .
  • the processor 1401 may be implemented by a general-purpose CPU (Central Processing Unit, central processing unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, and is used to execute related programs to realize the technical solutions provided by the embodiments of this specification.
  • the processor 1401 may also include a graphics card, and the graphics card may be an Nvidia titan X graphics card or a 1080Ti graphics card.
  • the memory 1402 can be implemented in the form of ROM (Read Only Memory, read-only memory), RAM (Random Access Memory, random access memory), static storage device, dynamic storage device, etc.
  • the memory 1402 can store operating systems and other application programs. When implementing the technical solutions provided by the embodiments of this specification through software or firmware, the relevant program codes are stored in the memory 1402 and invoked by the processor 1401 for execution.
  • the input/output interface 1403 is used to connect the input/output module to realize information input and output.
  • The input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide corresponding functions.
  • the input device may include a keyboard, mouse, touch screen, microphone, various sensors, etc.
  • the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface 1404 is used to connect a communication module (not shown in the figure), so as to realize the communication interaction between the device and other devices.
  • the communication module can realize communication through wired means (such as USB, network cable, etc.), and can also realize communication through wireless means (such as mobile network, WIFI, Bluetooth, etc.).
  • Bus 1405 includes a path for transferring information between the various components of the device (eg, processor 1401, memory 1402, input/output interface 1403, and communication interface 1404).
  • In addition, the device may also include other components necessary for its normal operation.
  • the above-mentioned device may only include components necessary to implement the solutions of the embodiments of this specification, and does not necessarily include all the components shown in the figure.
  • An embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method described in any one of the foregoing embodiments is implemented.
  • Computer-readable media includes both volatile and non-volatile, removable and non-removable media, and can be implemented by any method or technology for storage of information.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cartridge, tape magnetic disk storage or other magnetic storage device or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • computer-readable media excludes transitory computer-readable media, such as modulated data signals and carrier waves.
  • A typical implementing device is a computer, which may take the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, e-mail device, game console, desktop computer, tablet computer, wearable device, or any combination of these devices.
  • each embodiment in this specification is described in a progressive manner, the same and similar parts of each embodiment can be referred to each other, and each embodiment focuses on the differences from other embodiments.
  • For the device embodiments, the description is relatively simple, and for relevant parts, reference may be made to the corresponding description of the method embodiments.
  • the device embodiments described above are only illustrative, and the modules described as separate components may or may not be physically separated, and the functions of each module may be integrated in the same or multiple software and/or hardware implementations. Part or all of the modules can also be selected according to actual needs to achieve the purpose of the solution of this embodiment. It can be understood and implemented by those skilled in the art without creative effort.

Abstract

Embodiments of the present disclosure provide a three-dimensional model rendering method and apparatus, a storage medium, and a computer device. The method comprises: determining light-dark distribution of a three-dimensional model from the perspective of a rendering camera, wherein the light-dark distribution is used for representing a brightness value of each fragment on the three-dimensional model; dividing, on the basis of the light-dark distribution, the three-dimensional model into a bright area and a shadow area; and rendering the bright area on the basis of the color of the three-dimensional model, and rendering the shadow area on the basis of a predetermined shadow color.

Description

三维模型的渲染Rendering of 3D models
相关申请交叉引用Related Application Cross Reference
本申请主张申请号为202111211776.4、申请日为2021年10月18日的中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。This application claims the priority of the Chinese patent application with the application number 202111211776.4 and the filing date of October 18, 2021. The entire content of the Chinese patent application is hereby incorporated by reference into this application.
技术领域technical field
本公开涉及图像处理技术领域,尤其涉及三维模型的渲染方法和装置、存储介质及计算机设备。The present disclosure relates to the technical field of image processing, and in particular to a three-dimensional model rendering method and device, a storage medium, and computer equipment.
背景技术Background technique
三维(three-dimensional,3D)模型的渲染一般可分为真实渲染(Photorealistic Rendering,PR)和非真实渲染(Non-photorealistic rendering,NPR)。渲染的主要目的是模拟特定的艺术风格。真实渲染目的在于获取具有真实感的渲染效果,而非真实渲染的目的更加的多样,主要在于模拟艺术化的绘制风格,呈现出手绘的效果。然而,相关技术中的渲染方式需要先将整个三维模型完整地绘制到二维图像上,再对绘制好的图像进行后处理,这种方式的渲染效率较低。The rendering of three-dimensional (3D) models can generally be divided into photorealistic rendering (PR) and non-photorealistic rendering (NPR). The main purpose of rendering is to simulate a specific art style. The purpose of realistic rendering is to obtain a realistic rendering effect, while the purpose of non-realistic rendering is more diverse, mainly to simulate an artistic drawing style and present a hand-painted effect. However, the rendering method in the related art needs to first completely draw the entire 3D model on the 2D image, and then perform post-processing on the drawn image, and the rendering efficiency of this method is low.
Summary
In a first aspect, embodiments of the present disclosure provide a rendering method for a three-dimensional model. The method includes: determining a light-dark distribution of the three-dimensional model from the perspective of a rendering camera, where the light-dark distribution is used to represent a brightness value of each fragment on the three-dimensional model; dividing the three-dimensional model into a bright area and a shadow area based on the light-dark distribution; and rendering the bright area based on the color of the three-dimensional model, and rendering the shadow area based on a predetermined shadow color.
In a second aspect, embodiments of the present disclosure provide a rendering method for a three-dimensional model. The method includes: in a first rendering pass, rendering a bright area of the three-dimensional model based on the color of the three-dimensional model, and rendering a shadow area of the three-dimensional model based on a predetermined shadow color; and in a second rendering pass, enlarging the three-dimensional model to obtain an enlarged model and rendering the enlarged model based on a predetermined stroke color, where a middle area of the enlarged model is occluded by the three-dimensional model.
In a third aspect, embodiments of the present disclosure provide a rendering method for a three-dimensional model. The method includes: enlarging the three-dimensional model to obtain an enlarged model, where a middle area of the enlarged model is occluded by the three-dimensional model; and rendering the enlarged model based on a predetermined stroke color. The embodiments of the present disclosure render the enlarged model directly with the stroke color, which effectively improves rendering efficiency compared with the conventional approach of first rendering the entire three-dimensional model and then performing edge segmentation through post-processing.
In a fourth aspect, embodiments of the present disclosure provide a rendering apparatus for a three-dimensional model. The apparatus includes: a determining module, configured to determine a light-dark distribution of the three-dimensional model from the perspective of a rendering camera, where the light-dark distribution is used to represent a brightness value of each fragment on the three-dimensional model; a dividing module, configured to divide the three-dimensional model into a bright area and a shadow area based on the light-dark distribution; and a rendering module, configured to render the bright area based on the color of the three-dimensional model and render the shadow area based on a predetermined shadow color.
In a fifth aspect, embodiments of the present disclosure provide a rendering apparatus for a three-dimensional model. The apparatus includes: a first rendering module, configured to, in a first rendering pass, render a bright area of the three-dimensional model based on the color of the three-dimensional model and render a shadow area of the three-dimensional model based on a predetermined shadow color; and a second rendering module, configured to, in a second rendering pass, enlarge the three-dimensional model to obtain an enlarged model and render the enlarged model based on a predetermined stroke color, where a middle area of the enlarged model is occluded by the three-dimensional model.
In a sixth aspect, embodiments of the present disclosure provide a rendering apparatus for a three-dimensional model. The apparatus includes: an enlarging module, configured to enlarge the three-dimensional model to obtain an enlarged model, where a middle area of the enlarged model is occluded by the three-dimensional model; and a rendering module, configured to render the enlarged model based on a predetermined stroke color. The embodiments of the present disclosure render the enlarged model directly with the stroke color, which effectively improves rendering efficiency compared with the conventional approach of first rendering the entire three-dimensional model and then performing edge segmentation through post-processing.
In a seventh aspect, embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any one of the first to third aspects.
In an eighth aspect, embodiments of the present disclosure provide a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the method described in any one of the first to third aspects.
In a ninth aspect, embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the method described in any one of the first to third aspects.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
附图说明Description of drawings
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。The accompanying drawings here are incorporated into the description and constitute a part of the present description. These drawings show embodiments consistent with the present disclosure, and are used together with the description to explain the technical solution of the present disclosure.
图1A和图1B分别是一些实施例的不同渲染风格的示意图。FIG. 1A and FIG. 1B are schematic diagrams of different rendering styles of some embodiments, respectively.
图2是渲染方式的示意图。Figure 2 is a schematic diagram of the rendering method.
图3是本公开实施例的三维模型的渲染方法的流程图。Fig. 3 is a flow chart of a rendering method of a 3D model according to an embodiment of the present disclosure.
图4A和图4B分别是不同渲染相机视角的示意图。FIG. 4A and FIG. 4B are schematic diagrams of different rendering camera perspectives, respectively.
图5是本公开实施例的原始三维模型与放大模型的示意图。FIG. 5 is a schematic diagram of an original three-dimensional model and an enlarged model of an embodiment of the present disclosure.
图6是本公开实施例的描边效果的示意图。FIG. 6 is a schematic diagram of a stroke effect according to an embodiment of the present disclosure.
图7是本公开实施例的剔除正面和剔除背面的渲染方式的示意图。FIG. 7 is a schematic diagram of rendering methods of culling front and culling back surfaces according to an embodiment of the present disclosure.
图8是放大模型的移动方式的示意图。Fig. 8 is a schematic diagram of a moving manner of an enlarged model.
图9A是本公开另一实施例的三维模型的渲染方法的流程图。FIG. 9A is a flowchart of a three-dimensional model rendering method according to another embodiment of the present disclosure.
图9B是本公开实施例的整体流程图。FIG. 9B is an overall flowchart of an embodiment of the present disclosure.
图10是本公开再一实施例的三维模型的渲染方法的流程图。Fig. 10 is a flowchart of a rendering method of a 3D model according to yet another embodiment of the present disclosure.
图11是本公开实施例的三维模型的渲染装置的框图。Fig. 11 is a block diagram of a three-dimensional model rendering device according to an embodiment of the present disclosure.
图12是本公开另一实施例的三维模型的渲染装置的框图。Fig. 12 is a block diagram of a three-dimensional model rendering device according to another embodiment of the present disclosure.
图13是本公开再一实施例的三维模型的渲染装置的框图。Fig. 13 is a block diagram of an apparatus for rendering a 3D model according to yet another embodiment of the present disclosure.
图14是本公开实施例的计算机设备的结构示意图。FIG. 14 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as recited in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used in the present disclosure and the appended claims, the singular forms "a", "the", and "said" are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. In addition, the term "at least one" herein means any one of a plurality of items, or any combination of at least two of a plurality of items.
It should be understood that although the terms "first", "second", "third", and the like may be used in the present disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present disclosure, and to make the above objectives, features, and advantages of the embodiments more comprehensible, the technical solutions in the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings.
The rendering of 3D models can generally be divided into photorealistic rendering and non-photorealistic rendering. Photorealistic rendering aims to obtain a realistic result; as shown in FIG. 1A, it produces a rendering effect in which colors change continuously. Non-photorealistic rendering can simulate artistic drawing styles, among which the anime ("2D") rendering style has been widely used in recent years. As shown in FIG. 1B, anime-style rendering can simulate the blocked color and brightness effect typical of animation.
Rendering of a 3D model is generally performed by post-processing. FIG. 2 shows such a rendering process. First, in step 201, model information of the 3D model illuminated by a light source may be acquired; the model information may include the position information, color information, and the like of each vertex of the 3D model. In step 202, the 3D model may be completely drawn onto a 2D image based on the model information acquired in step 201. In step 203, color correction is performed on the drawn 2D image. In some embodiments, the color of each pixel in the 2D image may be read, and multiple different colors whose color differences are within a preset range may be corrected to the same color, thereby partitioning the 2D image by color and achieving the abrupt color transitions characteristic of the anime style. As can be seen, this rendering process needs to draw the complete 3D model first and then read the color of each pixel of the 2D image one by one; rendering therefore takes a long time and is inefficient.
On this basis, the present disclosure provides a rendering method for a 3D model. Referring to FIG. 3, the method includes steps 301 to 303.
Step 301: determine a light-dark distribution of the 3D model from the perspective of a rendering camera, where the light-dark distribution is used to represent a brightness value of each fragment on the 3D model.
Step 302: divide the 3D model into a bright area and a shadow area based on the light-dark distribution.
Step 303: render the bright area based on the color of the 3D model, and render the shadow area based on a predetermined shadow color.
The method of the embodiments of the present disclosure can render a 3D model in the anime style. When rendering, the embodiments of the present disclosure do not need to first draw the entire 3D model onto a 2D image. Instead, the 3D model is first divided into a bright area and a shadow area; the bright area is then rendered directly based on the color of the 3D model, and the shadow area is rendered based on a predetermined shadow color, so that the final rendering effect is obtained directly. Compared with a rendering method that first draws the 3D model onto a 2D image and then performs post-processing, the rendering method of the embodiments of the present disclosure has higher rendering efficiency and is better suited to real-time rendering scenarios.
In step 301, the 3D model may be a model corresponding to a target object in a scene to be rendered. The scene to be rendered may include one or more target objects, and a target object may include, but is not limited to, a person, an animal, a table, a bag, a house, and the like, and may also be a local region of an object, for example, a face or the roof of a house. In some embodiments, the rendering method of the embodiments of the present disclosure may be used to render the 3D models corresponding to all target objects in the scene to be rendered. In other embodiments, the rendering method of the embodiments of the present disclosure may be used to render only the 3D models corresponding to some of the target objects, while other rendering methods are used for the 3D models corresponding to the remaining target objects; the 3D models corresponding to different target objects may also be rendered in different styles. For example, if the scene to be rendered includes target object A and target object B, the 3D model corresponding to target object A may be rendered in the anime style and the 3D model corresponding to target object B in a photorealistic style, thereby mixing the anime rendering style with other rendering styles within the scene.
The rendering camera may be a virtual camera. During rendering, a rendering camera may be created at a preset position relative to the 3D model, and the 3D model lies within the rendering camera's field of view. When the rendering camera is at different positions, the parts of the 3D model that it can capture are usually different, so the final rendering effects also differ. Taking a 3D model of a person as an example, as shown in FIG. 4A, when the rendering camera faces the person's face, the part of the 3D model seen from the rendering camera's perspective is the front of the person (the side including the face), and the finally rendered 2D image also shows the front of the person. When the rendering camera faces the back of the person's head, as shown in FIG. 4B, the part seen from the rendering camera's perspective is the back of the person (the side including the back of the head), and the finally rendered 2D image also shows the back of the person.
The light-dark distribution of the 3D model is used to represent the brightness value of each fragment on the 3D model, where a fragment is a face formed by connected vertices. In some embodiments, a real number between 0 and 1 may be used to represent the brightness value of a fragment: the closer the brightness value is to 1, the brighter the fragment; the closer it is to 0, the darker the fragment. Those skilled in the art will appreciate that the representation of brightness is not limited thereto; other numerical ranges (for example, 0 to 100) may also be used. Letters or symbols may likewise represent brightness levels, for example, the letters "A", "B", and "C" may denote brightness levels from low to high, where a higher brightness level indicates a brighter fragment and a lower level a darker fragment.
Since the final rendering result only includes the model region of the 3D model visible from the rendering camera's perspective, this step may acquire only the light-dark distribution of that model region, that is, the brightness values of the fragments within that region. Still taking the 3D model of a person as an example, when the rendering camera faces the person's face, only the brightness values of the fragments on the front of the person may be acquired.
In some embodiments, a first light-dark distribution of the 3D model from the rendering camera's perspective may be determined based on the illumination direction, the normal map of the 3D model, and the position of the rendering camera; a second light-dark distribution of the 3D model from the rendering camera's perspective may be determined based on the shadow map of the 3D model; and the light-dark distribution of the 3D model from the rendering camera's perspective may then be determined based on the first and second light-dark distributions. The normal map represents the normal of each vertex on the 3D model, from which the surface relief at each vertex can be determined. The shadow map represents the distance between each vertex of the 3D model and the light source, from which it can be determined whether each vertex is occluded. Determining the light-dark distribution jointly from the normal map and the shadow map yields higher accuracy.
In some embodiments, for each fragment of the 3D model visible from the rendering camera's perspective, a first brightness value of the fragment may be determined based on the first light-dark distribution, and a second brightness value of the fragment may be determined based on the second light-dark distribution; the brightness value of the fragment is then determined from the first and second brightness values. For example, the product of the first and second brightness values may be taken as the brightness value of the fragment. Alternatively, the smaller of the two may be taken as the brightness value of the fragment. Alternatively, the first brightness value may be determined first: if it is less than or equal to a preset brightness threshold, it is directly taken as the brightness value of the fragment; if it is greater than the threshold, the second brightness value is determined, and if the second brightness value is less than or equal to the threshold, the second brightness value is taken as the brightness value of the fragment, otherwise the first brightness value is taken.
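The per-fragment combination rules described above can be summarized in a short sketch. The following Python snippet is only illustrative and not part of the original disclosure; the function name, the `mode` parameter, and the assumption that both distributions are already sampled as per-fragment brightness values in [0, 1] are assumptions made for the example.

```python
def combine_brightness(first, second, mode="product", threshold=0.5):
    """Combine the lighting-based and shadow-map-based brightness of one fragment.

    `first` and `second` are brightness values in [0, 1] taken from the first
    and second light-dark distributions; `mode` selects one of the combination
    rules described above.
    """
    if mode == "product":
        return first * second
    if mode == "min":
        return min(first, second)
    if mode == "threshold":
        # Prefer whichever value already falls at or below the threshold,
        # falling back to the first value when both exceed it.
        if first <= threshold:
            return first
        return second if second <= threshold else first
    raise ValueError(f"unknown mode: {mode}")
```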
In other embodiments, the first light-dark distribution or the second light-dark distribution may be used directly as the light-dark distribution of the 3D model from the rendering camera's perspective. Alternatively, at least one of the first and second light-dark distributions may be combined with a third light-dark distribution determined in another way to obtain the light-dark distribution of the 3D model from the rendering camera's perspective.
In step 302, the 3D model may be divided into a bright area and a shadow area based on the light-dark distribution determined in step 301. For example, the model region of the 3D model visible from the rendering camera's perspective may be divided into a bright area and a shadow area. The shadow area is the region of the 3D model that is not illuminated by the light source, or is reached by only a small amount of light, and is therefore in shadow; the brightness values of fragments in the shadow area are relatively small. The bright area is the region of the 3D model outside the shadow area; it receives a large amount of light, and the brightness values of its fragments are relatively large. Dividing the model into a bright area and a shadow area simulates the abrupt brightness transitions of the anime style.
In some embodiments, the region of the 3D model whose brightness values satisfy a first preset brightness condition may be determined as the bright area, and the region whose brightness values satisfy a second preset brightness condition may be determined as the shadow area.
In some embodiments, the first preset brightness condition may be that the brightness value is greater than or equal to a lower brightness threshold, and the second preset brightness condition may be that the brightness value is less than an upper brightness threshold. The lower and upper brightness thresholds may be equal, or the lower brightness threshold may be greater than the upper brightness threshold. A region whose brightness values lie between the upper and lower brightness thresholds may be assigned randomly to the bright area or the shadow area, or may be further classified as bright or shadow based on other conditions.
In some embodiments, the first preset brightness condition may be that the brightness value falls within a preset brightness interval [L1, L2], and the second preset brightness condition may be that the brightness value falls outside the interval [L1, L2]. Alternatively, the first preset brightness condition may be that the brightness value equals any one of a preset set of brightness values, and the second preset brightness condition may be that it equals none of them. The first and second preset brightness conditions may also be set to other conditions according to actual needs, so as to obtain different rendering effects.
The solutions of the embodiments of the present disclosure are described below taking as an example the case where the first preset brightness condition is that the brightness value is greater than or equal to a brightness threshold and the second preset brightness condition is that the brightness value is less than the brightness threshold.
The larger the brightness threshold, the smaller the bright area and the larger the shadow area. The brightness threshold can therefore be set as needed: a larger threshold classifies more of the 3D model as shadow, while a smaller threshold classifies more of it as bright.
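A minimal sketch of this single-threshold partition follows. It is illustrative only; the function name and the assumption that the combined brightness values are available as a mapping from fragment identifiers to values in [0, 1] are assumptions for the example.

```python
def partition_fragments(brightness, threshold=0.5):
    """Split fragments into a bright area and a shadow area with one threshold.

    `brightness` maps a fragment id to its combined brightness value in [0, 1];
    the two returned sets are the areas used in step 302.
    """
    bright_area, shadow_area = set(), set()
    for fragment_id, value in brightness.items():
        if value >= threshold:          # first preset brightness condition
            bright_area.add(fragment_id)
        else:                           # second preset brightness condition
            shadow_area.add(fragment_id)
    return bright_area, shadow_area
```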
In step 303, the colors used to render the bright area and the shadow area may be determined in different ways. The color used to render the bright area is the color of the 3D model corresponding to that area; for example, if the fragments of the 3D model within the bright area are red, those fragments are rendered red. The color used to render the shadow area is the shadow color, or is determined jointly from the shadow color and the color of the 3D model corresponding to the shadow area. The shadow color may be specified by the user in advance, or a default color may be used; it may also be randomly selected from multiple predefined candidate colors, or determined in another way.
When rendering, the color of the shadow area may be rendered directly as the shadow color, that is, the shadow color replaces the color of the 3D model corresponding to the shadow area. For example, if the color of the 3D model corresponding to the shadow area is red and the shadow color is black, the shadow area may be rendered directly in black.
Alternatively, the color of the 3D model corresponding to the shadow area (referred to as the model color) may first be determined based on the diffuse map of the 3D model, where the diffuse map represents the reflectance and surface color of the model surface; the model color is then adjusted by the shadow color, and the shadow area is rendered with the adjusted color. For example, the R, G, and B channel values of the model color may be summed with the corresponding channel values of the shadow color to obtain the adjusted color. Assuming the components of the model color in the R, G, and B channels are (R1, G1, B1) and those of the shadow color are (R2, G2, B2), the adjusted color is (R1+R2, G1+G2, B1+B2). Of course, the adjustment is not limited to simple summation of the channel components.
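As an illustration of the channel-wise adjustment above, the following sketch implements the summation example; the clamp to 1.0 and the sample color values are assumptions added so the result remains a valid color, and are not part of the original disclosure.

```python
def adjust_shadow_color(model_color, shadow_color):
    """Per-channel adjustment of the diffuse (model) color by the shadow color.

    Colors are (R, G, B) tuples with components in [0, 1]. Plain summation is
    shown because it is the example given above; the sum is clamped to 1.0.
    """
    return tuple(min(m + s, 1.0) for m, s in zip(model_color, shadow_color))

# e.g. a red model color tinted by a dark blue shadow color
adjusted = adjust_shadow_color((0.8, 0.1, 0.1), (0.0, 0.0, 0.2))
```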
It is also possible to render the shadow area with the model color and the shadow color on different layers and then composite the two rendered layers to obtain the rendering result of the shadow area. For example, if the model color is red and the shadow color is black, red may be rendered on a first layer and black on a second layer, and the two layers may be superimposed to obtain the rendering result of the shadow area. The transparency of the first layer and that of the second layer may also be adjusted separately to produce different rendering effects.
In the above embodiments, the color of the 3D model corresponding to the bright area and the color corresponding to the shadow area may be determined based on the diffuse map of the 3D model. In the case of a single brightness threshold, the original diffuse-map color of the 3D model is drawn in the bright area, while in the shadow area the original diffuse-map color is adjusted according to the predefined shadow color before rendering, thereby achieving the two-tone shading effect of the anime style.
In some embodiments, a multi-level shading effect can also be achieved by specifying multiple thresholds. For example, the second preset brightness condition includes the brightness value being less than or equal to a first brightness threshold. The predetermined shadow color includes multiple shadow colors, which respectively correspond to multiple different brightness sub-intervals; the sub-intervals are obtained by dividing the brightness interval in which the brightness value is less than or equal to the first brightness threshold, and the maximum brightness value of the sub-intervals is less than or equal to the first brightness threshold. The brightness sub-interval to which the brightness value of each fragment in the shadow area belongs may be determined, and each fragment may be rendered with the shadow color corresponding to that sub-interval. This way of determining the shadow color is referred to as the sub-interval-based approach. Assuming brightness values are real numbers between 0 and 1 and the first brightness threshold is 0.8, the brightness interval [0, 0.8] may be divided into two sub-intervals [0, 0.3] and [0.3, 0.8], where the shadow color corresponding to [0, 0.3] is black and that corresponding to [0.3, 0.8] is gray. If the brightness value of fragment A in the shadow area falls within [0, 0.3], fragment A is rendered with black as its shadow color; if it falls within [0.3, 0.8], fragment A is rendered with gray. Those skilled in the art will appreciate that the above is merely an example, and the number and division of the brightness sub-intervals may differ.
Further, the higher the brightness range of the sub-interval to which a fragment's brightness value belongs (that is, the larger the fragment's brightness value), the lighter the fragment's shadow color; conversely, the lower the brightness range of the sub-interval, the darker the shadow color. In this way, the shadow color darkens in discrete steps as the fragment brightness decreases. Of course, this trend is not mandatory; the shadow colors of the sub-intervals may be set in other ways to obtain other shadow rendering effects.
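A brief sketch of the sub-interval-based lookup, using the example boundaries and colors above. The table layout and the return value for fragments outside the shadow range are assumptions for the illustration.

```python
# Sub-intervals of the shadow range [0, first_threshold], each paired with a
# shadow color; the boundaries and colors below mirror the example above.
SHADOW_SUBINTERVALS = [
    (0.0, 0.3, (0.0, 0.0, 0.0)),   # darkest shadow: black
    (0.3, 0.8, (0.5, 0.5, 0.5)),   # lighter shadow: gray
]

def subinterval_shadow_color(brightness):
    """Return the shadow color of the sub-interval containing `brightness`."""
    for low, high, color in SHADOW_SUBINTERVALS:
        if low <= brightness <= high:
            return color
    return None  # brightness above the first threshold: the fragment is not in shadow
```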
As another example, the second preset brightness condition includes the brightness value being less than or equal to a second brightness threshold. The predetermined shadow color includes multiple shadow colors, which respectively correspond to multiple reference brightness values in a lookup table, where every reference brightness value in the table is less than or equal to the second brightness threshold. For each fragment in the shadow area, the shadow color corresponding to the reference brightness value that matches the fragment's brightness value may be looked up in the table, and the fragment is rendered with that shadow color. This way of determining the shadow color is referred to as the lookup-table-based approach. If a reference brightness value identical to the fragment's brightness value is found in the table, that value is directly taken as the matching reference brightness value.
If no identical reference brightness value is found in the table, the reference brightness value closest to the fragment's brightness value may be taken as the matching value. Alternatively, the average of multiple reference brightness values whose differences from the fragment's brightness value lie within a preset range may be taken as the matching reference brightness value.
Further, the larger a reference brightness value stored in the lookup table, the lighter its corresponding shadow color; conversely, the smaller the reference brightness value, the darker its corresponding shadow color. In this way, the shadow color darkens in discrete steps as the fragment brightness decreases. Of course, this trend is not mandatory; the shadow colors corresponding to the reference brightness values may be set in other ways to obtain other shadow rendering effects.
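The lookup-table-based approach with the nearest-match fallback can be sketched as follows. The table contents are invented for the example, and only one of the fallback rules described above (nearest reference value) is shown.

```python
# Illustrative lookup table: reference brightness -> shadow color.
SHADOW_LUT = {
    0.1: (0.0, 0.0, 0.0),   # very dark fragments: black
    0.4: (0.3, 0.3, 0.3),   # mid shadow: dark gray
    0.7: (0.6, 0.6, 0.6),   # light shadow: light gray
}

def lut_shadow_color(brightness):
    """Pick the shadow color of the best-matching reference brightness.

    Exact matches are used directly; otherwise the nearest reference
    brightness is chosen, which is one of the fallback rules described above.
    """
    if brightness in SHADOW_LUT:
        return SHADOW_LUT[brightness]
    nearest = min(SHADOW_LUT, key=lambda ref: abs(ref - brightness))
    return SHADOW_LUT[nearest]
```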
The first brightness threshold and the second brightness threshold in the above embodiments may be the same or different. The shadow color used to adjust the color of the shadow area may be determined by either the sub-interval-based approach or the lookup-table-based approach, or by a combination of the two. For example, if the shadow color determined by the sub-interval-based approach is referred to as the first shadow color and the one determined by the lookup-table-based approach as the second shadow color, the shadow color used to adjust the shadow area may be the darker or the lighter of the two, or a weighted average of the first and second shadow colors.
In addition to the approaches listed above, the shadow color may be determined in other ways, which are not enumerated here. By applying different shadow colors to parts of the 3D model with different brightness values, the rendering effects of different painting styles can be imitated. The core idea, however, is always to compute the light-dark distribution and use it to partition the shading of the 3D model, no matter how fine-grained the partition is.
The rendering process described above may be referred to as flat shading. In some embodiments, the rendering result obtained in step 303 may further be stroked to form the line contours of the anime rendering effect. Stroking refers to rendering the edge contour of the 3D model with a predetermined stroke color.
A traditional stroking approach generally renders the entire 3D model into a 2D image, performs edge detection on the 2D image to determine its edge pixels, and then renders the edge pixels with the stroke color. This approach likewise requires rendering the entire 3D model into a 2D image before post-processing, so its rendering efficiency is low.
In the embodiments of the present disclosure, the 3D model may be enlarged to obtain an enlarged model, which is then rendered based on a predetermined stroke color. By overlaying the rendering result of step 303 onto the rendering result of the enlarged model, the stroke effect shown in FIG. 6 can be achieved.
As one implementation, each vertex of the 3D model may be displaced along the direction of the projection of its normal onto the projection plane to obtain the enlarged model. An enlarged model obtained in this way needs no repositioning: its middle part is occluded by the original 3D model, while the enlarged portion around its periphery covers the outside of the original model, forming the line contour of the anime rendering effect. Besides this displacement approach, the 3D model may also be enlarged in other ways, which is not limited by the present disclosure.
As shown in FIG. 5, the circle drawn with a solid line represents the 3D model before enlargement; A, B, and C are three vertices on that model, and f_A, f_B, and f_C denote the directions of the projections, on the projection plane, of the normals at vertices A, B, and C. By displacing each vertex outward by a certain distance along its projection direction, the enlarged 3D model indicated by the dashed circle is obtained. Those skilled in the art will appreciate that the model before enlargement also includes other vertices, which are not shown individually in the figure. The displacements of the vertices may be the same or different. If the displacements are the same, the entire 3D model is enlarged uniformly, that is, every part of the model is enlarged to the same degree. If some vertices have different displacements, a vertex with a larger displacement is enlarged more than a vertex with a smaller one; by enlarging different vertices to different degrees, different stroke effects can be achieved. The displacement of a vertex is related to the desired width of the edge contour: a wider contour requires a larger displacement and a narrower contour a smaller one, that is, the displacement is positively correlated with the width of the rendered edge contour. The desired contour width may be input in advance, and the displacement of each vertex is determined automatically from the input width.
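A minimal sketch of this vertex displacement is given below, assuming the vertices and normals are available as arrays in a common space and that a single outline width is used as the displacement distance; the function and parameter names are assumptions for the example.

```python
import numpy as np

def enlarge_model(vertices, normals, view_dir, outline_width):
    """Displace each vertex along its normal's projection onto the projection plane.

    `vertices` and `normals` are (N, 3) arrays in a common space, `view_dir`
    is the unit viewing direction of the rendering camera, and `outline_width`
    plays the role of the per-vertex displacement distance.
    """
    view_dir = view_dir / np.linalg.norm(view_dir)
    # Remove the component of each normal along the viewing direction so the
    # remaining vector lies in the projection plane.
    projected = normals - (normals @ view_dir)[:, None] * view_dir
    lengths = np.linalg.norm(projected, axis=1, keepdims=True)
    lengths[lengths == 0.0] = 1.0  # guard against normals parallel to the view direction
    offsets = projected / lengths * outline_width
    return vertices + offsets
```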
After the enlarged model is obtained, it may be rendered with the stroke color. For example, the entire enlarged model may be rendered with the stroke color, or only the part of the enlarged model outside the original 3D model may be rendered. The former approach effectively avoids gaps between the stroke rendering result and the flat-shading rendering result, which would otherwise degrade the rendering effect. The stroke color may be a single color or a gradient. It may be specified by the user in advance, randomly selected from multiple colors, or left at a default color. To highlight the difference between the edge contour and the bright area, a color whose difference from the bright area exceeds a preset difference value may be selected as the stroke color.
In some embodiments, the front face of each fragment in the bright area may be rendered based on the color of the 3D model, and the front face of each fragment in the shadow area may be rendered based on the predetermined shadow color. In other words, only the front faces of the fragments in the bright and shadow areas are rendered, not their back faces. This rendering mode is referred to as back-face culling. Further, the back face of each fragment of the enlarged model may be rendered based on the predetermined stroke color; that is, only the back faces of the fragments of the enlarged model are rendered, not their front faces. This rendering mode is referred to as front-face culling.
The front face of a fragment is the face obtained by connecting its vertices along a first direction, and the back face is the face obtained by connecting its vertices along a second direction opposite to the first. For example, the front face of a fragment may be defined as the face formed by connecting its vertices clockwise, and the back face as the face formed by connecting them counterclockwise. Of course, the front and back faces may also be defined in other ways according to actual needs.
As shown in FIG. 7, in the back-face culling mode, a, b, and c denote the three vertices of a fragment and the arrows indicate the order in which the vertices are connected: only the face formed by connecting a, b, and c clockwise is rendered (7-1), and the face formed by connecting them counterclockwise is not rendered (7-2). In the front-face culling mode, only the face formed by connecting a, b, and c counterclockwise is rendered (7-3), and the face formed by connecting them clockwise is not rendered (7-4). Rendering the original 3D model and the enlarged model with different culling modes better realizes the occlusion of the enlarged model by the original model.
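The winding-order test behind these two culling modes can be sketched as follows; the signed-area formulation, the screen-space assumption, and which winding counts as "front" are assumptions for the example rather than part of the original disclosure.

```python
def is_front_facing(p0, p1, p2):
    """Decide facing from the winding order of a fragment's projected vertices.

    p0, p1, p2 are 2D screen-space positions; a positive signed area means the
    vertices appear in counterclockwise order on screen.
    """
    signed_area = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])
    return signed_area > 0

def keep_fragment(p0, p1, p2, cull="back"):
    """Back-face culling (flat-shading pass) or front-face culling (stroking pass)."""
    front = is_front_facing(p0, p1, p2)
    return front if cull == "back" else not front
```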
In some embodiments, before the enlarged model is rendered based on the predetermined stroke color, the enlarged model may be moved in a direction away from the rendering camera. This step prevents the enlarged model and the original 3D model from intersecting where the model is thin, which would otherwise produce incorrect rendering results. As shown in FIG. 8, assuming the distance between the enlarged model and the rendering camera before the move is d1, the enlarged model may be moved to a position at distance d2 from the rendering camera, where d2 is greater than d1.
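One plausible reading of this move, pushing every vertex slightly along the camera-to-vertex direction, is sketched below; the offset value and the per-vertex formulation are assumptions for the example.

```python
import numpy as np

def push_away_from_camera(vertices, camera_pos, offset=0.01):
    """Move the enlarged model slightly away from the rendering camera.

    Each vertex is shifted along the camera-to-vertex direction by `offset`,
    an illustrative small distance; this corresponds to increasing d1 to d2.
    """
    directions = vertices - camera_pos
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return vertices + directions * offset
```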
The flat shading and stroking procedures described above may be implemented with different rendering passes. The two passes may be executed in parallel, or flat shading may first be performed in a first rendering pass and stroking performed in a second rendering pass after the flat shading is completed.
The embodiments of the present disclosure can be applied to scenarios such as anime-style 3D games, anime-style 3D stickers for live streamers, or anime-style 3D virtual streamer rendering. Taking anime-style 3D virtual streamer rendering as an example, the 3D model is a 3D model of the streamer. During a live stream, images of the streamer can be acquired in real time, and 3D modeling of the streamer can be performed based on these images to obtain the streamer's 3D model. The rendering method of the embodiments of the present disclosure is then used to render the streamer's 3D model in real time, producing an anime-style virtual streamer image. Since the rendering method of the embodiments of the present disclosure requires no post-processing and offers good real-time performance, it is well suited to these scenarios.
As shown in FIG. 9A, an embodiment of the present disclosure further provides another rendering method for a 3D model. The method includes steps 901 to 902.
Step 901: in a first rendering pass, render the bright area of the 3D model based on the color of the 3D model, and render the shadow area of the 3D model based on a predetermined shadow color.
Step 902: in a second rendering pass, enlarge the 3D model to obtain an enlarged model and render the enlarged model based on a predetermined stroke color, where the middle area of the enlarged model is occluded by the 3D model.
The embodiments of the present disclosure perform flat shading and stroking in two separate rendering passes, where the flat shading pass renders the 3D model and the stroking pass renders the enlarged model. Because both passes render directly with colors that have already been determined, there is no need to first render the 3D model into a 2D image and then post-process it, which improves rendering efficiency.
The processing in the first pass is referred to as flat shading, and the processing in the second pass as stroking. FIG. 9B shows the specific flow of flat shading in the first rendering pass and of model stroking in the second rendering pass.
Flat shading in the first rendering pass includes steps (1-1) to (1-6); an illustrative code sketch of this pass is given after the steps:
(1-1) Obtain the inputs for rendering, such as the shadow map of the 3D model, the illumination direction, the normal map of the 3D model, and the rendering camera position.
(1-2) Determine, from the illumination direction, the normal map, and the rendering camera position, the first light-dark distribution of the 3D model from the rendering camera's perspective caused by the illumination direction.
(1-3) Determine, from the shadow map, the second light-dark distribution on the 3D model caused by shadow occlusion.
(1-4) Superimpose the second light-dark distribution caused by shadow occlusion and the first light-dark distribution caused by the illumination direction to obtain the final light-dark distribution (that is, the light-dark distribution of the 3D model from the rendering camera's perspective).
(1-5) Use a predefined brightness threshold to divide the final light-dark distribution into a bright area and a shadow area.
(1-6) Shade the bright area and the shadow area separately: draw the original diffuse-map color of the 3D model in the bright area; in the shadow area, adjust the original diffuse-map color according to the predefined shadow color and then draw the adjusted color, thereby achieving the two-tone shading effect of the anime style.
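The sketch below strings the above steps together into one per-fragment routine. It is illustrative only: the dictionary layout of `fragments`, the multiplicative superposition in step (1-4), and the additive shadow adjustment in step (1-6) are assumptions chosen from the examples given earlier, not the only possible choices.

```python
def flat_shading_pass(fragments, threshold, shadow_color):
    """Illustrative per-fragment flat-shading pass following steps (1-1) to (1-6).

    `fragments` maps a fragment id to a dict holding a lighting-based
    brightness (`lit`), a shadow-map-based brightness (`shadowed`), and a
    diffuse color (`diffuse`), all precomputed from the inputs of step (1-1).
    """
    output = {}
    for frag_id, frag in fragments.items():
        # Steps (1-2) to (1-4): combine the two light-dark distributions.
        brightness = frag["lit"] * frag["shadowed"]
        # Step (1-5): threshold into bright and shadow areas.
        if brightness >= threshold:
            # Step (1-6), bright area: keep the diffuse-map color.
            color = frag["diffuse"]
        else:
            # Step (1-6), shadow area: adjust the diffuse color by the shadow color.
            color = tuple(min(d + s, 1.0) for d, s in zip(frag["diffuse"], shadow_color))
        output[frag_id] = color
    return output
```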
Model stroking in the second rendering pass includes steps (2-1) to (2-5) (an illustrative vertex-level sketch follows the steps):
(2-1) Obtain the inputs for rendering, such as the vertex positions, the projection matrix of the rendering camera, the model matrix, the skeleton data, and the vertex normals of the three-dimensional model. The projection matrix is the transformation matrix between the rendering camera coordinate system and the world coordinate system; the model matrix describes the translation and rotation of the three-dimensional model; the skeleton data represents the deformation of a three-dimensional model that has a skeleton (for example, a character model); for a three-dimensional model without a skeleton (for example, a table), the inputs may omit the skeleton data.
(2-2) Compute the position and normal of each vertex of the three-dimensional model in the world coordinate system from the vertex positions, the vertex normals, and the model matrix. Because the position of the light source is generally expressed in the world coordinate system, the purpose of this step is to transform the three-dimensional model into the world coordinate system so that the model and the light source are in the same coordinate system. Instead of transforming the three-dimensional model into the world coordinate system, the light source may be transformed into the rendering camera coordinate system, or both the three-dimensional model and the rendering camera may be transformed into another coordinate system, as long as the two are in the same coordinate system.
(2-3) Model enlargement: using the positions and normals of the three-dimensional model together with the projection matrix of the rendering camera, move each vertex of the model outward along the projection of its normal onto the projection plane, so that the model is enlarged on the projection plane, that is, the model becomes "one size larger".
(2-4) Move the enlarged model slightly in the direction away from the rendering camera. This step avoids the enlarged model and the original model penetrating each other where the model is thin, which would otherwise cause rendering artifacts.
(2-5) Render the enlarged model with the pre-specified stroke color using front-face culling (the shadow area and the bright area of the original three-dimensional model are rendered using back-face culling). In this way, the middle part of the enlarged model is occluded by the original three-dimensional model, while the enlarged rim covers the outside of the original three-dimensional model, forming the line contour of the anime-style rendering effect.
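The vertex-level core of steps (2-2) to (2-5) can be sketched as follows. The clip-space formulation, the outline_width and depth_push parameters, and reusing the view-projection matrix to transform the normal are simplifying assumptions for illustration only; a production shader would typically handle non-uniform scaling of normals separately.

    import numpy as np

    def outline_vertex(position_world, normal_world, view_proj,
                       outline_width=0.02, depth_push=0.001):
        # Clip-space position of the original vertex.
        pos = view_proj @ np.append(position_world, 1.0)
        # Step (2-3): take the on-screen (xy) direction of the projected normal.
        nrm = view_proj @ np.append(normal_world, 0.0)
        direction = nrm[:2]
        length = np.linalg.norm(direction)
        if length > 1e-6:
            # Push the vertex outward on the projection plane to enlarge the
            # silhouette; scaling by pos[3] keeps the width roughly constant
            # in screen space.
            pos[:2] += direction / length * outline_width * pos[3]
        # Step (2-4): nudge the enlarged model away from the camera so that it does
        # not penetrate the original model where the mesh is thin.
        pos[2] += depth_push * pos[3]
        # Step (2-5): this position is then drawn with the stroke color and
        # front-face culling.
        return pos

In an actual renderer the same computation would run per vertex in the vertex stage of the second pass.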
After the rendering of the above two passes, the output is a celluloid-style anime rendering effect with edge contours and secondary shading.
Besides using the normal information of the vertices, the stroke effect in this flow may also use tangents to indirectly compute smoothly transitioning normals before the enlargement, which makes the strokes smoother and avoids strokes breaking at sharp corners; in either case, the core idea is to draw a reversed model somewhat larger than the original model to simulate the stroke effect. Compared with post-processing methods, the result rendered by the method of the embodiments of the present disclosure is already in the anime drawing style, saving the steps and time of post-processing.
For details of this embodiment, refer to the foregoing embodiments of the method for rendering a three-dimensional model, which are not repeated here.
As shown in FIG. 10, an embodiment of the present disclosure further provides another method for rendering a three-dimensional model, and the method includes steps 1001 to 1002.
Step 1001: enlarge the three-dimensional model to obtain an enlarged model, where the middle area of the enlarged model is occluded by the three-dimensional model.
Step 1002: render the enlarged model based on a predetermined stroke color.
The embodiments of the present disclosure render the enlarged model directly with the stroke color. Compared with the conventional rendering approach in which the entire three-dimensional model is first rendered into a two-dimensional image and the edges are then extracted through post-processing, the embodiments of the present disclosure can effectively improve rendering efficiency.
In some embodiments, enlarging the three-dimensional model to obtain the enlarged model includes: displacing each vertex of the three-dimensional model along the direction of the projection vector of the vertex's normal onto the projection plane, to obtain the enlarged model. The middle area of an enlarged model obtained in this way is directly occluded by the three-dimensional model before enlargement without any further displacement, so the processing complexity is low.
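A minimal sketch of this displacement is given below, assuming the projection plane is perpendicular to the rendering camera's viewing direction and using an illustrative displacement scale; neither assumption comes from the disclosure.

    import numpy as np

    def displace_along_projected_normal(vertex, normal, view_dir, scale=0.01):
        # Assume the projection plane is perpendicular to the viewing direction.
        view_dir = view_dir / np.linalg.norm(view_dir)
        # Removing the component of the normal along the viewing direction leaves
        # the normal's projection onto the projection plane.
        projected = normal - np.dot(normal, view_dir) * view_dir
        length = np.linalg.norm(projected)
        if length < 1e-6:
            # The normal points straight toward or away from the camera, so it has
            # no usable projection; leave the vertex in place.
            return np.asarray(vertex, dtype=float)
        return np.asarray(vertex, dtype=float) + projected / length * scale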
In some embodiments, before rendering the enlarged model based on the predetermined stroke color, the method further includes: moving the enlarged model in a direction away from the rendering camera. This step avoids the enlarged model and the three-dimensional model penetrating each other where the model is thin, which would otherwise cause rendering artifacts.
In some embodiments, the method further includes: rendering the bright area of the three-dimensional model based on the color of the three-dimensional model, and rendering the shadow area of the three-dimensional model based on a predetermined shadow color. The embodiments of the present disclosure do not need to first draw the entire three-dimensional model into a two-dimensional image; the final rendering result is obtained directly. Compared with a rendering approach that first draws the three-dimensional model to be rendered into a two-dimensional image and then applies post-processing, the rendering approach of the embodiments of the present disclosure is more efficient and better suited to real-time rendering scenarios.
In some embodiments, rendering the bright area of the three-dimensional model based on the color of the three-dimensional model and rendering the shadow area of the three-dimensional model based on the predetermined shadow color includes: rendering the front face of each fragment in the bright area based on the color of the three-dimensional model, and rendering the front face of each fragment in the shadow area based on the predetermined shadow color; rendering the enlarged model based on the predetermined stroke color includes: rendering the back face of each fragment in the enlarged model based on the predetermined stroke color. The front face of a fragment is the face obtained by connecting a plurality of vertices on the fragment along a first direction, the back face of a fragment is the face obtained by connecting a plurality of vertices on the fragment along a second direction, and the first direction is opposite to the second direction. In this way, the influence of the stroke color on the colors of the bright area and the shadow area of the three-dimensional model can be reduced, improving the rendering effect.
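One way the front/back distinction based on vertex winding can be evaluated in screen space is sketched below; the counter-clockwise-equals-front convention and the two pass names are assumptions for illustration, since graphics APIs allow either winding to be declared the front face.

    def is_front_facing(p0, p1, p2):
        # Signed area of the triangle projected to 2D screen coordinates.
        signed_area = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
        return signed_area > 0.0  # counter-clockwise winding treated as "front"

    def keep_face(p0, p1, p2, render_pass):
        # Model pass (bright/shadow areas): keep front faces only (back-face culling).
        # Stroke pass (enlarged model): keep back faces only (front-face culling).
        front = is_front_facing(p0, p1, p2)
        return front if render_pass == "model" else not front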
For details of this embodiment, refer to the foregoing embodiments of the method for rendering a three-dimensional model, which are not repeated here.
The present disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment and then using various vision-related algorithms to detect or recognize relevant features, states, and attributes of the target object, an AR effect combining the virtual and the real that matches the specific application is obtained. By way of example, the target object may involve a face, limbs, gestures, or actions related to the human body, or markers and signs related to objects, or sand tables, display areas, or display items related to venues or places. The vision-related algorithms may involve visual positioning, SLAM, three-dimensional reconstruction, image registration, background segmentation, key-point extraction and tracking of objects, and pose or depth detection of objects. Specific applications may involve not only interactive scenarios such as guided tours, navigation, explanation, reconstruction, and overlaid display of virtual effects related to real scenes or objects, but also special-effects processing related to people, such as makeup beautification, body beautification, special-effect display, and virtual model display. The relevant features, states, and attributes of the target object may be detected or recognized through a convolutional neural network, which is a network model obtained through model training based on a deep learning framework.
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
As shown in FIG. 11, an embodiment of the present disclosure further provides an apparatus for rendering a three-dimensional model, the apparatus including: a determining module 1101, configured to determine a light-dark distribution of a three-dimensional model under the viewing angle of a rendering camera, the light-dark distribution representing the brightness value of each fragment on the three-dimensional model; a dividing module 1102, configured to divide the three-dimensional model into a bright area and a shadow area based on the light-dark distribution; and a rendering module 1103, configured to render the bright area based on the color of the three-dimensional model and render the shadow area based on a predetermined shadow color.
As shown in FIG. 12, an embodiment of the present disclosure further provides an apparatus for rendering a three-dimensional model, the apparatus including: a first rendering module 1201, configured to, in a first rendering pass, render a bright area of a three-dimensional model based on the color of the three-dimensional model and render a shadow area of the three-dimensional model based on a predetermined shadow color; and a second rendering module 1202, configured to, in a second rendering pass, enlarge the three-dimensional model to obtain an enlarged model and render the enlarged model based on a predetermined stroke color, where the middle area of the enlarged model is occluded by the three-dimensional model.
As shown in FIG. 13, an embodiment of the present disclosure further provides an apparatus for rendering a three-dimensional model, the apparatus including: an enlarging module 1301, configured to enlarge a three-dimensional model to obtain an enlarged model, where the middle area of the enlarged model is occluded by the three-dimensional model; and a rendering module 1302, configured to render the enlarged model based on a predetermined stroke color.
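Purely as a hypothetical illustration of how modules such as those of FIG. 11 might be wired together in software, the following sketch is offered; every class, attribute, and parameter name in it is invented for this illustration and does not appear in the disclosure.

    class ModelRenderer:
        def __init__(self, determine_module, divide_module, render_module):
            self.determine = determine_module   # plays the role of module 1101
            self.divide = divide_module         # plays the role of module 1102
            self.render = render_module         # plays the role of module 1103

        def run(self, model, camera, light, shadow_color):
            # Determine the light-dark distribution, split it into bright and
            # shadow areas, then shade each area with its own color.
            distribution = self.determine(model, camera, light)
            bright_area, shadow_area = self.divide(model, distribution)
            self.render(bright_area, color=model.diffuse_color)
            self.render(shadow_area, color=shadow_color)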
In some embodiments, the functions or modules of the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the method embodiments above; for their specific implementation, refer to the descriptions of the method embodiments above, which are not repeated here for brevity.
The embodiments of this specification further provide a computer device, which includes at least a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the method described in any of the foregoing embodiments.
FIG. 14 shows a more specific schematic diagram of the hardware structure of a computing device provided by the embodiments of this specification. The device may include a processor 1401, a memory 1402, an input/output interface 1403, a communication interface 1404, and a bus 1405, where the processor 1401, the memory 1402, the input/output interface 1403, and the communication interface 1404 are communicatively connected to one another within the device through the bus 1405.
The processor 1401 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute relevant programs to implement the technical solutions provided by the embodiments of this specification. The processor 1401 may further include a graphics card, which may be an Nvidia Titan X graphics card, a 1080Ti graphics card, or the like.
The memory 1402 may be implemented in the form of a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1402 may store an operating system and other application programs; when the technical solutions provided by the embodiments of this specification are implemented through software or firmware, the relevant program code is stored in the memory 1402 and invoked for execution by the processor 1401.
The input/output interface 1403 is configured to connect an input/output module to implement information input and output. The input/output module may be configured in the device as a component (not shown in the figure) or may be externally connected to the device to provide corresponding functions. The input devices may include a keyboard, a mouse, a touch screen, a microphone, and various sensors, and the output devices may include a display, a speaker, a vibrator, an indicator light, and the like.
The communication interface 1404 is configured to connect a communication module (not shown in the figure) to implement communication and interaction between this device and other devices. The communication module may communicate in a wired manner (for example, USB or a network cable) or in a wireless manner (for example, a mobile network, WiFi, or Bluetooth).
The bus 1405 includes a path for transferring information between the components of the device (for example, the processor 1401, the memory 1402, the input/output interface 1403, and the communication interface 1404).
It should be noted that, although the above device only shows the processor 1401, the memory 1402, the input/output interface 1403, the communication interface 1404, and the bus 1405, in a specific implementation the device may further include other components necessary for normal operation. In addition, those skilled in the art can understand that the above device may also include only the components necessary to implement the solutions of the embodiments of this specification, rather than all the components shown in the figure.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in any of the foregoing embodiments.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
From the description of the above implementations, those skilled in the art can clearly understand that the embodiments of this specification can be implemented by means of software plus a necessary general-purpose hardware platform. Based on such an understanding, the technical solutions of the embodiments of this specification, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in certain parts of the embodiments, of this specification.
The system, apparatus, module, or unit described in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer, and the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail transceiver, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus embodiments are described relatively simply because they are substantially similar to the method embodiments, and for relevant parts reference may be made to the corresponding descriptions of the method embodiments. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separated, and when implementing the solutions of the embodiments of this specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware. Some or all of the modules may also be selected according to actual needs to achieve the purpose of the solution of a particular embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The above are only specific implementations of the embodiments of this specification. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the embodiments of this specification, and these improvements and refinements should also be regarded as falling within the protection scope of the embodiments of this specification.

Claims (19)

1. A method for rendering a three-dimensional model, comprising:
    determining a light-dark distribution of a three-dimensional model under a viewing angle of a rendering camera, the light-dark distribution representing a brightness value of each fragment on the three-dimensional model;
    dividing the three-dimensional model into a bright area and a shadow area based on the light-dark distribution; and
    rendering the bright area based on a color of the three-dimensional model, and rendering the shadow area based on a predetermined shadow color.
2. The method according to claim 1, wherein determining the light-dark distribution of the three-dimensional model under the viewing angle of the rendering camera comprises:
    determining a first light-dark distribution of the three-dimensional model under the viewing angle of the rendering camera based on a lighting direction, a normal map of the three-dimensional model, and a position of the rendering camera, the normal map representing a normal of each vertex on the three-dimensional model;
    determining a second light-dark distribution of the three-dimensional model under the viewing angle of the rendering camera based on a shadow map of the three-dimensional model; and
    determining the light-dark distribution of the three-dimensional model under the viewing angle of the rendering camera based on the first light-dark distribution and the second light-dark distribution.
3. The method according to claim 2, wherein determining the light-dark distribution of the three-dimensional model under the viewing angle of the rendering camera based on the first light-dark distribution and the second light-dark distribution comprises:
    for each fragment of the three-dimensional model under the viewing angle of the rendering camera, determining a first brightness value of the fragment based on the first light-dark distribution, and determining a second brightness value of the fragment based on the second light-dark distribution; and
    determining a brightness value of the fragment according to the first brightness value and the second brightness value.
4. The method according to claim 1, wherein dividing the three-dimensional model into the bright area and the shadow area based on the light-dark distribution comprises:
    determining an area on the three-dimensional model whose brightness values satisfy a first preset brightness condition as the bright area; and
    determining an area on the three-dimensional model whose brightness values satisfy a second preset brightness condition as the shadow area.
5. The method according to claim 4, wherein the second preset brightness condition comprises the brightness value being less than or equal to a first brightness threshold;
    the predetermined shadow color comprises a plurality of shadow colors respectively corresponding to a plurality of different brightness sub-intervals, the different brightness sub-intervals being obtained by dividing the brightness interval in which the brightness value is less than or equal to the first brightness threshold; and
    rendering the shadow area based on the predetermined shadow color comprises:
    determining the brightness sub-interval to which the brightness value of each fragment in the shadow area belongs; and
    rendering each fragment of the shadow area according to the shadow color corresponding to the brightness sub-interval to which the brightness value of the fragment belongs.
6. The method according to claim 4, wherein the second preset brightness condition comprises the brightness value being less than or equal to a second brightness threshold;
    the predetermined shadow color comprises a plurality of shadow colors respectively corresponding to a plurality of reference brightness values in a lookup table, the plurality of reference brightness values in the lookup table all being less than or equal to the second brightness threshold; and
    rendering the shadow area based on the predetermined shadow color comprises:
    looking up, from the lookup table, the shadow colors corresponding to the reference brightness values respectively matching the brightness values of the fragments in the shadow area; and
    rendering each fragment of the shadow area according to the shadow color corresponding to the reference brightness value matching the fragment.
7. The method according to claim 1, wherein rendering the bright area based on the color of the three-dimensional model comprises:
    determining a color of the bright area based on a diffuse map corresponding to the three-dimensional model; and
    rendering the bright area based on the color of the bright area.
8. The method according to claim 1, wherein rendering the shadow area based on the predetermined shadow color comprises:
    determining a color of the shadow area based on a diffuse map corresponding to the three-dimensional model;
    correcting the color of the shadow area based on the predetermined shadow color to obtain a corrected color of the shadow area; and
    rendering the shadow area based on the corrected color.
9. The method according to any one of claims 1 to 8, further comprising:
    enlarging the three-dimensional model to obtain an enlarged model, a middle area of the enlarged model being occluded by the three-dimensional model; and
    rendering the enlarged model based on a predetermined stroke color.
10. The method according to claim 9, wherein enlarging the three-dimensional model to obtain the enlarged model comprises:
    displacing each vertex on the three-dimensional model along a direction of a projection vector of a normal of the vertex on a projection plane, to obtain the enlarged model.
11. The method according to claim 9, wherein rendering the bright area based on the color of the three-dimensional model and rendering the shadow area based on the predetermined shadow color comprises:
    rendering a front face of each fragment in the bright area based on the color of the three-dimensional model, and rendering a front face of each fragment in the shadow area based on the predetermined shadow color;
    rendering the enlarged model based on the predetermined stroke color comprises:
    rendering a back face of each fragment in the enlarged model based on the predetermined stroke color;
    wherein a front face of a fragment is a face obtained by connecting a plurality of vertices on the fragment along a first direction, a back face of a fragment is a face obtained by connecting a plurality of vertices on the fragment along a second direction, and the first direction is opposite to the second direction.
12. The method according to claim 9, wherein, before rendering the enlarged model based on the predetermined stroke color, the method further comprises:
    moving the enlarged model in a direction away from the rendering camera.
13. A method for rendering a three-dimensional model, comprising:
    in a first rendering pass, rendering a bright area of a three-dimensional model based on a color of the three-dimensional model, and rendering a shadow area of the three-dimensional model based on a predetermined shadow color; and
    in a second rendering pass, enlarging the three-dimensional model to obtain an enlarged model, and rendering the enlarged model based on a predetermined stroke color, a middle area of the enlarged model being occluded by the three-dimensional model.
14. The method according to claim 13, wherein rendering the bright area of the three-dimensional model based on the color of the three-dimensional model and rendering the shadow area of the three-dimensional model based on the predetermined shadow color comprises:
    rendering a front face of each fragment in the bright area based on the color of the three-dimensional model, and rendering a front face of each fragment in the shadow area based on the predetermined shadow color;
    rendering the enlarged model based on the predetermined stroke color comprises:
    rendering a back face of each fragment in the enlarged model based on the predetermined stroke color;
    wherein a front face of a fragment is a face obtained by connecting a plurality of vertices on the fragment along a first direction, a back face of a fragment is a face obtained by connecting a plurality of vertices on the fragment along a second direction, and the first direction is opposite to the second direction.
15. An apparatus for rendering a three-dimensional model, comprising:
    a determining module, configured to determine a light-dark distribution of a three-dimensional model under a viewing angle of a rendering camera, the light-dark distribution representing a brightness value of each fragment on the three-dimensional model;
    a dividing module, configured to divide the three-dimensional model into a bright area and a shadow area based on the light-dark distribution; and
    a rendering module, configured to render the bright area based on a color of the three-dimensional model and render the shadow area based on a predetermined shadow color.
16. An apparatus for rendering a three-dimensional model, comprising:
    a first rendering module, configured to, in a first rendering pass, render a bright area of a three-dimensional model based on a color of the three-dimensional model and render a shadow area of the three-dimensional model based on a predetermined shadow color; and
    a second rendering module, configured to, in a second rendering pass, enlarge the three-dimensional model to obtain an enlarged model and render the enlarged model based on a predetermined stroke color, a middle area of the enlarged model being occluded by the three-dimensional model.
17. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 14.
18. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method according to any one of claims 1 to 14.
19. A computer program product, comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 14.
PCT/CN2022/125043 2021-10-18 2022-10-13 Rendering of three-dimensional model WO2023066121A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111211776.4 2021-10-18
CN202111211776.4A CN113658316B (en) 2021-10-18 2021-10-18 Rendering method and device of three-dimensional model, storage medium and computer equipment

Publications (1)

Publication Number Publication Date
WO2023066121A1 true WO2023066121A1 (en) 2023-04-27

Family

ID=78484203

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/125043 WO2023066121A1 (en) 2021-10-18 2022-10-13 Rendering of three-dimensional model

Country Status (2)

Country Link
CN (2) CN113658316B (en)
WO (1) WO2023066121A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658316B (en) * 2021-10-18 2022-03-08 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment
CN115082607A (en) * 2022-05-26 2022-09-20 网易(杭州)网络有限公司 Virtual character hair rendering method and device, electronic equipment and storage medium
CN117689773A (en) * 2024-01-31 2024-03-12 合肥中科类脑智能技术有限公司 Mapping method, mapping device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001101441A (en) * 1999-09-28 2001-04-13 Square Co Ltd Method and device for rendering, game system and computer readable recording medium storing program for rendering three-dimensional model
US6847361B1 (en) * 1999-09-10 2005-01-25 Namco, Ltd. Image generation system and program
CN106683199A (en) * 2015-11-06 2017-05-17 三星电子株式会社 3D graphic rendering method and apparatus
CN111127623A (en) * 2019-12-25 2020-05-08 上海米哈游天命科技有限公司 Model rendering method and device, storage medium and terminal
CN112933599A (en) * 2021-04-08 2021-06-11 腾讯科技(深圳)有限公司 Three-dimensional model rendering method, device, equipment and storage medium
CN113658316A (en) * 2021-10-18 2021-11-16 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2227502C (en) * 1997-01-31 2006-06-13 Microsoft Corporation Method and system for determining and or using illumination maps in rendering images
US10762695B1 (en) * 2019-02-21 2020-09-01 Electronic Arts Inc. Systems and methods for ray-traced shadows of transparent objects
CN110196746B (en) * 2019-05-30 2022-09-30 网易(杭州)网络有限公司 Interactive interface rendering method and device, electronic equipment and storage medium
CN111862254B (en) * 2020-07-17 2023-06-16 福建天晴数码有限公司 Cross-rendering platform-based material rendering method and system
CN112138386A (en) * 2020-09-24 2020-12-29 网易(杭州)网络有限公司 Volume rendering method and device, storage medium and computer equipment
CN112316420B (en) * 2020-11-05 2024-03-22 网易(杭州)网络有限公司 Model rendering method, device, equipment and storage medium
CN113223131B (en) * 2021-04-16 2022-05-31 完美世界(北京)软件科技发展有限公司 Model rendering method and device, storage medium and computing equipment
CN113240783B (en) * 2021-05-27 2023-06-27 网易(杭州)网络有限公司 Stylized rendering method and device, readable storage medium and electronic equipment
CN113256781B (en) * 2021-06-17 2023-05-30 腾讯科技(深圳)有限公司 Virtual scene rendering device, storage medium and electronic equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6847361B1 (en) * 1999-09-10 2005-01-25 Namco, Ltd. Image generation system and program
JP2001101441A (en) * 1999-09-28 2001-04-13 Square Co Ltd Method and device for rendering, game system and computer readable recording medium storing program for rendering three-dimensional model
CN106683199A (en) * 2015-11-06 2017-05-17 三星电子株式会社 3D graphic rendering method and apparatus
CN111127623A (en) * 2019-12-25 2020-05-08 上海米哈游天命科技有限公司 Model rendering method and device, storage medium and terminal
CN112933599A (en) * 2021-04-08 2021-06-11 腾讯科技(深圳)有限公司 Three-dimensional model rendering method, device, equipment and storage medium
CN113658316A (en) * 2021-10-18 2021-11-16 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment
CN114494570A (en) * 2021-10-18 2022-05-13 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment

Also Published As

Publication number Publication date
CN114494570A (en) 2022-05-13
CN113658316B (en) 2022-03-08
CN113658316A (en) 2021-11-16

Similar Documents

Publication Publication Date Title
WO2023066121A1 (en) Rendering of three-dimensional model
JP6612266B2 (en) 3D model rendering method and apparatus, and terminal device
JP7386153B2 (en) Rendering methods and terminals that simulate lighting
CN107993216B (en) Image fusion method and equipment, storage medium and terminal thereof
CN115699114B (en) Method and apparatus for image augmentation for analysis
Shen et al. Depth-aware image seam carving
CN107644453B (en) Rendering method and system based on physical coloring
US9563959B2 (en) Image processor, lighting processor and method therefor
Li et al. Physically-based editing of indoor scene lighting from a single image
US10347052B2 (en) Color-based geometric feature enhancement for 3D models
US8854392B2 (en) Circular scratch shader
EP3485464B1 (en) Computer system and method for improved gloss representation in digital images
CN111127623A (en) Model rendering method and device, storage medium and terminal
Dos Santos et al. Real time ray tracing for augmented reality
JP2020198066A (en) Systems and methods for augmented reality applications
CN105550973B (en) Graphics processing unit, graphics processing system and anti-aliasing processing method
Boom et al. Interactive light source position estimation for augmented reality with an RGB‐D camera
CN112446943A (en) Image rendering method and device and computer readable storage medium
US20140292754A1 (en) Easy selection threshold
US11804008B2 (en) Systems and methods of texture super sampling for low-rate shading
JP6852224B2 (en) Sphere light field rendering method in all viewing angles
CN112516595B (en) Magma rendering method, device, equipment and storage medium
CN115965735B (en) Texture map generation method and device
CN113888398B (en) Hair rendering method and device and electronic equipment
CN116363288A (en) Rendering method and device of target object, storage medium and computer equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22882731

Country of ref document: EP

Kind code of ref document: A1