WO2022041548A1 - Model rendering method and device - Google Patents

Model rendering method and device

Info

Publication number
WO2022041548A1
WO2022041548A1 (PCT/CN2020/133862)
Authority
WO
WIPO (PCT)
Prior art keywords
model
edge
initial model
pixel
texture
Prior art date
Application number
PCT/CN2020/133862
Other languages
French (fr)
Chinese (zh)
Inventor
Zhao Xi (赵溪)
Xu Dan (徐丹)
Original Assignee
Perfect World (Beijing) Software Technology Development Co., Ltd. (完美世界(北京)软件科技发展有限公司)
Priority date
Filing date
Publication date
Application filed by Perfect World (Beijing) Software Technology Development Co., Ltd. (完美世界(北京)软件科技发展有限公司)
Publication of WO2022041548A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T15/04 Texture mapping
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering

Definitions

  • the present disclosure relates to the field of computers, and in particular, to a model rendering method and device.
  • Ink painting style is a traditional Chinese painting technique.
  • Applying ink-wash effects in stylized rendering gives the pictures a unique, far-reaching artistic quality.
  • The current method divides the stroke and the central area of the model into two independent pieces and adds them together directly, making the central area transparent so that it does not occlude models at greater depth.
  • Rendering ink-wash-style images this way causes obvious line-interspersing problems while a complex model moves, as well as obvious jagged inner edges, resulting in poor-quality rendered images.
  • the present disclosure provides a model rendering method and device, so as to at least solve the technical problem of poor image quality obtained by rendering a model in ink style in the related art.
  • an embodiment of the present disclosure provides a method for rendering a model, including:
  • a target model is obtained by mixing the pixels on the edge of the intermediate model whose transparency is less than a preset threshold with the color of the middle of the intermediate model.
  • an embodiment of the present disclosure also provides a model rendering device, including:
  • a stroke module, configured to stroke the edge of the initial model according to the observation direction and the normal map of the initial model to obtain a stroked model, wherein the observation direction is the direction in which the virtual camera observes the initial model;
  • a rendering module, configured to render the ink texture to the middle of the initial model according to the background color of the initial model and the transparency in the texture map, to obtain a rendering model;
  • an overlay module, configured to superimpose the stroke model and the rendering model to obtain an intermediate model;
  • a mixing module, configured to mix the pixels on the edge of the intermediate model whose transparency is less than the preset threshold with the color of the middle of the intermediate model to obtain the target model.
  • an embodiment of the present disclosure further provides a storage medium, where the storage medium includes a stored program, and the above method is executed when the program runs.
  • an embodiment of the present disclosure also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and running on the processor, where the processor executes the above method through the computer program.
  • The beneficial effects of the present disclosure include at least the following. The edge of the initial model is stroked using the observation direction and the normal map of the initial model to obtain a stroked model, where the observation direction is the direction in which the virtual camera observes the initial model; the ink texture is rendered to the middle of the initial model according to the background color of the initial model and the transparency in the texture map to obtain a rendering model; the stroke model and the rendering model are superimposed to obtain an intermediate model; and the pixels on the edge of the intermediate model whose transparency is below a preset threshold are mixed with the middle color of the intermediate model to obtain the target model. The edge of the model and the middle of the model are thus rendered separately.
  • Because the edge is stroked according to the viewing direction and the normal map, the texture fits the edge of the model better.
  • For the interior, the background color is combined with the transparency parameters in the texture map for rendering, which avoids the color penetration and interspersing caused by the transparent part of the ink texture. The incompletely filled pixels on the model's edge are then mixed with the colors in the middle of the model to obtain an ink-style target model, which eliminates inner-edge jaggedness on the target model and thereby improves the quality of the image obtained after rendering the model in ink style, solving the technical problem of poor image quality in the related art.
  • FIG. 1 is a schematic diagram of a hardware environment of a model rendering method according to an embodiment of the present disclosure
  • FIG. 2 is a flowchart of an optional model rendering method according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a stroke effect with different parameters according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of transparent processing in the middle of a model according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a linear blend of edge and middle according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of an optional model rendering apparatus according to an embodiment of the present disclosure.
  • FIG. 7 is a structural block diagram of an electronic device according to an embodiment of the present disclosure.
  • an embodiment of a method for rendering a model is provided.
  • the above model rendering method may be applied to the hardware environment formed by the terminal 101 and the server 103 as shown in FIG. 1 .
  • The server 103 is connected to the terminal 101 through the network and can be used to provide services (such as game services and application services) for the terminal or for a client installed on the terminal; a database may be provided on the server or independently of it.
  • the above-mentioned network includes but is not limited to: a wide area network, a metropolitan area network or a local area network, and the terminal 101 is not limited to a PC, a mobile phone, a tablet computer, and the like.
  • the rendering method of the model in the embodiment of the present disclosure may be executed by the server 103, may also be executed by the terminal 101, or may be executed jointly by the server 103 and the terminal 101.
  • the rendering method of the model in the embodiment of the present disclosure executed by the terminal 101 may also be executed by a client installed on the terminal 101 .
  • FIG. 2 is a flowchart of an optional model rendering method according to an embodiment of the present disclosure. As shown in FIG. 2 , the method may include the following steps:
  • Step S202 the edge of the initial model is stroked according to the observation direction and the normal map of the initial model to obtain a stroked model, wherein the observation direction is the direction in which the virtual camera observes the initial model;
  • Step S204 rendering the ink texture to the middle of the initial model according to the background color of the initial model and the transparency in the texture map, to obtain a rendering model
  • Step S206 superimposing the stroke model and the rendering model to obtain an intermediate model
  • Step S208 mixing the pixels whose transparency on the edge of the intermediate model is less than a preset threshold value and the color of the middle of the intermediate model to obtain the target model.
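  • As a rough illustration of how steps S202 through S208 might combine for a single pixel, the following is a minimal Python sketch. The function name, the parameter defaults, and the way the superposition and mixing steps are collapsed into a single blend are assumptions made for illustration; the disclosure's actual shader performs these steps over full texture maps.

```python
def render_target_pixel(normal, incident, ink_rgb, ink_alpha,
                        background_rgb, stroke_rgb,
                        bias=0.0, scale=1.0, power=2.0, threshold=1.0):
    # S202: Fresnel-style edge degree from the normal and viewing direction.
    n_dot_i = sum(n * i for n, i in zip(normal, incident))
    edge = bias + scale * (1.0 + n_dot_i) ** power

    # S204: lerp the background color toward the ink texture by its alpha.
    middle = tuple(b + (t - b) * ink_alpha
                   for b, t in zip(background_rgb, ink_rgb))

    # S206/S208: a fully covered edge pixel keeps the stroke color; an
    # incompletely filled one is blended with the middle color so the
    # inner edge transitions smoothly.
    if edge >= threshold:
        return stroke_rgb
    return tuple(m + (s - m) * edge for s, m in zip(stroke_rgb, middle))
```

With the normal facing the camera the edge term vanishes and the pixel takes the blended middle color; at the silhouette the edge term reaches the threshold and the stroke color wins.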
  • The edge of the model and the middle of the model are rendered separately: the edge is stroked according to the viewing direction and the normal map of the model, so that the texture fits the edge of the model better, while for the interior the background color is combined with the transparency parameters in the texture map for rendering, avoiding the color penetration and interspersing caused by the transparent part of the ink texture.
  • The incompletely filled edge pixels are then mixed with the middle colors to obtain an ink-wash-style target model, which eliminates inner-edge jaggedness and thereby improves the quality of the image obtained by rendering the model in ink-wash style, solving the technical problem of poor image quality after such rendering.
  • The above model rendering method may be applied to, but is not limited to, a scene in which a virtual model in an application is rendered in ink style.
  • the above-mentioned applications may include, but are not limited to, any type of applications, such as game applications, educational applications, multimedia applications, instant communication applications, shopping applications, community applications, life tool applications, browser applications, and the like.
  • the above-mentioned virtual models may include, but are not limited to, character models, animal and plant models, scene object models, prop models, architectural models, and the like.
  • The above model rendering method may be applied to, but is not limited to, a shader in a rendering engine. The shader may be designed and implemented with, but is not limited to, the rendering engine's shader authoring tool; once complete, it is converted into the shader code used by the rendering engine for packaging.
  • the above rendering engine may include, but is not limited to, the Unity engine, the Unreal engine (Unreal), and the like.
  • The observation direction is the direction in which the virtual camera observes the initial model, so the process of stroking the edge of the initial model is perspective-based. Adding texture to a stroke computed from the perspective simulates just the right brush effect.
  • Stroking based on the observation direction draws strokes of a thickness appropriate to the complexity of the model, which is similar to the creative approach of many Chinese painting works: intricate areas are drawn with a fine brush.
  • The initial model is a model to be rendered in ink style, for example: a character model (game character, anime character, etc.), a landscape model (mountains, water, trees, stones, buildings, etc.), or an animal model (e.g., rat, ox, tiger, rabbit).
  • the stroke model is a model obtained by performing an ink-and-wash stroke on the initial model, so that the edge of the stroke model presents the style of brush strokes.
  • the transparency in the texture map may be, but not limited to, the value of the Alpha (transparency) channel of the texture map.
  • the ink texture may include, but is not limited to, textures of various ink styles, such as realistic, freehand, and meticulous painting styles. Different ink rendering styles can be achieved by adjusting the rendering parameters in the rendering process.
  • the rendering model is a model obtained by rendering the ink texture to the middle of the initial model, so that the middle of the rendered model presents textures in the style of realism, freehand brushwork, and fine brushwork.
  • step S206 the stroke model and the rendering model are superimposed to obtain an intermediate model, the edge of the intermediate model presents brush strokes, and the middle portion presents an ink-wash style texture.
  • the pixels with transparency less than the preset threshold on the edge of the intermediate model may, but are not limited to, refer to pixels whose edges of the model are not completely filled, that is, pixels whose edges are semi-transparent.
  • the incompletely filled pixels on the edge of the intermediate model are mixed with the color of the center of the rendered model, which can achieve the effect of a soft transition from the edge of the model to the center of the model.
  • The intermediate model obtained after superposition may exhibit inner-edge jaggedness; mixing the edge color with the middle color eliminates the inner-edge jaggedness on the intermediate model, so that the resulting target model presents an ink-wash style with soft color transitions and a higher-quality picture.
  • the target model is a high-quality ink-and-wash model that is ultimately presented in the display.
  • the edge of the initial model is stroked according to the viewing direction and the normal map of the initial model, and the obtained stroked model includes:
  • The principle of computing the stroke from the viewing angle is briefly as follows.
  • The dot product dot(viewDir, normal), raised to a power k, can represent the distance from a pixel to the edge, i.e., the pixel's edge degree.
  • The edge degree of a pixel on the edge can be determined using the formula N·I, where the vector N is the normal (the normal direction) and the vector I is the incident direction (the observation direction), i.e., viewDir.
  • the texture data of the pixels on the edge may be, but not limited to, acquired from a normal map.
  • The edge degree of the pixel on the edge is stored in the horizontal-axis coordinate channel (the u coordinate) of the edge texture map, and the texture data of the pixel on the edge is stored in the vertical-axis coordinate channel (the v coordinate) of the edge texture map.
  • the target edge texture map is obtained, and the target edge texture map can be used to render the edge of the model, thereby obtaining a model with a stroke, that is, a stroke model.
  • the distance from the pixel to the edge is stored in the horizontal axis coordinate channel, where 1 is closest to the contour line, and 0 is far away from the contour line.
  • The texture data of the map is stored in the vertical-axis coordinate channel; the map texture is distorted based on changes in the normal and the viewing angle to form a textured stroke.
  • determining the edge degree of the pixel on the edge of the initial model according to the observation direction and the normal map includes:
  • the observation direction may be obtained from the direction of the current virtual camera.
  • the normal direction of the pixels on the edge can be obtained from the normal map.
  • the formula N ⁇ I is used to calculate the projection of the observation direction of the pixel on the edge of the initial model on the normal direction, where N represents the vector normal (normal direction), and the vector I represents Incident. (incidence direction, that is, observation direction).
  • the geometric meaning of the result of the dot product of N and I is the projection of the direction of the pixel in the normal direction, which is used to indicate the degree of the pixel's proximity to the contour.
  • N comes from the normal map of the model, and I is the viewing direction of the camera (observer).
  • The edge degree of the pixels on the edge can be calculated by, but is not limited to, the formula Bias + Scale·(1 + N·I)^Power, where Scale (the first control parameter) controls the complexity of the line details, Power (the second control parameter) controls the thickness of the stroke, and Bias defaults to 0.
  • FIG. 3 is a schematic diagram of the stroke effect under different parameters according to an embodiment of the present disclosure. As shown in FIG. 3, the larger the Scale, the more complex the model's stroke effect; the smaller the Power, the thicker the stroke.
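  • The edge-degree formula can be sketched as a small Python function. The vector convention assumed here (I pointing from the camera toward the surface, so N·I is near -1 for front-facing pixels and near 0 at the silhouette) and the default parameter values are illustrative assumptions:

```python
def edge_degree(normal, incident, bias=0.0, scale=1.0, power=2.0):
    # Bias + Scale * (1 + N.I)**Power: rises from ~0 in the interior of
    # the model toward Scale at the contour, matching "1 is closest to
    # the contour line, 0 is far away".
    n_dot_i = sum(n * i for n, i in zip(normal, incident))
    return bias + scale * (1.0 + n_dot_i) ** power
```

A smaller Power lifts the mid-range values of the term, widening the region counted as "edge" and hence thickening the stroke, as described for FIG. 3.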
  • determining the texture data of the pixel on the edge of the initial model according to the coordinate value on the normal map includes:
  • The edge degree of the pixel on the edge is used as the u coordinate of the edge texture map, and the halved horizontal and vertical coordinate values are combined into a scalar parameter used as the v coordinate of the edge texture map, so that the texture sticks to the edge of the model as observed.
  • the processing of the edge texture map is not limited to the above manner, and different manners can form different distortion effects, and this is just an example of the calculation of the stroke distortion.
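  • A minimal sketch of constructing those sampling coordinates follows. The text says the horizontal and vertical coordinate values are halved and collapsed to a scalar; summing the two halves is one plausible reading of that step, adopted here purely for illustration, and the function name is hypothetical:

```python
def edge_uv(edge_deg, normal_map_u, normal_map_v):
    # u: the edge degree (1 near the contour line, 0 far from it).
    # v: halve the normal-map coordinates and combine them into a scalar
    #    (assumed here as the sum of the halves), so the sampled texture
    #    distorts as the normal and viewing angle change.
    return edge_deg, 0.5 * normal_map_u + 0.5 * normal_map_v
```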
  • the ink texture is rendered to the middle of the initial model according to the background color of the initial model and the transparency in the texture map, and obtaining the rendered model includes:
  • S43 Render the texture data of the pixels in the middle of the initial model to the middle of the initial model to obtain the rendered model.
  • Linear mixing refers to mixing two color values according to the Alpha (transparency) value, that is, lerp(color1, color2, alpha): when the Alpha value is 0, the background color (color1) is displayed; when the Alpha value is 1, the texture (color2) is displayed. The Alpha value comes from the Alpha channel of the texture map.
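  • The lerp described above can be written directly; the RGB tuples below are arbitrary example values, not colors from the disclosure:

```python
def lerp(color1, color2, alpha):
    # alpha = 0 -> background (color1); alpha = 1 -> ink texture (color2).
    return tuple(c1 + (c2 - c1) * alpha for c1, c2 in zip(color1, color2))
```

This is the same per-channel calculation a shader's built-in lerp/mix performs, applied here to the background color and the ink texture sample.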
  • FIG. 4 is a schematic diagram of transparent processing in the middle of a model according to an embodiment of the present disclosure.
  • When the middle of the model is not transparently processed by the above method, as shown in the box, the model clearly exhibits color penetration and interspersing. After the middle of the model is transparently processed by the above method, these problems are visibly resolved.
  • Linear blending of color data includes:
  • the linear mixing process may be, but not limited to, performing RGB calculation on two colors according to the ratio of the transparency (alpha value) corresponding to the pixel in the middle of the stroke model.
  • mixing pixels whose transparency is less than a preset threshold on the edge of the intermediate model and the middle color of the intermediate model to obtain the target model includes:
  • A target pixel on the edge whose transparency is less than the preset threshold may be determined to be an incompletely filled pixel.
  • For example, with a threshold of 1, any pixel whose transparency is less than 1 is not fully filled.
  • Using the transparency of the target pixel to linearly mix the color data of the target pixel with the color data of the middle of the intermediate model may be, but is not limited to, performing the RGB calculation on the two colors in the ratio given by the target pixel's transparency (Alpha value).
  • FIG. 5 is a schematic diagram of a linear blending of the edge and the middle according to an embodiment of the present disclosure. As shown in FIG. 5 , when the edge and the middle are not linearly blended, the model will obviously have the problem of jagged edges. After linear blending, the jagged edges are effectively eliminated, so that the edges and the middle of the model can be smoothly transitioned.
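  • The edge/middle blend of step S208 can be sketched as follows; the function name and the choice of a single RGB triple plus scalar alpha per pixel are illustrative assumptions:

```python
def blend_edge_pixel(edge_rgb, edge_alpha, middle_rgb, threshold=1.0):
    # An edge pixel whose alpha is below the threshold is incompletely
    # filled; lerp it toward the middle color by its own alpha so the
    # inner edge transitions smoothly instead of showing jaggies.
    if edge_alpha >= threshold:
        return edge_rgb
    return tuple(m + (e - m) * edge_alpha
                 for e, m in zip(edge_rgb, middle_rgb))
```

Fully covered stroke pixels keep their color; semi-transparent ones fade into the middle color, which is what removes the inner-edge aliasing shown in FIG. 5.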
  • FIG. 6 is a schematic diagram of an optional model rendering apparatus according to an embodiment of the present disclosure. As shown in FIG. 6 , the apparatus may include:
  • the stroke module 62 is configured to stroke the edge of the initial model according to the observation direction and the normal map of the initial model to obtain a stroked model, wherein the observation direction is the direction in which the virtual camera observes the initial model;
  • the rendering module 64 is configured to render the ink texture to the middle of the initial model according to the background color of the initial model and the transparency in the texture map to obtain a rendering model;
  • the superimposing module 66 is configured to superimpose the stroke model and the rendering model to obtain an intermediate model
  • the mixing module 68 is configured to mix the pixels whose transparency on the edge of the intermediate model is less than the preset threshold value and the color of the middle of the intermediate model to obtain the target model.
  • the stroke module 62 in this embodiment may be configured to perform step S202 in this embodiment of the present disclosure
  • the rendering module 64 in this embodiment may be configured to perform step S204 in this embodiment of the present disclosure
  • the superposition module 66 in this embodiment may be configured to perform step S206 in the embodiment of the present disclosure
  • the mixing module 68 in this embodiment may be configured to perform step S208 in the embodiment of the present disclosure.
  • the edge of the model and the middle of the model are rendered separately.
  • the edge of the model is stroked according to the viewing direction and the normal map of the model, so that the texture can better fit the edge of the model.
  • the background color and the transparency parameters in the texture map are combined for rendering to avoid the color penetration and interspersed problems caused by the transparent part of the ink texture.
  • the stroke module includes:
  • a first determining unit, configured to determine an edge degree of a pixel on the edge of the initial model according to the observation direction and the normal map, wherein the edge degree indicates the distance from a pixel on the edge of the initial model to the edge contour of the initial model;
  • a second determining unit configured to determine texture data of pixels on the edge of the initial model according to the coordinate values on the normal map
  • a storage unit configured to store the edge degree of the pixel on the edge of the initial model into the horizontal axis coordinate channel of the edge texture map, and store the texture data of the pixel on the edge of the initial model into the edge texture In the vertical axis coordinate channel of the map, the target edge texture map is obtained;
  • a first rendering unit configured to use the target edge texture map to render the edge of the initial model.
  • the first determining unit is set to:
  • a first parameter is used to control the complexity of the line details in the projection of the viewing direction of the pixels on the edge of the initial model onto the normal direction;
  • a second parameter is used to control the thickness of the stroke.
  • the second determining unit is set to:
  • the corresponding horizontal and vertical coordinate values of the pixels on the edge of the initial model on the normal map are halved and then added to obtain the coordinate vector of the pixels on the edge of the initial model;
  • the coordinate vector of the pixel on the edge of the initial model is converted into a scalar parameter to obtain the texture data of the pixel on the edge of the initial model.
  • the rendering module includes:
  • a first obtaining unit configured to obtain the transparency corresponding to the pixel in the middle of the initial model from the texture map
  • a first mixing unit set to use the transparency corresponding to the pixels in the middle of the initial model to the background color corresponding to the pixels in the middle of the initial model and the color data of the pixels in the middle of the initial model on the ink texture Perform linear mixing to obtain the texture data of the pixels in the middle of the initial model;
  • the second rendering unit is configured to render the texture data of the pixels in the middle of the initial model to the middle of the initial model to obtain the rendered model.
  • the first mixing unit is set to:
  • the RGB color values of the background color corresponding to the pixel in the middle of the initial model and of that pixel's color data on the ink texture are calculated in the ratio given by the transparency corresponding to the pixel.
  • the mixing module includes:
  • a second acquiring unit configured to acquire target pixels whose transparency is less than a preset threshold from the edge of the intermediate model
  • a second mixing unit configured to linearly mix the color data of the target pixel and the color data in the middle of the intermediate model using the transparency of the target pixel as the target color data of the target pixel to obtain the target model .
  • the above modules may run in the hardware environment as shown in FIG. 1 , and may be implemented by software or hardware, wherein the hardware environment includes a network environment.
  • an electronic device for implementing the above model rendering method is also provided.
  • FIG. 7 is a structural block diagram of an electronic device according to an embodiment of the present disclosure.
  • the electronic device may include: one or more processors 701 (only one is shown in the figure), a memory 703, and a transmission device 705; as shown in FIG. 7, the electronic device may further include an input/output device 707.
  • the memory 703 may be configured to store software programs and modules, such as program instructions/modules corresponding to the model rendering method and device in the embodiments of the present disclosure, and the processor 701 runs the software programs and modules stored in the memory 703 to thereby Execute various functional applications and data processing, that is, implement the above-mentioned model rendering method.
  • Memory 703 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 703 may further include memory located remotely from the processor 701, and these remote memories may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the above-mentioned transmission device 705 is configured to receive or send data via a network, and may also be configured to transmit data between the processor and the memory. Specific examples of the above-mentioned networks may include wired networks and wireless networks.
  • the transmission device 705 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers through a network cable so as to communicate with the Internet or a local area network.
  • the transmission device 705 is a radio frequency (Radio Frequency, RF) module, which is configured to communicate with the Internet in a wireless manner.
  • the memory 703 is configured to store application programs.
  • the processor 701 can call the application program stored in the memory 703 through the transmission device 705 to perform the following steps:
  • a target model is obtained by mixing the pixels whose transparency on the edge of the rendering model is less than a preset threshold value and the middle color of the rendering model.
  • a solution for model rendering is provided.
  • the edge of the model and the middle of the model are rendered separately.
  • the edge of the model is stroked according to the viewing direction and the normal map of the model, which can make the texture better fit the edge of the model.
  • For the interior of the model, the background color is combined with the transparency parameter in the texture map for rendering, avoiding the color penetration and interspersing caused by the transparent part of the ink texture.
  • The pixels that are not completely filled at the edge of the model are mixed with the colors in the middle of the model to obtain the ink style.
  • FIG. 7 is only a schematic diagram; the electronic device may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a handheld computer, a mobile Internet device (MID), a PAD, or other electronic equipment.
  • FIG. 7 does not limit the structure of the above electronic device.
  • the electronic device may also include more or fewer components than those shown in FIG. 7 (eg, network interface, display device, etc.), or have a different configuration than that shown in FIG. 7 .
  • Embodiments of the present disclosure also provide a storage medium.
  • the above-mentioned storage medium may be set as program code for executing the rendering method of the model.
  • the foregoing storage medium may be located on at least one network device among multiple network devices in the network shown in the foregoing embodiment.
  • the storage medium is configured to store program codes for executing the following steps:
  • a target model is obtained by mixing the pixels whose transparency on the edge of the rendering model is less than a preset threshold value and the middle color of the rendering model.
  • The above storage medium may include, but is not limited to: a USB flash drive, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium that can store program code.
  • the integrated units in the above-mentioned embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above-mentioned computer-readable storage medium.
  • The technical solutions of the present disclosure, in essence or in the part contributing to the prior art, may be embodied in whole or in part in the form of a software product, and the computer software product is stored in a storage medium.
  • Several instructions are included to cause one or more computer devices (which may be personal computers, servers, or network devices, etc.) to perform all or part of the steps of the methods described in various embodiments of the present disclosure.
  • the disclosed client may be implemented in other manners.
  • The apparatus embodiments described above are only illustrative. For example, the division into units is only a logical functional division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Image Generation (AREA)

Abstract

A model rendering method and device. Said method comprises: tracing an edge of an initial model according to a viewing direction and a normal map of the initial model to obtain a traced model, wherein the viewing direction is a direction in which a virtual camera views the initial model; rendering an ink texture to the middle part of the initial model according to a background color of the initial model and transparency in the texture map to obtain a rendered model; superposing the traced model and the rendered model to obtain an intermediate model; and mixing pixels, the transparency of which is less than a preset threshold, on the edge of the intermediate model with the color of the middle part of the intermediate model to obtain a target model. Said method solves the technical problem of poor quality of an image obtained by rendering a model in an ink style.

Description

Model rendering method and device

The present disclosure claims priority to the Chinese patent application filed with the China Patent Office on August 26, 2020 with priority number 202010871727.2 and entitled "A MODEL RENDERING METHOD AND APPARATUS", the entire contents of which are incorporated herein by reference.
Technical Field

The present disclosure relates to the field of computers, and in particular, to a model rendering method and device.

Background Art

The ink-wash style is a traditional Chinese painting technique. In the field of computer graphics, some works also perform stylized rendering with reference to ink-wash effects, giving the picture a distinctive and far-reaching artistic mood. When rendering images in the ink-wash style, the method currently used divides the model's outline stroke and its middle region into two independent parts and adds them together directly, making the middle region transparent so that it does not occlude any model at greater depth. Rendering ink-wash-style images in this way suffers from obvious line interpenetration during the movement of complex models, as well as obviously jagged inner edges, resulting in poor quality of the rendered images.

No effective solution has yet been proposed for the above problems.
Summary of the Invention

The present disclosure provides a model rendering method and device, so as to at least solve the technical problem in the related art that the image obtained after rendering a model in the ink-wash style is of poor quality.

In one aspect, an embodiment of the present disclosure provides a model rendering method, including:

stroking the edge of an initial model according to a viewing direction and a normal map of the initial model to obtain a stroked model, where the viewing direction is the direction in which a virtual camera observes the initial model;

rendering an ink texture onto the middle of the initial model according to a background color of the initial model and the transparency in a texture map to obtain a rendered model;

superimposing the stroked model and the rendered model to obtain an intermediate model; and

mixing the pixels on the edge of the intermediate model whose transparency is less than a preset threshold with the middle color of the intermediate model to obtain a target model.
In another aspect, an embodiment of the present disclosure further provides a model rendering apparatus, including:

a stroking module, configured to stroke the edge of an initial model according to a viewing direction and a normal map of the initial model to obtain a stroked model, where the viewing direction is the direction in which a virtual camera observes the initial model;

a rendering module, configured to render an ink texture onto the middle of the initial model according to a background color of the initial model and the transparency in a texture map to obtain a rendered model;

a superimposing module, configured to superimpose the stroked model and the rendered model to obtain an intermediate model; and

a mixing module, configured to mix the pixels on the edge of the intermediate model whose transparency is less than a preset threshold with the middle color of the intermediate model to obtain a target model.
In another aspect, an embodiment of the present disclosure further provides a storage medium, where the storage medium includes a stored program, and the above method is executed when the program runs.

In another aspect, an embodiment of the present disclosure further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the above method through the computer program.
The beneficial effects of the present disclosure include at least the following. The edge of the initial model is stroked according to the viewing direction and the normal map of the initial model to obtain a stroked model, where the viewing direction is the direction in which a virtual camera observes the initial model; an ink texture is rendered onto the middle of the initial model according to the background color of the initial model and the transparency in the texture map to obtain a rendered model; the stroked model and the rendered model are superimposed to obtain an intermediate model; and the pixels on the edge of the intermediate model whose transparency is less than a preset threshold are mixed with the middle color of the intermediate model to obtain a target model. In this way, the edge and the middle of the model are rendered separately. For the edge of the model, stroking according to the viewing direction and the normal map of the model makes the texture fit the edge better. For the interior of the model, rendering combines the background color with the transparency parameter in the texture map, avoiding the color show-through and interpenetration problems caused by the transparent parts of the ink texture. For the incompletely filled pixels on the edge of the model, mixing them with the middle color of the model yields an ink-wash-style target model. This eliminates the jagged inner edges that would otherwise appear on the resulting ink-wash-style target model, thereby achieving the technical effect of improving the quality of the image obtained after rendering the model in the ink-wash style, and solving the technical problem that the image obtained after rendering a model in the ink-wash style is of poor quality.
Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the present invention.

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly introduces the accompanying drawings required in the description of the embodiments or the prior art. Obviously, a person of ordinary skill in the art can also obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a hardware environment of a model rendering method according to an embodiment of the present disclosure;

FIG. 2 is a flowchart of an optional model rendering method according to an embodiment of the present disclosure;

FIG. 3 is a schematic diagram of stroke effects under different parameters according to an embodiment of the present disclosure;

FIG. 4 is a schematic diagram of transparency processing of the middle of a model according to an embodiment of the present disclosure;

FIG. 5 is a schematic diagram of linear blending of the edge and the middle according to an embodiment of the present disclosure;

FIG. 6 is a schematic diagram of an optional model rendering apparatus according to an embodiment of the present disclosure;

FIG. 7 is a structural block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description

To enable those skilled in the art to better understand the solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present disclosure.

It should be noted that the terms "first", "second", and the like in the specification, claims, and accompanying drawings of the present disclosure are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
According to an aspect of the embodiments of the present disclosure, an embodiment of a model rendering method is provided.

Optionally, in this embodiment, the above model rendering method may be applied to the hardware environment formed by the terminal 101 and the server 103 shown in FIG. 1. As shown in FIG. 1, the server 103 is connected to the terminal 101 through a network and can be used to provide services (such as game services, application services, etc.) for the terminal or for a client installed on the terminal. A database may be set up on the server, or independently of the server, to provide data storage services for the server 103. The above network includes, but is not limited to, a wide area network, a metropolitan area network, or a local area network, and the terminal 101 is not limited to a PC, a mobile phone, a tablet computer, or the like. The model rendering method of the embodiments of the present disclosure may be executed by the server 103, by the terminal 101, or jointly by the server 103 and the terminal 101. When executed by the terminal 101, the model rendering method of the embodiments of the present disclosure may also be executed by a client installed on the terminal.

FIG. 2 is a flowchart of an optional model rendering method according to an embodiment of the present disclosure. As shown in FIG. 2, the method may include the following steps:
Step S202: stroke the edge of an initial model according to a viewing direction and a normal map of the initial model to obtain a stroked model, where the viewing direction is the direction in which a virtual camera observes the initial model;

Step S204: render an ink texture onto the middle of the initial model according to a background color of the initial model and the transparency in a texture map to obtain a rendered model;

Step S206: superimpose the stroked model and the rendered model to obtain an intermediate model;

Step S208: mix the pixels on the edge of the intermediate model whose transparency is less than a preset threshold with the middle color of the intermediate model to obtain a target model.

Through the above steps S202 to S208, the edge and the middle of the model are rendered separately. For the edge of the model, stroking according to the viewing direction and the normal map of the model makes the texture fit the edge better. For the interior of the model, rendering combines the background color with the transparency parameter in the texture map, avoiding the color show-through and interpenetration problems caused by the transparent parts of the ink texture. For the incompletely filled pixels on the edge of the model, mixing them with the middle color of the model yields an ink-wash-style target model. This eliminates the jagged inner edges that would otherwise appear on the resulting target model, thereby achieving the technical effect of improving the quality of the image obtained after rendering the model in the ink-wash style, and solving the technical problem that such images are of poor quality.
Optionally, in this embodiment, the above model rendering method may be applied, but is not limited, to scenes in which a virtual model in an application is rendered in the ink-wash style. The above application may include, but is not limited to, any type of application, such as game applications, educational applications, multimedia applications, instant messaging applications, shopping applications, community applications, lifestyle tool applications, browser applications, and the like. The above virtual model may include, but is not limited to, character models, animal and plant models, scene object models, prop models, architectural models, and the like.

Optionally, in this embodiment, the above model rendering method may be applied, but is not limited, to a shader in a rendering engine. The shader may be, but is not limited to being, designed and implemented with the rendering engine's shader authoring tool and, once completed, converted into the shader code used in the rendering engine for encapsulation. The above rendering engine may include, but is not limited to, the Unity engine, the Unreal Engine, and the like.
In the technical solution provided in step S202, the viewing direction is the direction in which the virtual camera observes the initial model. It can be seen that the process of stroking the edge of the initial model is implemented based on the viewing angle; adding texture to a stroke computed from the viewing angle can simulate just the right brush-pen effect.

Optionally, in this embodiment, the viewing-angle-based stroking process can draw strokes of appropriate thickness according to the complexity of the model. This is similar to the creative approach of many traditional Chinese paintings: simple large-scale scenes are outlined with thick lines, while details are depicted with a fine brush.

Optionally, in this embodiment, the initial model is a model to be rendered in the ink-wash style, for example, a character model (a game character model, an anime character model, etc.), a landscape model (mountains, water, trees, stones, buildings, architecture, etc.), or an animal model (for example, zodiac figures such as the rat, ox, tiger, and rabbit).

Optionally, in this embodiment, the stroked model is the model obtained after applying an ink-wash-style stroke to the initial model, so that the edge of the stroked model presents the style of brush-pen strokes.
In the technical solution provided in step S204, the transparency in the texture map may be, but is not limited to, the value of the texture map's alpha (transparency) channel.

Optionally, in this embodiment, the ink texture may include, but is not limited to, textures of various ink-wash styles, such as the realistic, freehand, and meticulous (gongbi) painting styles. Different ink-wash rendering styles can be achieved by adjusting the rendering parameters during rendering.

Optionally, in this embodiment, the rendered model is the model obtained after rendering the ink texture onto the middle of the initial model, so that the middle of the rendered model presents the texture of a realistic, freehand, or meticulous painting style.

In the technical solution provided in step S206, the stroked model and the rendered model are superimposed to obtain the intermediate model. The edge of the intermediate model presents brush strokes, and its middle presents an ink-wash-style texture.
In the technical solution provided in step S208, the pixels on the edge of the intermediate model whose transparency is less than the preset threshold may, but are not limited to, refer to pixels on the edge of the model that are not completely filled, that is, semi-transparent pixels on the edge of the model.

Optionally, in this embodiment, mixing the incompletely filled pixels on the edge of the intermediate model with the middle color of the rendered model can achieve a soft transition from the edge of the model to its middle.

Optionally, in this embodiment, the intermediate model obtained after superposition may exhibit jagged inner edges. Blending the edge color with the middle color can eliminate the jagged inner edges of the intermediate model, so that the resulting target model both presents the ink-wash style and shows soft color transitions, producing a higher-quality picture. The target model is the high-quality ink-wash-style model finally presented on the display.
As an optional embodiment, stroking the edge of the initial model according to the viewing direction and the normal map of the initial model to obtain the stroked model includes:

S11: determining the edge degree of the pixels on the edge of the initial model according to the viewing direction and the normal map, where the edge degree indicates the distance from a pixel on the edge of the initial model to the edge contour line of the initial model;

S12: determining the texture data of the pixels on the edge of the initial model according to the coordinate values on the normal map;

S13: storing the edge degree of the pixels on the edge of the initial model into the horizontal-axis coordinate channel of an edge texture map, and storing the texture data of the pixels on the edge of the initial model into the vertical-axis coordinate channel of the edge texture map, to obtain a target edge texture map;

S14: rendering the edge of the initial model using the target edge texture map.
Optionally, in this embodiment, the principle of computing the stroke based on the viewing angle is briefly described as follows. When the viewing direction is tangent to the model, what is observed is the edge of the model, which can be understood as the model's contour line (which has no area). The dot product dot(viewDir, normal)^k can be used to express the distance from a pixel to the edge, which serves as the pixel's edge degree. The edge degree of a pixel on the edge can be determined using the expression N·I, where the vector N is the normal (that is, the normal direction) and the vector I is the incident direction (that is, the viewing direction), i.e., viewDir.

Optionally, in this embodiment, the texture data of the pixels on the edge may be, but is not limited to being, obtained from the normal map.

Optionally, in this embodiment, the edge degree of the pixels on the edge is stored in the horizontal-axis coordinate channel (the u coordinate) of the edge texture map, and the texture data of the pixels on the edge is stored in the vertical-axis coordinate channel (the v coordinate) of the edge texture map, yielding the target edge texture map. The target edge texture map can then be used to render the edge of the model, thereby obtaining a model with a stroke, that is, the stroked model.

Optionally, in this embodiment, the horizontal-axis coordinate channel stores the distance from a pixel to the edge, where 1 is closest to the contour line and 0 is far from it. The vertical-axis coordinate channel stores the texture data of the map, so that the map texture is distorted according to changes in the normal and the viewing angle to form a textured stroke.
As an optional embodiment, determining the edge degree of the pixels on the edge of the initial model according to the viewing direction and the normal map includes:

S21: obtaining the viewing direction, and obtaining the normal directions of the pixels on the edge of the initial model from the normal map;

S22: using the viewing direction and the normal directions of the pixels on the edge of the initial model to compute, for each pixel on the edge of the initial model, the projection of the viewing direction onto the normal direction;

S23: using a first parameter to control the complexity of the projection of the viewing direction onto the normal direction for the pixels on the edge of the initial model, and using a second parameter to control the thickness of that projection, to obtain the edge degree of the pixels on the edge of the initial model.

Optionally, in this embodiment, the viewing direction can be obtained from the direction of the current virtual camera, and the normal directions of the pixels on the edge can be obtained from the normal map.

Optionally, in this embodiment, the expression N·I is used to compute the projection of the viewing direction onto the normal direction for a pixel on the edge of the initial model, where the vector N is the normal (normal direction) and the vector I is the incident direction (that is, the viewing direction). The geometric meaning of the dot product of N and I is the projection of the direction in which this pixel is observed onto the normal direction, which expresses how close the pixel is to the contour. N comes from the model's normal map, and I is the viewing direction of the camera (observer).

Optionally, in this embodiment, the edge degree of a pixel on the edge may be, but is not limited to being, computed by the formula Bias + Scale · (1 + N·I)^Power, where Scale (the first parameter) controls the complexity of the line details, Power (the second parameter) controls the thickness of the stroke, and Bias is 0 by default. For example, FIG. 3 is a schematic diagram of stroke effects under different parameters according to an embodiment of the present disclosure. As shown in FIG. 3, the larger the Scale, the more complex the model's stroke effect; the smaller the Power, the thicker the stroke.
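As a concrete illustration of the formula above, a minimal Python sketch follows. It is an illustrative reconstruction, not the disclosed shader code; the unit-vector inputs, the parameter defaults, and the clamp to [0, 1] (so the result can serve directly as a texture coordinate) are assumptions.

```python
def edge_degree(incident, normal, bias=0.0, scale=1.0, power=2.0):
    """Viewing-angle edge degree of a pixel: Bias + Scale * (1 + N.I)^Power.

    incident: unit viewing direction I (from the camera toward the surface).
    normal:   unit normal N sampled from the normal map.
    Head-on, N.I is -1 and the result is 0 (far from the contour); at a
    grazing angle N.I approaches 0 and the result approaches Scale
    (closest to the contour). The value is clamped to [0, 1].
    """
    n_dot_i = sum(n * i for n, i in zip(normal, incident))
    return max(0.0, min(1.0, bias + scale * (1.0 + n_dot_i) ** power))
```

Consistent with the description of FIG. 3, raising Scale lifts the whole curve (more pixels register as edge detail), while lowering Power widens the high-value region near the silhouette, thickening the stroke.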
As an optional embodiment, determining the texture data of the pixels on the edge of the initial model according to the coordinate values on the normal map includes:

S31: halving the horizontal and vertical coordinate values corresponding to the pixels on the edge of the initial model on the normal map and adding them, to obtain coordinate vectors of the pixels on the edge of the initial model;

S32: converting the coordinate vectors of the pixels on the edge of the initial model into scalar parameters, to obtain the texture data of the pixels on the edge of the initial model.

Optionally, in this embodiment, the edge degree of a pixel on the edge is used as the u coordinate of the edge texture map, and the halved horizontal and vertical coordinate values are added and converted into a scalar parameter that is used as the v coordinate of the edge texture map, so that the texture is applied to the observed edge of the model.
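The uv construction described above can be sketched as follows. This is an illustrative reading of steps S31 and S32 (halve the x and y normal-map coordinates and add them to form the scalar v), not the disclosed shader code.

```python
def edge_uv(edge_deg, normal_map_xy):
    """Build the (u, v) lookup into the edge texture map.

    u: the pixel's edge degree (1 = closest to the contour line, 0 = far).
    v: scalar built from the pixel's normal-map coordinates as x/2 + y/2
       (steps S31-S32), so the stroke texture warps as the normal and the
       viewing angle change.
    """
    x, y = normal_map_xy
    return (edge_deg, 0.5 * x + 0.5 * y)
```

Other reductions of the coordinate vector to a scalar would, as the text notes, simply produce different distortion effects.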
Optionally, in this embodiment, the processing of the edge texture map is not limited to the above manner; different manners can produce different distortion effects, and the above is merely one example of the stroke distortion computation.
As an optional embodiment, rendering the ink texture onto the middle of the initial model according to the background color of the initial model and the transparency in the texture map to obtain the rendered model includes:

S41: obtaining, from the texture map, the transparency corresponding to the pixels in the middle of the initial model;

S42: using the transparency corresponding to the pixels in the middle of the initial model to linearly blend the background color corresponding to those pixels with the color data of those pixels on the ink texture, to obtain the texture data of the pixels in the middle of the initial model;

S43: rendering the texture data of the pixels in the middle of the initial model onto the middle of the initial model, to obtain the rendered model.

Optionally, in this embodiment, linear blending refers to mixing two pieces of color data according to an alpha (transparency) value, that is, lerp(color1, color2, alpha): when the alpha value is 0, the background color (color1) is displayed; when the alpha value is 1, the texture (color2) is displayed. The alpha value comes from the alpha channel of the texture map.
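The lerp above is ordinary linear interpolation. A minimal per-channel sketch (illustrative, not the engine's built-in intrinsic) is:

```python
def lerp_color(color1, color2, alpha):
    """lerp(color1, color2, alpha): alpha = 0 shows the background color
    (color1), alpha = 1 shows the ink texture (color2); values in between
    mix the two RGB colors in proportion to alpha."""
    return tuple((1.0 - alpha) * c1 + alpha * c2
                 for c1, c2 in zip(color1, color2))
```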
Optionally, in this embodiment, the transparency processing of the middle of the model is optimized: the background color is grabbed and linearly blended with the ink texture, which solves the color show-through and interpenetration problems caused by the transparent parts of the ink texture. For example, FIG. 4 is a schematic diagram of transparency processing of the middle of a model according to an embodiment of the present disclosure. As shown in FIG. 4, when the middle of the model is not made transparent in the above manner, the model clearly exhibits color show-through and interpenetration, as indicated by the box. After the middle of the model is made transparent in the above manner, the show-through and interpenetration problems are clearly resolved.
作为一种可选的实施例，使用所述初始模型的中部的像素对应的透明度对所述初始模型的中部的像素对应的背景色和所述初始模型的中部的像素在所述水墨纹理上的颜色数据进行线性混合包括：As an optional embodiment, linearly mixing, using the transparency corresponding to the pixels in the middle of the initial model, the background color corresponding to the pixels in the middle of the initial model and the color data of the pixels in the middle of the initial model on the ink texture includes:
S51,按照所述初始模型的中部的像素对应的透明度的比例将所述初始模型的中部的像素对应的背景色和所述初始模型的中部的像素在所述水墨纹理上的颜色数据进行RGB(Red-Green-Blue,红绿蓝)颜色值的计算。S51: Computing RGB (Red-Green-Blue) color values from the background color corresponding to the pixels in the middle of the initial model and the color data of those pixels on the ink texture, in the proportion given by the transparency corresponding to the pixels in the middle of the initial model.
可选地,在本实施例中,线性混合的过程可以但不限于是将两种颜色按照描边模型的中部的像素对应的透明度(Alpha值)的比例进行RGB的计算。Optionally, in this embodiment, the linear mixing process may be, but not limited to, performing RGB calculation on two colors according to the ratio of the transparency (alpha value) corresponding to the pixel in the middle of the stroke model.
可选地,在本实施例中,将两种颜色混合的方式有很多种算法,但由于水墨描边颜色一般为黑色,采用了上述线性混合的方式进行描述。Optionally, in this embodiment, there are many algorithms for mixing the two colors, but since the ink stroke color is generally black, the above-mentioned linear mixing method is used for description.
作为一种可选的实施例,将所述中间模型的边缘上透明度小于预设阈值的像素与所述中间模型的中部颜色进行混合,得到目标模型包括:As an optional embodiment, mixing pixels whose transparency is less than a preset threshold on the edge of the intermediate model and the middle color of the intermediate model to obtain the target model includes:
S61,从所述中间模型的边缘上获取透明度小于预设阈值的目标像素;S61, obtaining target pixels whose transparency is less than a preset threshold from the edge of the intermediate model;
S62,使用所述目标像素的透明度对所述目标像素的颜色数据和所述中间模型的中部的颜色数据进行线性混合作为所述目标像素的目标颜色数据,得到所述目标模型。S62, using the transparency of the target pixel to linearly mix the color data of the target pixel and the color data of the middle part of the intermediate model as the target color data of the target pixel to obtain the target model.
可选地，在本实施例中，边缘上透明度小于预设阈值的目标像素可以确定为没有完全填充的像素，比如，将描边计算的结果的Alpha值作为线性混合的Alpha值，Alpha值小于1时则为没有完全填充的像素。Optionally, in this embodiment, a target pixel on the edge whose transparency is less than the preset threshold may be determined to be a pixel that is not completely filled; for example, the Alpha value resulting from the stroke computation is used as the Alpha value for linear blending, and an Alpha value less than 1 indicates a pixel that is not completely filled.
可选地，在本实施例中，使用目标像素的透明度对目标像素的颜色数据和中间模型的中部的颜色数据进行线性混合的方式可以但不限于为将两种颜色按照目标像素的透明度(Alpha值)的比例进行RGB的计算。Optionally, in this embodiment, using the transparency of the target pixel to linearly mix the color data of the target pixel and the color data of the middle of the intermediate model may be, but is not limited to, computing RGB values from the two colors in the proportion given by the transparency (Alpha value) of the target pixel.
可选地，在本实施例中，将边缘没有完全填充(半透明)的像素与内部颜色做二次线性混合，可以解决边缘锯齿状的问题。例如：图5是根据本公开实施例的一种边缘和中部线性混合的示意图，如图5所示，边缘和中部未进行线性混合时，模型会明显出现边缘锯齿状的问题，将边缘和中部进行线性混合后，有效消除了边缘的锯齿，使得模型的边缘与中部能够柔和过渡。Optionally, in this embodiment, a second linear blend between edge pixels that are not completely filled (translucent) and the interior color solves the jagged-edge problem. For example, FIG. 5 is a schematic diagram of linear blending of the edge and the middle according to an embodiment of the present disclosure. As shown in FIG. 5, when the edge and the middle are not linearly blended, the model clearly exhibits jagged edges; after the edge and the middle are linearly blended, the jagging is effectively eliminated, so that the edge of the model transitions softly into the middle.
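The second blend between semi-transparent edge pixels and the interior color can be sketched as follows. This is a hedged illustration: the assumption that a pixel's own Alpha weights its color against the interior color (low Alpha pulling toward the interior) follows the lerp convention above, but the passage does not fix the exact weighting:

```python
def blend_edge_with_interior(edge_pixels, interior_rgb, threshold=1.0):
    # edge_pixels: list of (r, g, b, a) tuples from the stroke computation.
    # Pixels whose alpha is below the threshold (i.e. not completely filled)
    # are linearly blended with the interior color using their own alpha.
    blended = []
    for r, g, b, a in edge_pixels:
        if a < threshold:
            r, g, b = (ic * (1.0 - a) + ec * a
                       for ic, ec in zip(interior_rgb, (r, g, b)))
        blended.append((r, g, b, a))
    return blended
```

With this weighting, a fully transparent edge pixel takes the interior color exactly, which is what produces the soft edge-to-middle transition described above.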
需要说明的是，对于前述的各方法实施例，为了简单描述，故将其都表述为一系列的动作组合，但是本领域技术人员应该知悉，本公开并不受所描述的动作顺序的限制，因为依据本公开，某些步骤可以采用其他顺序或者同时进行。其次，本领域技术人员也应该知悉，说明书中所描述的实施例均属于优选实施例，所涉及的动作和模块并不一定是本公开所必须的。It should be noted that, for ease of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should know that the present disclosure is not limited by the described order of actions, because according to the present disclosure some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present disclosure.
通过以上的实施方式的描述，本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现，当然也可以通过硬件，但很多情况下前者是更佳的实施方式。基于这样的理解，本公开的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中，包括若干指令用以使得一台电子设备(可以是手机，计算机，服务器，或者网络设备等)执行本公开各个实施例所述的方法。From the description of the above implementations, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present disclosure, in essence or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing an electronic device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present disclosure.
根据本公开实施例的另一个方面,还提供了一种用于实施上述模型的渲染方法的模型的渲染装置。图6是根据本公开实施例的一种可选的模型的渲染装置的示意图,如图6所示,该装置可以包括:According to another aspect of the embodiments of the present disclosure, there is also provided a model rendering apparatus for implementing the above model rendering method. FIG. 6 is a schematic diagram of an optional model rendering apparatus according to an embodiment of the present disclosure. As shown in FIG. 6 , the apparatus may include:
描边模块62,设置为根据观察方向和初始模型的法线贴图对初始模型的边缘进行描边,得到描边模型,其中,所述观察方向为虚拟摄像机观察所述初始模型的方向;The stroke module 62 is configured to stroke the edge of the initial model according to the observation direction and the normal map of the initial model to obtain a stroked model, wherein the observation direction is the direction in which the virtual camera observes the initial model;
渲染模块64,设置为根据所述初始模型的背景色和纹理贴图中的透明度将水墨纹理渲染到所述初始模型的中部,得到渲染模型;The rendering module 64 is configured to render the ink texture to the middle of the initial model according to the background color of the initial model and the transparency in the texture map to obtain a rendering model;
叠加模块66,设置为将所述描边模型与所述渲染模型叠加,得到中间模型;The superimposing module 66 is configured to superimpose the stroke model and the rendering model to obtain an intermediate model;
混合模块68,设置为将所述中间模型的边缘上透明度小于预设阈值的像素与所述中间模型的中部颜色进行混合,得到目标模型。The mixing module 68 is configured to mix the pixels whose transparency on the edge of the intermediate model is less than the preset threshold value and the color of the middle of the intermediate model to obtain the target model.
需要说明的是,该实施例中的描边模块62可以设置为执行本公开实施例中的步骤S202,该实施例中的渲染模块64可以设置为执行本公开实施例中的步骤S204,该实施例中的叠加模块66可以设置为执行本公开实施例中的步骤S206,该实施例中的混合模块68可以设置为执行本公开实施例中的步骤S208。It should be noted that the stroke module 62 in this embodiment may be configured to perform step S202 in this embodiment of the present disclosure, and the rendering module 64 in this embodiment may be configured to perform step S204 in this embodiment of the present disclosure. The superposition module 66 in this embodiment may be configured to perform step S206 in the embodiment of the present disclosure, and the mixing module 68 in this embodiment may be configured to perform step S208 in the embodiment of the present disclosure.
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用 场景相同,但不限于上述实施例所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现。It should be noted here that the examples and application scenarios implemented by the foregoing modules and corresponding steps are the same, but are not limited to the contents disclosed in the foregoing embodiments. It should be noted that, as a part of the device, the above modules may run in the hardware environment as shown in FIG. 1 , and may be implemented by software or hardware.
通过上述模块，采用模型的边缘和模型的中部分别渲染的形式，对于模型的边缘，根据观察方向和模型的法线贴图进行描边，能够使得贴图更好的贴合模型的边缘，对于模型的内部，结合了背景色和纹理贴图中的透明度参数进行渲染，避免出现水墨纹理透明部分导致的颜色透过和穿插问题，对于模型的边缘没有完全填充的像素，结合模型的中部的颜色进行混合，从而得到水墨风格的目标模型，达到了消除在所得到的水墨风格的目标模型上出现内边缘锯齿的目的，从而实现了提高对模型进行水墨风格的渲染后得到的图像质量的技术效果，进而解决了对模型进行水墨风格的渲染后得到的图像质量较差的技术问题。Through the above modules, the edge of the model and the middle of the model are rendered separately. For the edge of the model, stroking according to the viewing direction and the normal map of the model allows the texture to fit the edge of the model better. For the interior of the model, rendering combines the background color with the transparency parameter in the texture map, avoiding the color show-through and interpenetration problems caused by the transparent parts of the ink texture. Edge pixels that are not completely filled are blended with the color of the middle of the model, yielding an ink-wash style target model. This eliminates inner-edge jagging on the resulting ink-wash style target model, thereby achieving the technical effect of improving the quality of images obtained by rendering the model in ink-wash style, and solving the technical problem of poor image quality when rendering a model in ink-wash style.
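The data flow through the four modules above can be sketched as follows. This is purely illustrative: each stage is a toy stub over Python dicts, whereas the real stages are GPU shader passes, so every function body here is an assumption made only to show the ordering of the pipeline:

```python
def stroke_edges(model, view_dir, normal_map):       # stroke module 62
    # Stroke the edge using the viewing direction and the normal map.
    return {**model, "edge": (view_dir, normal_map)}

def render_middle(model, background, ink_texture):   # rendering module 64
    # Render the ink texture to the middle using the background color and
    # the alpha channel of the texture map.
    return {**model, "middle": (background, ink_texture)}

def overlay(stroked, rendered):                      # superimposing module 66
    # Superimpose the stroked model and the rendered model.
    return {**stroked, **rendered}

def blend_edges(model, threshold=1.0):               # mixing module 68
    # Blend semi-transparent edge pixels with the middle color.
    return {**model, "edge_blended": True}

def render_ink_model(model, view_dir, normal_map, ink_texture, background):
    stroked = stroke_edges(model, view_dir, normal_map)
    rendered = render_middle(model, background, ink_texture)
    intermediate = overlay(stroked, rendered)
    return blend_edges(intermediate)
```

The point of the sketch is only the ordering: the edge stroke and the middle render are independent passes over the initial model, and the anti-aliasing blend runs last, on the superimposed intermediate model.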
作为一种可选的实施例,所述描边模块包括:As an optional embodiment, the stroke module includes:
第一确定单元，设置为根据所述观察方向和所述法线贴图确定所述初始模型的边缘上的像素的边缘度，其中，所述边缘度用于指示所述初始模型的边缘上的像素到所述初始模型的边缘轮廓线的距离；a first determining unit, configured to determine an edge degree of pixels on the edge of the initial model according to the viewing direction and the normal map, wherein the edge degree indicates the distance from a pixel on the edge of the initial model to an edge contour line of the initial model;
第二确定单元,设置为根据所述法线贴图上的坐标值确定所述初始模型的边缘上的像素的纹理数据;a second determining unit, configured to determine texture data of pixels on the edge of the initial model according to the coordinate values on the normal map;
存储单元，设置为将所述初始模型的边缘上的像素的边缘度存储到边缘纹理贴图的水平轴坐标通道中，并将所述初始模型的边缘上的像素的纹理数据存储到所述边缘纹理贴图的垂直轴坐标通道中，得到目标边缘纹理贴图；a storage unit, configured to store the edge degree of the pixels on the edge of the initial model into a horizontal-axis coordinate channel of an edge texture map, and to store the texture data of the pixels on the edge of the initial model into a vertical-axis coordinate channel of the edge texture map, to obtain a target edge texture map;
第一渲染单元,设置为使用所述目标边缘纹理贴图对所述初始模型的边缘进行渲染。A first rendering unit, configured to use the target edge texture map to render the edge of the initial model.
作为一种可选的实施例,所述第一确定单元设置为:As an optional embodiment, the first determining unit is set to:
获取所述观察方向,并从所述法线贴图中获取所述初始模型的边缘上的像素的法线方向;obtaining the viewing direction, and obtaining the normal direction of the pixel on the edge of the initial model from the normal map;
使用所述观察方向和所述初始模型的边缘上的像素的法线方向计算所述初始模型的边缘上的像素的观察方向在法线方向上的投影;using the viewing direction and the normal direction of the pixel on the edge of the initial model to calculate the projection of the viewing direction of the pixel on the edge of the initial model on the normal direction;
使用第一参数控制所述初始模型的边缘上的像素的观察方向在法线方向上的投影的复杂度，并使用第二参数控制所述初始模型的边缘上的像素的观察方向在法线方向上的投影的厚度，得到所述初始模型的边缘上的像素的边缘度。A first parameter is used to control the complexity of the projection, onto the normal direction, of the viewing direction of the pixels on the edge of the initial model, and a second parameter is used to control the thickness of that projection, to obtain the edge degree of the pixels on the edge of the initial model.
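A hedged sketch of the edge-degree computation follows. The projection of the viewing direction onto the normal is the dot product of the two vectors; the passage does not give the exact formula for the two parameters, so treating the first as a falloff exponent and the second as a thickness scale is an assumption:

```python
def edge_degree(view_dir, normal, p_shape=2.0, p_thickness=1.0):
    # Projection of the (unit) viewing direction onto the (unit) normal
    # direction of an edge pixel: their dot product.
    proj = sum(v * n for v, n in zip(view_dir, normal))
    # Assumed shaping: p_shape (the "first parameter") sharpens the falloff
    # of the projection, p_thickness (the "second parameter") scales the
    # stroke width; the result is clamped to [0, 1].
    return min(1.0, p_thickness * (1.0 - abs(proj)) ** p_shape)
```

Under this sketch, silhouette pixels (normal nearly perpendicular to the view direction) get an edge degree near 1, while pixels facing the camera get a value near 0, which is the Fresnel-style behavior the edge detection relies on.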
作为一种可选的实施例,所述第二确定单元设置为:As an optional embodiment, the second determining unit is set to:
将所述初始模型的边缘上的像素在所述法线贴图上对应的横纵坐标值减半后进行相加，得到所述初始模型的边缘上的像素的坐标向量；halving and then adding the horizontal and vertical coordinate values corresponding, on the normal map, to the pixels on the edge of the initial model, to obtain the coordinate vector of the pixels on the edge of the initial model;
将所述初始模型的边缘上的像素的坐标向量转换为标量参数,得到所述初始模型的边缘上的像素的纹理数据。The coordinate vector of the pixel on the edge of the initial model is converted into a scalar parameter to obtain the texture data of the pixel on the edge of the initial model.
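The halve-and-add conversion from a pixel's normal-map coordinates to the scalar texture parameter can be sketched as below; taking the sum itself as the vector-to-scalar conversion is an assumption consistent with the halving step described here:

```python
def edge_texture_param(u, v):
    # Halve the pixel's horizontal and vertical normal-map coordinates and
    # add them, yielding a scalar in [0, 1] (for u, v in [0, 1]) that is
    # stored as the pixel's texture data for the edge texture map.
    return 0.5 * u + 0.5 * v
```

The resulting scalar can then be written into the vertical-axis coordinate channel of the edge texture map, alongside the edge degree in the horizontal-axis channel.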
作为一种可选的实施例,所述渲染模块包括:As an optional embodiment, the rendering module includes:
第一获取单元,设置为从所述纹理贴图中获取所述初始模型的中部的像素对应的透明度;a first obtaining unit, configured to obtain the transparency corresponding to the pixel in the middle of the initial model from the texture map;
第一混合单元，设置为使用所述初始模型的中部的像素对应的透明度对所述初始模型的中部的像素对应的背景色和所述初始模型的中部的像素在所述水墨纹理上的颜色数据进行线性混合，得到所述初始模型的中部的像素的纹理数据；a first mixing unit, configured to linearly mix, using the transparency corresponding to the pixels in the middle of the initial model, the background color corresponding to the pixels in the middle of the initial model and the color data of the pixels in the middle of the initial model on the ink texture, to obtain texture data of the pixels in the middle of the initial model;
第二渲染单元,设置为将所述初始模型的中部的像素的纹理数据渲染到所述初始模型的中部,得到所述渲染模型。The second rendering unit is configured to render the texture data of the pixels in the middle of the initial model to the middle of the initial model to obtain the rendered model.
作为一种可选的实施例,所述第一混合单元设置为:As an optional embodiment, the first mixing unit is set to:
按照所述初始模型的中部的像素对应的透明度的比例将所述初始模型的中部的像素对应的背景色和所述初始模型的中部的像素在所述水墨纹理上的颜色数据进行RGB颜色值的计算。computing RGB color values from the background color corresponding to the pixels in the middle of the initial model and the color data of those pixels on the ink texture, in the proportion given by the transparency corresponding to the pixels in the middle of the initial model.
作为一种可选的实施例,所述混合模块包括:As an optional embodiment, the mixing module includes:
第二获取单元,设置为从所述中间模型的边缘上获取透明度小于预设阈值的目标像素;a second acquiring unit, configured to acquire target pixels whose transparency is less than a preset threshold from the edge of the intermediate model;
第二混合单元,设置为使用所述目标像素的透明度对所述目标像素的颜色数据和所述中间模型的中部的颜色数据进行线性混合作为所述目标像素的目标颜色数据,得到所述目标模型。a second mixing unit, configured to linearly mix the color data of the target pixel and the color data in the middle of the intermediate model using the transparency of the target pixel as the target color data of the target pixel to obtain the target model .
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现,其中,硬件环境包括网络环境。It should be noted here that the examples and application scenarios implemented by the foregoing modules and corresponding steps are the same, but are not limited to the contents disclosed in the foregoing embodiments. It should be noted that, as a part of the device, the above modules may run in the hardware environment as shown in FIG. 1 , and may be implemented by software or hardware, wherein the hardware environment includes a network environment.
根据本公开实施例的另一个方面,还提供了一种用于实施上述模型的渲染方法的电子装置。According to another aspect of the embodiments of the present disclosure, an electronic device for implementing the above model rendering method is also provided.
图7是根据本公开实施例的一种电子装置的结构框图，如图7所示，该电子装置可以包括：一个或多个(图中仅示出一个)处理器701、存储器703、以及传输装置705，如图7所示，该电子装置还可以包括输入输出设备707。FIG. 7 is a structural block diagram of an electronic device according to an embodiment of the present disclosure. As shown in FIG. 7, the electronic device may include one or more processors 701 (only one is shown in the figure), a memory 703, and a transmission device 705; as shown in FIG. 7, the electronic device may further include an input/output device 707.
其中，存储器703可设置为存储软件程序以及模块，如本公开实施例中的模型的渲染方法和装置对应的程序指令/模块，处理器701通过运行存储在存储器703内的软件程序以及模块，从而执行各种功能应用以及数据处理，即实现上述的模型的渲染方法。存储器703可包括高速随机存储器，还可以包括非易失性存储器，如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中，存储器703可进一步包括相对于处理器701远程设置的存储器，这些远程存储器可以通过网络连接至电子装置。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。The memory 703 may be configured to store software programs and modules, such as program instructions/modules corresponding to the model rendering method and apparatus in the embodiments of the present disclosure. By running the software programs and modules stored in the memory 703, the processor 701 executes various functional applications and data processing, that is, implements the above model rendering method. The memory 703 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 703 may further include memories disposed remotely relative to the processor 701, and these remote memories may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
上述的传输装置705设置为经由一个网络接收或者发送数据,还可以设置为处理器与存储器之间的数据传输。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置705包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置705为射频(Radio Frequency,RF)模块,其设置为通过无线方式与互联网进行通讯。The above-mentioned transmission device 705 is configured to receive or send data via a network, and may also be configured to transmit data between the processor and the memory. Specific examples of the above-mentioned networks may include wired networks and wireless networks. In one example, the transmission device 705 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers through a network cable so as to communicate with the Internet or a local area network. In one example, the transmission device 705 is a radio frequency (Radio Frequency, RF) module, which is configured to communicate with the Internet in a wireless manner.
其中,具体地,存储器703设置为存储应用程序。Specifically, the memory 703 is configured to store application programs.
处理器701可以通过传输装置705调用存储器703存储的应用程序,以执行下述步骤:The processor 701 can call the application program stored in the memory 703 through the transmission device 705 to perform the following steps:
根据观察方向和模型的法线贴图对初始模型的边缘进行描边,得到描边模型,其中,所述观察方向为虚拟摄像机观察所述初始模型的方向;Stroke the edge of the initial model according to the observation direction and the normal map of the model to obtain a stroked model, wherein the observation direction is the direction in which the virtual camera observes the initial model;
根据所述初始模型的背景色和纹理贴图中的透明度将水墨纹理渲染到所述描边模型的中部,得到渲染模型;Render the ink texture to the middle of the stroke model according to the background color of the initial model and the transparency in the texture map to obtain a rendering model;
将所述描边模型与所述渲染模型叠加,得到中间模型;superimposing the stroke model and the rendering model to obtain an intermediate model;
将所述渲染模型的边缘上透明度小于预设阈值的像素与所述渲染模型的中部颜色进行混合,得到目标模型。A target model is obtained by mixing the pixels whose transparency on the edge of the rendering model is less than a preset threshold value and the middle color of the rendering model.
采用本公开实施例，提供了一种模型的渲染的方案。采用模型的边缘和模型的中部分别渲染的形式，对于模型的边缘，根据观察方向和模型的法线贴图进行描边，能够使得贴图更好的贴合模型的边缘，对于模型的内部，结合了背景色和纹理贴图中的透明度参数进行渲染，避免出现水墨纹理透明部分导致的颜色透过和穿插问题，对于模型的边缘没有完全填充的像素，结合模型的中部的颜色进行混合，从而得到水墨风格的目标模型，达到了消除在所得到的水墨风格的目标模型上出现内边缘锯齿的目的，从而实现了提高对模型进行水墨风格的渲染后得到的图像质量的技术效果，进而解决了对模型进行水墨风格的渲染后得到的图像质量较差的技术问题。With the embodiments of the present disclosure, a model rendering solution is provided. The edge of the model and the middle of the model are rendered separately. For the edge of the model, stroking according to the viewing direction and the normal map of the model allows the texture to fit the edge of the model better. For the interior of the model, rendering combines the background color with the transparency parameter in the texture map, avoiding the color show-through and interpenetration problems caused by the transparent parts of the ink texture. Edge pixels that are not completely filled are blended with the color of the middle of the model, yielding an ink-wash style target model. This eliminates inner-edge jagging on the resulting ink-wash style target model, thereby achieving the technical effect of improving the quality of images obtained by rendering a model in ink-wash style, and solving the technical problem of poor image quality when rendering a model in ink-wash style.
可选地,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例在此不再赘述。Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments, and details are not described herein again in this embodiment.
本领域普通技术人员可以理解，图7所示的结构仅为示意，电子装置可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等电子设备。图7并不对上述电子装置的结构造成限定。例如，电子装置还可包括比图7中所示更多或者更少的组件(如网络接口、显示装置等)，或者具有与图7所示不同的配置。Those of ordinary skill in the art can understand that the structure shown in FIG. 7 is only illustrative. The electronic device may be a smartphone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or another electronic device. FIG. 7 does not limit the structure of the above electronic device. For example, the electronic device may further include more or fewer components than those shown in FIG. 7 (such as a network interface or a display device), or have a configuration different from that shown in FIG. 7.
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令电子设备相关的硬件来完成，该程序可以存储于一计算机可读存储介质中，存储介质可以包括：闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。Those of ordinary skill in the art can understand that all or some of the steps in the methods of the above embodiments can be completed by a program instructing hardware related to an electronic device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
本公开的实施例还提供了一种存储介质。可选地，在本实施例中，上述存储介质可以设置为存储用于执行模型的渲染方法的程序代码。Embodiments of the present disclosure further provide a storage medium. Optionally, in this embodiment, the above storage medium may be configured to store program code for executing the model rendering method.
可选地,在本实施例中,上述存储介质可以位于上述实施例所示的网络中的多个网络设备中的至少一个网络设备上。Optionally, in this embodiment, the foregoing storage medium may be located on at least one network device among multiple network devices in the network shown in the foregoing embodiment.
可选地，在本实施例中，存储介质被设置为存储用于执行以下步骤的程序代码：Optionally, in this embodiment, the storage medium is configured to store program code for executing the following steps:
根据观察方向和模型的法线贴图对初始模型的边缘进行描边,得到描边模型,其中,所述观察方向为虚拟摄像机观察所述初始模型的方向;Stroke the edge of the initial model according to the observation direction and the normal map of the model to obtain a stroked model, wherein the observation direction is the direction in which the virtual camera observes the initial model;
根据所述初始模型的背景色和纹理贴图中的透明度将水墨纹理渲染到所述描边模型的中部,得到渲染模型;Render the ink texture to the middle of the stroke model according to the background color of the initial model and the transparency in the texture map to obtain a rendering model;
将所述描边模型与所述渲染模型叠加,得到中间模型;superimposing the stroke model and the rendering model to obtain an intermediate model;
将所述渲染模型的边缘上透明度小于预设阈值的像素与所述渲染模型的中部颜色进行混合,得到目标模型。A target model is obtained by mixing the pixels whose transparency on the edge of the rendering model is less than a preset threshold value and the middle color of the rendering model.
可选地,本实施例中的具体示例可以参考上述实施例中所描述的示例,本实施例在此不再赘述。Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments, and details are not described herein again in this embodiment.
可选地，在本实施例中，上述存储介质可以包括但不限于：U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。Optionally, in this embodiment, the above storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
上述本公开实施例序号仅仅为了描述,不代表实施例的优劣。The above-mentioned serial numbers of the embodiments of the present disclosure are only for description, and do not represent the advantages or disadvantages of the embodiments.
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在上述计算机可读取的存储介质中。基于这样的理解，本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来，该计算机软件产品存储在存储介质中，包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure, in essence or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or some of the steps of the methods described in the embodiments of the present disclosure.
在本公开的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。In the above-mentioned embodiments of the present disclosure, the description of each embodiment has its own emphasis. For parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
在本公开所提供的几个实施例中，应该理解到，所揭露的客户端，可通过其它的方式实现。其中，以上所描述的装置实施例仅仅是示意性的，例如所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，单元或模块的间接耦合或通信连接，可以是电性或其它的形式。In the several embodiments provided in the present disclosure, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are only illustrative; for example, the division into units is only a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or in other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。In addition, each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
以上所述仅是本公开的优选实施方式，应当指出，对于本技术领域的普通技术人员来说，在不脱离本公开原理的前提下，还可以做出若干改进和润饰，这些改进和润饰也应视为本公开的保护范围。The above are only preferred embodiments of the present disclosure. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present disclosure, and these improvements and refinements should also be regarded as falling within the protection scope of the present disclosure.

Claims (10)

  1. 一种模型的渲染方法,包括:A rendering method for a model, including:
    根据观察方向和初始模型的法线贴图对初始模型的边缘进行描边,得到描边模型,其中,所述观察方向为虚拟摄像机观察所述初始模型的方向;Stroke the edge of the initial model according to the observation direction and the normal map of the initial model to obtain a stroked model, wherein the observation direction is the direction in which the virtual camera observes the initial model;
    根据所述初始模型的背景色和纹理贴图中的透明度将水墨纹理渲染到所述初始模型的中部,得到渲染模型;Render the ink texture to the middle of the initial model according to the background color of the initial model and the transparency in the texture map to obtain a rendered model;
    将所述描边模型与所述渲染模型叠加,得到中间模型;superimposing the stroke model and the rendering model to obtain an intermediate model;
    将所述中间模型的边缘上透明度小于预设阈值的像素与所述中间模型的中部颜色进行混合,得到目标模型。A target model is obtained by mixing pixels whose transparency on the edge of the intermediate model is less than a preset threshold value and the color of the middle of the intermediate model.
  2. 根据权利要求1所述的方法，其中，根据观察方向和初始模型的法线贴图对初始模型的边缘进行描边，得到描边模型包括：The method according to claim 1, wherein stroking the edge of the initial model according to the viewing direction and the normal map of the initial model to obtain the stroked model comprises:
    根据所述观察方向和所述法线贴图确定所述初始模型的边缘上的像素的边缘度，其中，所述边缘度用于指示所述初始模型的边缘上的像素到所述初始模型的边缘轮廓线的距离；determining an edge degree of pixels on the edge of the initial model according to the viewing direction and the normal map, wherein the edge degree indicates the distance from a pixel on the edge of the initial model to an edge contour line of the initial model;
    根据所述法线贴图上的坐标值确定所述初始模型的边缘上的像素的纹理数据;determining the texture data of the pixels on the edge of the initial model according to the coordinate values on the normal map;
    将所述初始模型的边缘上的像素的边缘度存储到边缘纹理贴图的水平轴坐标通道中，并将所述初始模型的边缘上的像素的纹理数据存储到所述边缘纹理贴图的垂直轴坐标通道中，得到目标边缘纹理贴图；storing the edge degree of the pixels on the edge of the initial model into a horizontal-axis coordinate channel of an edge texture map, and storing the texture data of the pixels on the edge of the initial model into a vertical-axis coordinate channel of the edge texture map, to obtain a target edge texture map;
    使用所述目标边缘纹理贴图对所述初始模型的边缘进行渲染。The edges of the initial model are rendered using the target edge texture map.
  3. 根据权利要求2所述的方法,其中,根据所述观察方向和所述法线贴图确定所述初始模型的边缘上的像素的边缘度包括:3. The method of claim 2, wherein determining edge degrees of pixels on edges of the initial model based on the viewing direction and the normal map comprises:
    获取所述观察方向，并从所述法线贴图中获取所述初始模型的边缘上的像素的法线方向；obtaining the viewing direction, and obtaining, from the normal map, the normal direction of the pixels on the edge of the initial model;
    使用所述观察方向和所述初始模型的边缘上的像素的法线方向计算所述初始模型的边缘上的像素的观察方向在法线方向上的投影;using the viewing direction and the normal direction of the pixel on the edge of the initial model to calculate the projection of the viewing direction of the pixel on the edge of the initial model on the normal direction;
    使用第一参数控制所述初始模型的边缘上的像素的观察方向在法线方向上的投影的复杂度，并使用第二参数控制所述初始模型的边缘上的像素的观察方向在法线方向上的投影的厚度，得到所述初始模型的边缘上的像素的边缘度。using a first parameter to control the complexity of the projection, onto the normal direction, of the viewing direction of the pixels on the edge of the initial model, and using a second parameter to control the thickness of that projection, to obtain the edge degree of the pixels on the edge of the initial model.
  4. 根据权利要求2所述的方法,其中,根据所述法线贴图上的坐标值确定所述初始模型的边缘上的像素的纹理数据包括:The method according to claim 2, wherein determining the texture data of the pixels on the edge of the initial model according to the coordinate values on the normal map comprises:
    将所述初始模型的边缘上的像素在所述法线贴图上对应的横纵坐标值减半后进行相加，得到所述初始模型的边缘上的像素的坐标向量；halving and then adding the horizontal and vertical coordinate values corresponding, on the normal map, to the pixels on the edge of the initial model, to obtain the coordinate vector of the pixels on the edge of the initial model;
    将所述初始模型的边缘上的像素的坐标向量转换为标量参数,得到所述初始模型的边缘上的像素的纹理数据。The coordinate vector of the pixel on the edge of the initial model is converted into a scalar parameter to obtain the texture data of the pixel on the edge of the initial model.
  5. The method according to claim 1, wherein rendering the ink texture onto the middle of the initial model according to the background color of the initial model and the transparency in the texture map, to obtain the rendered model, comprises:
    obtaining, from the texture map, the transparency corresponding to each pixel in the middle of the initial model;
    linearly blending, using the transparency corresponding to each pixel in the middle of the initial model, the background color corresponding to that pixel with the color data of that pixel on the ink texture, to obtain the texture data of the pixels in the middle of the initial model;
    rendering the texture data of the pixels in the middle of the initial model onto the middle of the initial model, to obtain the rendered model.
  6. The method according to claim 5, wherein linearly blending, using the transparency corresponding to each pixel in the middle of the initial model, the background color corresponding to that pixel with the color data of that pixel on the ink texture comprises:
    computing the RGB color values from the background color corresponding to each pixel in the middle of the initial model and the color data of that pixel on the ink texture, in proportion to the transparency corresponding to that pixel.
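The proportional RGB computation of claims 5 and 6 is a standard per-channel alpha lerp. A sketch under the assumption that colors are (r, g, b) tuples in [0, 1] and `alpha` is the transparency sampled from the texture map (names are illustrative):

```python
def blend_ink(background_rgb, ink_rgb, alpha):
    """Linearly mix the background color with the ink-texture color,
    channel by channel, weighted by the transparency `alpha`."""
    return tuple(alpha * ink + (1.0 - alpha) * bg
                 for bg, ink in zip(background_rgb, ink_rgb))
```

At `alpha = 0` the background shows through untouched; at `alpha = 1` the pixel takes the ink-texture color outright.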
  7. The method according to claim 1, wherein blending the pixels on the edges of the intermediate model whose transparency is less than the preset threshold with the middle color of the intermediate model, to obtain the target model, comprises:
    obtaining, from the edges of the intermediate model, target pixels whose transparency is less than the preset threshold;
    linearly blending, using the transparency of each target pixel, the color data of the target pixel with the color data of the middle of the intermediate model to serve as the target color data of the target pixel, to obtain the target model.
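Claim 7 reads as an anti-aliasing pass: edge pixels whose alpha falls below the threshold are pulled toward the model's middle color so the silhouette avoids hard jaggies. A hypothetical sketch (the pixel representation and all names are assumptions, not from the disclosure):

```python
def smooth_edge_pixels(edge_pixels, middle_rgb, threshold=0.5):
    """For edge pixels whose alpha is below `threshold`, lerp their
    color toward the model's middle color, weighted by that alpha."""
    out = []
    for rgb, alpha in edge_pixels:
        if alpha < threshold:
            # Low-alpha fringe pixel: mostly middle color, a little of
            # the pixel's own color, in proportion to its alpha.
            rgb = tuple(alpha * c + (1.0 - alpha) * m
                        for c, m in zip(rgb, middle_rgb))
        out.append((rgb, alpha))
    return out
```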
  8. A model rendering apparatus, comprising:
    a stroke module, configured to stroke the edges of an initial model according to a viewing direction and a normal map of the initial model, to obtain a stroked model, wherein the viewing direction is the direction in which a virtual camera observes the initial model;
    a rendering module, configured to render an ink texture onto the middle of the initial model according to a background color of the initial model and a transparency in a texture map, to obtain a rendered model;
    an overlay module, configured to overlay the stroked model and the rendered model, to obtain an intermediate model;
    a blending module, configured to blend pixels on the edges of the intermediate model whose transparency is less than a preset threshold with the middle color of the intermediate model, to obtain a target model.
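The four modules of claim 8 chain into one per-pixel pipeline: stroke, ink, overlay, blend. A single-pixel walkthrough under the simplifying assumption that every stage reduces to a linear blend (all names and the pixel model are illustrative, not from the disclosure):

```python
def render_pipeline(edge_alpha, edge_rgb, middle_rgb_bg, ink_rgb,
                    ink_alpha, threshold=0.5):
    """One pixel through the four modules of claim 8."""
    # Rendering module: ink the middle via an alpha lerp (claims 5-6).
    middle = tuple(ink_alpha * i + (1.0 - ink_alpha) * b
                   for b, i in zip(middle_rgb_bg, ink_rgb))
    # Overlay module: a stroked edge pixel sits on top of the middle.
    pixel = edge_rgb if edge_alpha > 0.0 else middle
    # Blending module: soften low-alpha edge pixels toward the middle
    # color (claim 7).
    if 0.0 < edge_alpha < threshold:
        pixel = tuple(edge_alpha * e + (1.0 - edge_alpha) * m
                      for e, m in zip(edge_rgb, middle))
    return pixel
```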
  9. A storage medium comprising a stored program, wherein the program, when run, executes the method according to any one of claims 1 to 7.
  10. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the method according to any one of claims 1 to 7 by means of the computer program.
PCT/CN2020/133862 2020-08-26 2020-12-04 Model rendering method and device WO2022041548A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010871727.2A CN112070873B (en) 2020-08-26 2020-08-26 Model rendering method and device
CN202010871727.2 2020-08-26

Publications (1)

Publication Number Publication Date
WO2022041548A1 true WO2022041548A1 (en) 2022-03-03

Family

ID=73660044

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/133862 WO2022041548A1 (en) 2020-08-26 2020-12-04 Model rendering method and device

Country Status (2)

Country Link
CN (1) CN112070873B (en)
WO (1) WO2022041548A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116617658A (en) * 2023-07-20 2023-08-22 腾讯科技(深圳)有限公司 Image rendering method and related device
CN116630486A (en) * 2023-07-19 2023-08-22 山东锋士信息技术有限公司 Semi-automatic animation production method based on Unity3D rendering
CN116721044A (en) * 2023-08-09 2023-09-08 广州市乐淘动漫设计有限公司 Multimedia cartoon making and generating system

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN111435548B (en) * 2019-01-13 2023-10-03 北京魔门塔科技有限公司 Map rendering method and device
CN112581588A (en) * 2020-12-23 2021-03-30 广东三维家信息科技有限公司 Wallboard spray painting method and device and computer storage medium
CN113350792B (en) * 2021-06-16 2024-04-09 网易(杭州)网络有限公司 Contour processing method and device for virtual model, computer equipment and storage medium
CN113440845B (en) * 2021-06-25 2024-01-30 完美世界(重庆)互动科技有限公司 Virtual model rendering method and device, storage medium and electronic device
CN113935894B (en) * 2021-09-09 2022-08-26 完美世界(北京)软件科技发展有限公司 Ink and wash style scene rendering method and equipment and storage medium
CN114470766A (en) * 2022-02-14 2022-05-13 网易(杭州)网络有限公司 Model anti-penetration method and device, electronic equipment and storage medium
CN116091671B (en) * 2022-12-21 2024-02-06 北京纳通医用机器人科技有限公司 Rendering method and device of surface drawing 3D and electronic equipment
CN116758180A (en) * 2023-07-05 2023-09-15 河北汉方建筑装饰有限责任公司 Method and device for determining inking style of building image and computing equipment

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102496177A (en) * 2011-12-05 2012-06-13 中国科学院自动化研究所 Method for producing three-dimensional water-and-ink animation
CN104715454A (en) * 2013-12-14 2015-06-17 中国航空工业集团公司第六三一研究所 Anti-aliasing graph overlapping algorithm
US20160189424A1 (en) * 2012-05-31 2016-06-30 Microsoft Technology Licensing, Llc Virtual Surface Compaction
CN108564646A (en) * 2018-03-28 2018-09-21 腾讯科技(深圳)有限公司 Rendering intent and device, storage medium, the electronic device of object
CN109389558A (en) * 2017-08-03 2019-02-26 广州汽车集团股份有限公司 A kind of method and device for eliminating image border sawtooth
CN111080780A (en) * 2019-12-26 2020-04-28 网易(杭州)网络有限公司 Edge processing method and device of virtual character model

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
CN100463003C (en) * 2006-03-16 2009-02-18 腾讯科技(深圳)有限公司 Method and apparatus for implementing wash painting style
CN102708585B (en) * 2012-05-09 2015-05-20 北京像素软件科技股份有限公司 Method for rendering contour edges of models
US9595135B2 (en) * 2013-03-05 2017-03-14 Autodesk, Inc. Technique for mapping a texture onto a three-dimensional model
CN103400404A (en) * 2013-07-31 2013-11-20 北京华易互动科技有限公司 Method for efficiently rendering bitmap motion trail
CN104268922B (en) * 2014-09-03 2017-06-06 广州博冠信息科技有限公司 A kind of image rendering method and image rendering device
CN108305311A (en) * 2017-01-12 2018-07-20 南京大学 A kind of digitized image wash painting style technology
CN108090945B (en) * 2017-11-03 2019-08-02 腾讯科技(深圳)有限公司 Object rendering method and device, storage medium and electronic device
CN110473281B (en) * 2018-05-09 2023-08-22 网易(杭州)网络有限公司 Method and device for processing edges of three-dimensional model, processor and terminal
CN109685869B (en) * 2018-12-25 2023-04-07 网易(杭州)网络有限公司 Virtual model rendering method and device, storage medium and electronic equipment
CN114984585A (en) * 2020-04-17 2022-09-02 完美世界(重庆)互动科技有限公司 Method for generating real-time expression picture of game role


Cited By (6)

Publication number Priority date Publication date Assignee Title
CN116630486A (en) * 2023-07-19 2023-08-22 山东锋士信息技术有限公司 Semi-automatic animation production method based on Unity3D rendering
CN116630486B (en) * 2023-07-19 2023-11-07 山东锋士信息技术有限公司 Semi-automatic animation production method based on Unity3D rendering
CN116617658A (en) * 2023-07-20 2023-08-22 腾讯科技(深圳)有限公司 Image rendering method and related device
CN116617658B (en) * 2023-07-20 2023-10-20 腾讯科技(深圳)有限公司 Image rendering method and related device
CN116721044A (en) * 2023-08-09 2023-09-08 广州市乐淘动漫设计有限公司 Multimedia cartoon making and generating system
CN116721044B (en) * 2023-08-09 2024-04-02 广州市乐淘动漫设计有限公司 Multimedia cartoon making and generating system

Also Published As

Publication number Publication date
CN112070873A (en) 2020-12-11
CN112070873B (en) 2021-08-20

Similar Documents

Publication Publication Date Title
WO2022041548A1 (en) Model rendering method and device
CN107358643B (en) Image processing method, image processing device, electronic equipment and storage medium
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
JP4173477B2 (en) Real-time rendering method
US11839820B2 (en) Method and apparatus for generating game character model, processor, and terminal
TWI625699B (en) Cloud 3d model constructing system and constructing method thereof
CN107274469A (en) The coordinative render method of Virtual reality
CN111882627A (en) Image processing method, video processing method, device, equipment and storage medium
CN110248242B (en) Image processing and live broadcasting method, device, equipment and storage medium
CN111182350B (en) Image processing method, device, terminal equipment and storage medium
CN104574496B (en) A kind of method and device of the static shade for calculating illumination pattern and dynamic shadow fusion
CN107657648B (en) Real-time efficient dyeing method and system in mobile game
WO2023071586A1 (en) Picture generation method and apparatus, device, and medium
CN112446939A (en) Three-dimensional model dynamic rendering method and device, electronic equipment and storage medium
US20230040777A1 (en) Method and apparatus for displaying virtual landscape picture, storage medium, and electronic device
KR20220017470A (en) Create a face texture map using a single color image and depth information
CN106604087B (en) A kind of rendering implementation method of panorama live streaming
CN107609946A (en) A kind of display control method and computing device
CN112003999A (en) Three-dimensional virtual reality synthesis algorithm based on Unity 3D
CN111097169A (en) Game image processing method, device, equipment and storage medium
CN104581196A (en) Video image processing method and device
CN114998514A (en) Virtual role generation method and equipment
WO2022024780A1 (en) Information processing device, information processing method, video distribution method, and information processing system
US20230396735A1 (en) Providing a 3d representation of a transmitting participant in a virtual meeting
WO2021155688A1 (en) Picture processing method and device, storage medium, and electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20951216

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20951216

Country of ref document: EP

Kind code of ref document: A1