CN108932745B - Image drawing method and device, terminal equipment and computer readable storage medium - Google Patents

Image drawing method and device, terminal equipment and computer readable storage medium

Info

Publication number
CN108932745B
CN108932745B (application number CN201710369091.XA)
Authority
CN
China
Prior art keywords
hair
pixel point
color
area
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710369091.XA
Other languages
Chinese (zh)
Other versions
CN108932745A (en)
Inventor
郭金辉
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710369091.XA priority Critical patent/CN108932745B/en
Publication of CN108932745A publication Critical patent/CN108932745A/en
Application granted granted Critical
Publication of CN108932745B publication Critical patent/CN108932745B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

An image drawing method relates to the field of image processing technology and includes the following steps: a shader for drawing hair is set up in Unity, and the shader integrates a blend mode and a fine mode; through the shader, the blend mode is used to obtain the position and color parameters of each pixel point in the block-shaped regions on the front and back of the hair in the texture map of the hair to be drawn, and the fine mode is used to obtain the position and color parameters of each pixel point in the filament-shaped regions on the front and back of the hair in the texture map; the hair is then drawn through a graphics program interface according to the position and color parameters of each pixel point in the block-shaped regions and of each pixel point in the filament-shaped regions. In addition, the invention also provides an image drawing device, terminal equipment, and a computer-readable storage medium. The invention improves the fineness with which hair can be drawn in Unity, so that the drawn hair is more detailed and layered, without placing a heavy burden on hardware performance.

Description

Image drawing method and device, terminal equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image rendering method and apparatus, a terminal device, and a computer-readable storage medium.
Background
With the development of image processing technology, 3D drawing techniques are increasingly used in computer graphics. Unity3D is a currently popular 3D development tool from Unity Technologies that can create interactive content such as three-dimensional video games, architectural visualizations, and real-time three-dimensional animations. A Shader supporting transparent rendering is preset in Unity, but this preset shader only supports drawing a basic single-layer transparent channel for hair and cannot show hair details; the drawing result is rough and lacks layering, as shown in fig. 1. If hair is instead drawn in other ways, for example with a movie-grade hair system, the hair details can be displayed, but the hardware performance cost is huge, because such a professional hair system generates every individual hair strand and then combines the strands into a hairstyle.
Disclosure of Invention
In view of the above, the present invention provides an image drawing method, an image drawing apparatus, a terminal device, and a computer-readable storage medium, which can improve the fineness with which hair is drawn in Unity, so that the drawn hair is more detailed and layered, without placing too heavy a burden on hardware performance.
The image drawing method provided by the first aspect of the embodiments of the present invention includes: setting a shader for drawing hair in Unity, the shader integrating a blend mode and a fine mode; obtaining, through the shader, the position and color parameters of each pixel point in the block-shaped regions on the front and back of the hair in the texture map of the hair to be drawn by using the blend mode, and obtaining the position and color parameters of each pixel point in the filament-shaped regions on the front and back of the hair in the texture map by using the fine mode; and drawing the hair according to the position and color parameters of each pixel point in the block-shaped regions and the position and color parameters of each pixel point in the filament-shaped regions through a preset graphics program interface.
An image drawing device provided by the second aspect of the embodiments of the present invention includes: an integration module for setting a shader for drawing hair in Unity, the shader integrating a blend mode and a fine mode; a parameter obtaining module, configured to obtain, through the shader, the position and color parameters of each pixel point in the block-shaped regions on the front and back of the hair in the texture map of the hair to be drawn by using the blend mode, and to obtain the position and color parameters of each pixel point in the filament-shaped regions on the front and back of the hair in the texture map by using the fine mode; and a drawing module for drawing the hair according to the position and color parameters of each pixel point in the block-shaped regions and the position and color parameters of each pixel point in the filament-shaped regions through a preset graphics program interface.
The terminal device provided by the third aspect of the embodiments of the present invention includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the image drawing method provided by the first aspect of the embodiments of the present invention.
The fourth aspect of the embodiments of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image drawing method according to the first aspect of the embodiments of the present invention.
According to the image drawing method, the image drawing device, the terminal device, and the computer-readable storage medium described above, two different processing modes are integrated in the Unity Shader and are used, respectively, to obtain the positions and color parameters of the region images with different characteristic attributes on the front and back of the hair to be drawn in the UV map; the block-shaped regions and the filament-shaped regions of the hair are then drawn separately and superimposed according to the obtained positions and color parameters. The drawn hair is therefore more detailed and layered, the fineness of drawing hair with Unity is improved, and no separate hair drawing system is needed to draw the hair on its own, so hardware performance is not burdened excessively.
In order to make the aforementioned and other objects, features and advantages of the invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a diagram illustrating the effect of hair drawn by the conventional Unity drawing method;
fig. 2 shows a schematic structural diagram of a terminal device;
fig. 3 is a schematic flowchart of an image rendering method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating an effect of hair drawn without using the image drawing method according to the embodiment of the present invention;
fig. 5 is a schematic diagram illustrating the effect of hair drawn after being repaired by using the image drawing method according to the embodiment of the present invention;
fig. 6 is a schematic flowchart of an image rendering method according to another embodiment of the present invention;
fig. 7 is a schematic diagram illustrating an effect of hair drawing in the image drawing method according to the embodiment of the present invention;
fig. 8 is a schematic diagram illustrating another effect of hair drawing in the image drawing method according to the embodiment of the present invention;
fig. 9 is a schematic structural diagram of an image drawing apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an image drawing apparatus according to another embodiment of the present invention.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description of the embodiments, structures, features and effects according to the present invention will be made with reference to the accompanying drawings and preferred embodiments.
Referring to fig. 2, a schematic structural diagram of a terminal device 100 is shown. As shown in fig. 2, the terminal device 100 includes a memory 102, a memory controller 104, one or more (only one shown) processors 106, a peripheral interface 108, a radio frequency unit 110, a key unit 112, an audio unit 114, and a display unit 116. These components communicate with each other via one or more communication buses/signal lines 122.
It is to be understood that the structure shown in fig. 2 is only an illustration and does not limit the structure of the terminal device 100. For example, terminal device 100 may also include more or fewer components than shown in FIG. 2, or have a different configuration than shown in FIG. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
The memory 102 may be used for storing a computer program, such as program instructions/modules corresponding to the image drawing method and apparatus in the embodiments of the present invention, and the processor 106, when executing the computer program stored in the memory 102, implements the steps of the image drawing method shown in fig. 3 to 8 described below.
The memory 102, a computer-readable storage medium, may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 102 may further include memory located remotely from processor 106, which may be connected to terminal device 100 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. Access to the memory 102 by the processor 106, and possibly other components, may be under the control of the memory controller 104.
The peripheral interface 108 couples various input/output devices to the processor 106 and to the memory 102. The processor 106 executes the software programs and instructions stored in the memory 102 to perform the various functions of the terminal device 100 and to process data.
In some examples, the peripheral interface 108, the processor 106, and the memory controller 104 may be implemented in a single chip. In other examples, they may each be implemented in a separate chip.
The radio frequency unit 110 is used for receiving and transmitting electromagnetic waves and for converting between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The radio frequency unit 110 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The radio frequency unit 110 may communicate with various networks such as the Internet, an intranet, or a preset type of wireless network, or may communicate with other devices through a preset type of wireless network. The preset types of wireless networks described above may include cellular telephone networks, wireless local area networks, or metropolitan area networks. The wireless networks of the above-mentioned preset types may use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wireless Fidelity (WiFi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), any other suitable short-range communication protocol, and even protocols that have not yet been developed.
The key unit 112 provides an interface for a user to input to the terminal device 100, and the user can cause the terminal device 100 to perform different functions by pressing different keys.
The audio unit 114 provides an audio interface to the user and may include one or more microphones, one or more speakers, and audio circuitry. The audio circuitry receives audio data from the peripheral interface 108, converts the audio data to an electrical signal, and transmits the electrical signal to the speaker. The speaker converts the electrical signal into sound waves that the human ear can hear. The audio circuitry also receives electrical signals from the microphone, converts them to audio data, and transmits the audio data to the peripheral interface 108 for further processing. The audio data may be obtained from the memory 102 or through the radio frequency unit 110. In addition, the audio data may also be stored in the memory 102 or transmitted through the radio frequency unit 110. In some examples, the audio unit 114 may also include a headphone jack for providing an audio interface to headphones or other devices.
The display unit 116 provides an output interface between the terminal device 100 and the user. In particular, the display unit 116 displays video output to the user, the content of which may include text, graphics, video, and any combination thereof. Some of this output corresponds to particular user interface objects. The display unit 116 also provides an input interface between the terminal device 100 and the user for receiving user input, such as taps, swipes, and other gesture operations, so that the user interface objects can respond to that input. The technique for detecting user input may be based on resistive, capacitive, or any other possible touch detection technology.
Referring to fig. 3, fig. 3 is a flowchart of an image drawing method according to an embodiment of the present invention. The method can be applied to the terminal device 100 shown in fig. 2. As shown in fig. 3, the image drawing method provided in this embodiment includes the following steps:
s101, a shader for drawing hair is arranged in the Unity, and the shader integrates a mixed mode and a fine mode;
A three-dimensional surface mesh can be unfolded onto a plane, and the unfolded plane is the texture (UV) map. UV is shorthand for the U and V texture map coordinates, which are analogous to the X, Y, and Z axes of the spatial model: they define the position of each point on the picture and are tied to the 3D model, and they determine the placement of the surface texture, i.e. how a texture image is placed onto the surface of the three-dimensional model. The horizontal direction is U and the vertical direction is V; through this two-dimensional UV coordinate system, any pixel on the image can be located. The mounted UV map of the hair to be drawn contains the position and color parameter information of the hair to be drawn for the points and faces of the two-dimensional plane.
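As a small worked illustration (not part of the original disclosure), a normalized UV coordinate (u, v) in [0, 1] x [0, 1] can be mapped to a pixel of a W x H texture roughly as

x = \lfloor u \cdot (W - 1) \rfloor, \qquad y = \lfloor v \cdot (H - 1) \rfloor

so that (0, 0) addresses one corner of the map and (1, 1) the opposite corner.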
A Shader for drawing hair is set in Unity, and the blend mode (Blend SrcAlpha OneMinusSrcAlpha) and the fine mode (Alpha Test) are integrated in this shader. The blend mode is used to obtain the position and color parameters of the block-shaped regions in the UV map and to blend them. The block-shaped regions include transparent areas (e.g., the gaps between hair strands) and non-transparent block areas (e.g., the roots or middle of the hair). The fine mode is used to obtain the position and color parameters of the filament-shaped regions (e.g., hair tips and the fine details of hair accessories) in the UV map.
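The patent quotes only the render-state names (Blend SrcAlpha OneMinusSrcAlpha, Alpha Test, ZWrite); it does not publish shader source. The following Unity ShaderLab sketch is therefore only one plausible way of integrating the two modes as two passes of a single shader; the shader name, _MainTex and _Cutoff are illustrative assumptions rather than names taken from the patent.

```shaderlab
Shader "Custom/HairTwoModeSketch"
{
    Properties
    {
        _MainTex ("Hair UV Map (RGBA)", 2D) = "white" {}
        _Cutoff  ("Alpha Cutoff", Range(0, 1)) = 0.5
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Cull Off   // draw both the front and the back of the hair mesh

        CGINCLUDE
        #include "UnityCG.cginc"

        sampler2D _MainTex;
        float4 _MainTex_ST;
        fixed _Cutoff;

        struct v2f
        {
            float4 pos : SV_POSITION;
            float2 uv  : TEXCOORD0;
        };

        v2f vert (appdata_base v)
        {
            v2f o;
            o.pos = UnityObjectToClipPos(v.vertex);
            o.uv  = TRANSFORM_TEX(v.texcoord, _MainTex);
            return o;
        }
        ENDCG

        // Pass 1, "blend mode": soft alpha blending for the block-shaped regions.
        Pass
        {
            ZWrite Off                       // depth write closed for the semi-transparent object
            Blend SrcAlpha OneMinusSrcAlpha  // alpha of the current fragment is the blending factor

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment fragBlend

            fixed4 fragBlend (v2f i) : SV_Target
            {
                // position (UV) and color parameters are read from the texture map
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }

        // Pass 2, "fine mode": alpha-test style pass for the filament-shaped regions.
        Pass
        {
            ZWrite On   // depth write opened so the kept hair tips occlude correctly

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment fragCutout

            fixed4 fragCutout (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                clip(col.a - _Cutoff);   // discard fragments below the alpha cutoff
                return col;
            }
            ENDCG
        }
    }
}
```

The pass order in this sketch mirrors the drawing order described below: the blended block-shaped regions form the bottom layer, and the alpha-tested filament-shaped regions are drawn on top of them.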
S102, through the shader, the blend mode is used to obtain the position and color parameters of each pixel point in the block-shaped regions on the front and back of the hair in the texture map of the hair to be drawn, and the fine mode is used to obtain the position and color parameters of each pixel point in the filament-shaped regions on the front and back of the hair in the texture map;
The position and color parameters of each pixel point in the block-shaped regions include the texture (UV) coordinates and the color parameters of each pixel point in those regions, for example a color identifier and a color value. The position and color parameters of each pixel point in the filament-shaped regions likewise include the UV coordinates and the color parameters of each pixel point in those regions, for example a color identifier and a color value. The position parameter determines in which region drawing is performed, and the color parameter determines what effect each region is drawn with.
S103, the hair is drawn according to the position and color parameters of each pixel point in the block-shaped regions and the position and color parameters of each pixel point in the filament-shaped regions through a preset graphics program interface.
The Graphics program interface may be, for example, OpenGL (Open Graphics Library) for rendering two-dimensional or three-dimensional images.
In practical applications, the Shader program is usually executed on a Graphics Processing Unit (GPU), while the OpenGL main program is executed on a Central Processing Unit (CPU). The OpenGL main program writes data such as vertices into video memory, starts and controls the drawing process, and the figure to be drawn is drawn according to the position and color parameters of each pixel point sent by the Shader program. It should be understood that drawing in this embodiment may include drawing each graphic element in the picture, or drawing the display effect of each graphic element and each image element.
The implementation principle of this embodiment is to extract, from the UV map of the hair to be drawn, two layers of parameter information that are used to draw different parts of the hair. Specifically, the UV map is first processed in the blend mode: the transparent areas and the block-shaped non-transparent areas are extracted, and the color values of each point or position in these extracted areas are then blended with a preset blending algorithm to obtain the color values over the whole range of the transparent areas and the block-shaped non-transparent areas. Next, the position and color parameters of the hair-strand parts are extracted in the fine mode. Finally, the OpenGL interface is called to draw the different regions of the hair according to the obtained parameter values and to superimpose them, which yields the final effect of the whole head of hair. Fig. 4 and fig. 5 show the effect of the hair before and after it is repaired with the image drawing method provided by the embodiment of the invention. As shown in fig. 4, hair that is not repaired with this image drawing method has only one layer and only the front of the hair; the back of the hair is not drawn, the drawn hair is rough as a whole, and hair details are not shown. As shown in fig. 5, hair repaired with the image drawing method provided by the embodiment of the invention has strong layering, and the hair details are well rendered on both the front and the back of the hair.
According to the image drawing method provided by the embodiment of the invention, two different processing modes are integrated in the Unity Shader and are used, respectively, to obtain the positions and color parameters of the region images with different characteristic attributes on the front and back of the hair to be drawn in the UV map; the block-shaped regions and the filament-shaped regions of the hair are then drawn separately and superimposed according to the obtained positions and color parameters. The drawn hair is therefore more detailed and layered, the fineness of drawing hair with Unity is improved, and no separate hair drawing system is needed to draw the hair on its own, so hardware performance is not burdened excessively.
Referring to fig. 6, fig. 6 is a flowchart of an image drawing method according to another embodiment of the present invention. The method can be applied to the terminal device 100 shown in fig. 2. As shown in fig. 6, the image drawing method provided in this embodiment includes the following steps:
s201, a shader for drawing hair is arranged in the Unity, and the shader integrates a mixed mode and a fine mode;
a Shader (Shader) for drawing hair is set in Unity, in which Blend mode (Blend SrcAlpha OneMinusSrcAlpha) and fine mode (Alpha Test) are integrated. And the mixing mode is used for acquiring the position and the color parameters of the blocky area in the UV map and performing mixing treatment. Wherein the block-shaped area includes: transparent areas (e.g., the area between the hair strands) and non-transparent bulk areas (e.g., the roots or middle of the hair). Fine mode for obtaining the location and color parameters of the silk-like area (e.g., hair tips, and details of the hair accessory) in the UV map.
S202, the UV map of the hair to be drawn is mounted through the shader, the UV map is converted from a picture without a transparent channel into a picture in a preset format with an alpha channel, and the contrast of the alpha channel is increased;
A three-dimensional surface mesh can be unfolded onto a plane, and the unfolded plane is the texture (UV) map. UV is shorthand for the U and V texture map coordinates, which are analogous to the X, Y, and Z axes of the spatial model: they define the position of each point on the picture and are tied to the 3D model, and they determine the placement of the surface texture, i.e. how a texture image is placed onto the surface of the three-dimensional model. The horizontal direction is U and the vertical direction is V; through this two-dimensional UV coordinate system, any pixel on the image can be located. The mounted UV map of the hair to be drawn contains the information of the points and faces of the hair to be drawn on the two-dimensional plane.
The UV map of the hair to be drawn is mounted, the UV map is converted from a PNG-format picture without a transparent channel into a picture in a preset format with an alpha channel, and the contrast of the alpha channel is increased. The preset format is preferably the TGA (Truevision TGA, also known as Targa) format.
The alpha channel is an 8-bit grayscale channel that records transparency information in an image with 256 levels of gray; it defines transparent, opaque, and translucent regions, where white represents opaque, black represents transparent, and gray represents translucency. Preferably, the contrast of the alpha channel is increased to the maximum.
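The patent does not specify how the contrast of the alpha channel is increased; purely as a hedged illustration, a simple contrast stretch about the mid-point (an assumption, not the patented procedure) behaves as

\alpha' = \operatorname{clamp}\big( (\alpha - 0.5)\, k + 0.5,\; 0,\; 1 \big), \quad k > 1

where a larger k pushes alpha values toward fully transparent or fully opaque, matching the stated goal of making the transparent and non-transparent channel data easier to separate.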
On the one hand, because TGA can store the data of all channels, converting the PNG-format UV map of the hair to be drawn into a TGA-format picture with an alpha channel lets the Shader obtain the data of each channel more reliably during the later hair drawing operations, and makes it easier to separate the data of the transparent channel from that of the non-transparent channels. On the other hand, increasing the contrast of the alpha channel enlarges the difference between different values, which reduces the error when extracting data for the different regions.
S203, the transparent areas of the UV map are filled with the color corresponding to any gray value between 0 and 255, and the hair strands in the UV map are blurred;
preferably, the transparent areas of the UV-map are filled with pure white.
The hair in the UV map is blurred; specifically, a layered blur is applied to the edges of the hair material in the UV map. The material is mainly used to express how the hair interacts with light (reflection, refraction, and so on).
It will be appreciated that each image has one or more color channels, and that the default number of color channels in an image depends on its color mode, i.e. the color mode of an image determines the number of color channels. For example, by default, bitmap-mode, grayscale, duotone, and indexed-color images have only one channel, while an RGB (red, green, blue) image has 3 channels. Each color channel stores information about a color element of the image, and the colors of all color channels are superimposed and mixed to produce the colors of the pixels in the image. Therefore, by filling the transparent areas of the UV map with a pure color, the data of the color channels corresponding to all parts of the hair can be separated as far as possible before the hair is drawn by the shader, so that the layering of the data between color channels becomes more obvious. Blurring the hair also allows the drawn hair to transition more naturally at the hair edges.
S204, through the shader, the blend mode is used to obtain the position and color parameters of each pixel point in the block-shaped regions on the front and back of the hair in the UV map of the hair to be drawn, and the fine mode is used to obtain the position and color parameters of each pixel point in the filament-shaped regions on the front and back of the hair in the UV map;
The position and color parameters of each pixel point in the block-shaped regions include the texture (UV) coordinates and the color parameters of each pixel point in those regions, for example a color identifier and a color value. The position and color parameters of each pixel point in the filament-shaped regions likewise include the UV coordinates and the color parameters of each pixel point in those regions, for example a color identifier and a color value. The position parameter determines in which region drawing is performed, and the color parameter determines what effect each region is drawn with.
When the position and color parameters of each pixel point in the block-shaped regions on the front and back of the hair in the UV map of the hair to be drawn are obtained through the shader in Blend SrcAlpha OneMinusSrcAlpha mode, ZWrite is turned off for the semi-transparent object, for example by setting its ZWrite value to OFF. The original parameter values of the positions and color parameters of all pixel points in the block-shaped regions (including coordinate values, color identifiers, and color values) are then obtained and written into a cache. A preset blending factor, such as the alpha of the current fragment, is then used to blend the color values previously written into the cache, giving the target color values of the pixel points in the block-shaped regions for hair drawing. The original position parameter values, the color identifiers, and the target color values in the cache are then sent to a preset graphics program interface, for example OpenGL (Open Graphics Library).
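Using the alpha of the current fragment as the blending factor corresponds to the standard over-blend that the directive Blend SrcAlpha OneMinusSrcAlpha configures; written out (standard graphics convention, not a formula quoted from the patent):

C_{\text{out}} = \alpha_{\text{src}} \cdot C_{\text{src}} + (1 - \alpha_{\text{src}}) \cdot C_{\text{dst}}

where C_src is the color of the current fragment, C_dst is the color already in the buffer, and alpha_src is the current fragment's alpha.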
The blending performed in the blend mode creates a transition between the transparent areas of the hair and the large non-transparent areas, fusing the two so that the large transparent areas can be drawn normally.
When the positions and color parameters of the pixel points in the filament-shaped regions on the front and back of the hair in the UV map are obtained through the shader in Alpha Test mode, an alpha value range (or filtering range) is established, for example between 0 and 100; within this range the value controls how much detail of the transparent area is kept, and a value of 99% means that only the very finest hair parts are taken. In practical applications, the pixel points may be filtered using an alpha cutoff parameter: any pixel whose alpha does not satisfy the condition defined by the alpha cutoff parameter is filtered out. Specifically, ZWrite is turned on for the semi-transparent object, for example by setting its ZWrite value to ON, and then the position and color parameters of the pixel points corresponding to the hair at the outermost transparent edges in the UV map are obtained in Alpha Test mode according to the preset filtering parameters.
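Expressed as a condition (standard alpha-test behaviour, with the cutoff written as a fraction rather than the 0-100 range mentioned above):

\text{keep fragment } p \iff \alpha(p) \ge \alpha_{\text{cutoff}}

Fragments that fail the test are discarded, so only the fine hair-tip details whose alpha clears the cutoff survive this pass.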
S205, the hair is drawn according to the position and color parameters of each pixel point in the block-shaped regions and the position and color parameters of each pixel point in the filament-shaped regions through the preset graphics program interface.
Specifically, the transparent areas and the block-shaped non-transparent areas in the hair are first drawn on the bottom layer through OpenGL according to the position and color parameters of each pixel point in the block-shaped regions; the hair strands and hair tips are then drawn according to the position and color parameters of each pixel point in the filament-shaped regions; and the drawn hair strands and hair tips are superimposed on the drawn transparent areas and block-shaped non-transparent areas. Drawing in this embodiment may include drawing the graphic elements of each part of the hair to be drawn, or drawing the display effects of the graphic elements and image elements.
It can be understood that, during drawing, when the position and color parameters of a pixel point in the block-shaped regions conflict with those of a pixel point in the filament-shaped regions, drawing follows the position and color parameters of the filament-shaped region.
As shown in fig. 7 and fig. 8, the block-shaped regions and the filament-shaped regions of the hair are drawn in separate layers through the two modes in the above steps, and the drawn layer images are then merged together. The final fused hair image supports the detailed display of hair strands and hair tips, and also supports the display of the back of the hair.
According to the image drawing method provided by the embodiment of the invention, two different processing modes are integrated in the Unity Shader and are used, respectively, to obtain the positions and color parameters of the region images with different characteristic attributes on the front and back of the hair to be drawn in the UV map; the block-shaped regions and the filament-shaped regions of the hair are then drawn separately and superimposed according to the obtained positions and color parameters. The drawn hair is therefore more detailed and layered, the fineness of drawing hair with Unity is improved, and no separate hair drawing system is needed to draw the hair on its own, so hardware performance is not burdened excessively.
Fig. 9 is a schematic structural diagram of an image drawing apparatus according to an embodiment of the present invention. The image drawing apparatus provided in this embodiment can be applied to the terminal device 100 shown in fig. 2, and is used to implement the image drawing method in the above-described embodiment. As shown in fig. 9, the image drawing device 30 includes:
an integration module 301 for setting a shader for drawing hair in Unity, the shader integrating a blend mode and a fine mode;
a parameter obtaining module 302, configured to obtain, through the shader, a position and a color parameter of each pixel point in a block region on the hair surface and the back in a texture map of the hair to be drawn by using the blend mode, and obtain a position and a color parameter of each pixel point in a filamentous region on the hair surface and the back in the texture map by using the fine mode;
and the drawing module 303 is configured to draw the hair according to the position and the color parameter of each pixel point in the block region and the position and the color parameter of each pixel point in the filamentous region through a preset graphical program interface.
For a specific process of each function module in the image drawing device 30 provided in this embodiment to implement each function, please refer to the specific contents described in the embodiments shown in fig. 3 to fig. 5, which is not described herein again.
In the embodiment of the invention, two different processing modes are integrated in the Unity Shader and are used, respectively, to obtain the positions and color parameters of the region images with different characteristic attributes on the front and back of the hair to be drawn in the UV map; the block-shaped regions and the filament-shaped regions of the hair are then drawn separately and superimposed according to the obtained positions and color parameters. The drawn hair is therefore more detailed and layered, the fineness of drawing hair with Unity is improved, and no separate hair drawing system is needed to draw the hair on its own, so hardware performance is not burdened excessively.
Fig. 10 is a schematic structural diagram of an image drawing apparatus according to another embodiment of the present invention. The image drawing apparatus provided in this embodiment can be applied to the terminal device 100 shown in fig. 2, and is used to implement the image drawing method in the above-described embodiment. Unlike the image drawing device 30 shown in fig. 9, the image drawing device 40 according to the present embodiment:
further, the parameter obtaining module 302 includes:
a first obtaining module 3021, configured to set the value of the ZWrite parameter of the translucent object to off, obtain, by using the blend mode, the original parameter values of the position and color parameters of each pixel point in the block area, write the obtained original parameter values into a cache, and mix, using a preset mixing factor, the color values among the original parameter values written into the cache to obtain the target color values of each pixel point in the block area for hair drawing, so that the graphical program interface performs, according to the target color values, the step of drawing the hair according to the position and the color parameter of each pixel point in the block area and the position and the color parameter of each pixel point in the filament area;
a second obtaining module 3022, configured to set the value of the ZWrite parameter of the translucent object to on, and obtain, according to preset filtering parameters, the position and color parameters of the pixel points corresponding to the hair at the outermost transparent edge positions in the texture map.
Further, the drawing module 303 includes:
a layered drawing module 3031, configured to draw, through the graphical program interface, the transparent areas and the block-shaped non-transparent areas in the hair according to the position and color parameters of each pixel point in the block region, and to draw the hair strands and hair tips according to the position and color parameters of each pixel point in the filament region;
a superimposing module 3032, configured to superimpose the drawn hair strands and hair tips on the drawn transparent areas and block-shaped non-transparent areas through the graphical program interface;
wherein, during drawing, when the position and color parameters of a pixel point in the block region conflict with those of a pixel point in the filament region, drawing follows the position and color parameters of the filament region.
Further, the apparatus further comprises:
a mounting module 401, configured to mount the texture map of the hair to be drawn through the shader;
a channel optimization module 402, configured to convert the texture map of the hair to be drawn from a picture without a transparent channel to a picture with a preset format and an alpha channel, and improve the contrast of the alpha channel;
a filling module 403, configured to fill a transparent area of the texture map with a color corresponding to any gray value between 0 and 255;
and a blurring module 404, configured to blur the hair in the texture map.
For a specific process of each function module in the image drawing device 40 provided in this embodiment to implement each function, please refer to the specific contents described in the embodiments shown in fig. 3 to fig. 8, which is not described herein again.
In the embodiment of the invention, two different processing modes are integrated in the Unity Shader and are used, respectively, to obtain the positions and color parameters of the region images with different characteristic attributes on the front and back of the hair to be drawn in the UV map; the block-shaped regions and the filament-shaped regions of the hair are then drawn separately and superimposed according to the obtained positions and color parameters. The drawn hair is therefore more detailed and layered, the fineness of drawing hair with Unity is improved, and no separate hair drawing system is needed to draw the hair on its own, so hardware performance is not burdened excessively.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Those skilled in the art will appreciate that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. An image rendering method, characterized in that the method comprises:
setting a shader for drawing hair in Unity, the shader integrating a blend mode and a fine mode;
respectively utilizing the mixed mode through the shader to obtain the position and color parameters of each pixel point in the massive areas on the hair surface and the back in the texture mapping of the hair to be drawn, and utilizing the fine mode to obtain the position and color parameters of each pixel point in the filamentous areas on the hair surface and the back in the texture mapping;
drawing the hair according to the position and the color parameter of each pixel point in the blocky area and the position and the color parameter of each pixel point in the filiform area through a preset graphical program interface;
the obtaining of the position and the color parameter of each pixel point in the massive areas on the surface and the back of the hair in the texture mapping of the hair to be drawn by using the mixed mode comprises the following steps: setting a value of a ZWrite parameter of the translucent object to OFF; acquiring the position of each pixel point in the block area and the original parameter value of the color parameter by using the mixed mode, and writing the acquired original parameter value into a cache; mixing color values written into the original parameter values in the cache by using a preset mixing factor to obtain target color values of all pixel points in the blocky area for hair drawing, so that the graphical program interface performs the step of drawing the hair according to the position and color parameters of all the pixel points in the blocky area and the position and color parameters of all the pixel points in the filiform area according to the target color values;
the obtaining of the position and color parameters of each pixel point in the filamentous regions on the surface and back of the hair in the texture map by using the fine mode includes: setting a value of a ZWrite parameter of the translucent object to ON; acquiring the position and color parameters of a pixel point corresponding to the hair at the most edge transparent position in the texture map according to preset filtering parameters;
through a preset graphical program interface, drawing the hair according to the position and the color parameter of each pixel point in the blocky area and the position and the color parameter of each pixel point in the filiform area, and the method comprises the following steps: drawing a transparent area and a blocky non-transparent area in the hair according to the position and the color parameter of each pixel point in the blocky area through the graphical program interface, and drawing the hair and the hair tip in the hair according to the position and the color parameter of each pixel point in the filiform area; superposing the drawn hair and the drawn hair tip on the drawn transparent area and the block-shaped non-transparent area; in the drawing process, when the position and the color parameter of each pixel point in the blocky area conflict with the position and the color parameter of each pixel point in the filiform area, drawing according to the position and the color parameter of the filiform area;
the method comprises the following steps of respectively utilizing the mixed mode to obtain the position and the color parameter of each pixel point in the massive areas on the surface and the back of the hair in the texture map of the hair to be drawn through the shader, and utilizing the fine mode to obtain the position and the color parameter of each pixel point in the filamentous areas on the surface and the back of the hair in the texture map, wherein the steps comprise: and carrying out fuzzy processing on the hair in the texture mapping.
2. The image drawing method according to claim 1, wherein the obtaining, by the shader, the position and the color parameter of each pixel point in the block areas on the front and back of the hair in the texture map of the hair to be drawn by using the blend mode, and before obtaining the position and the color parameter of each pixel point in the filament areas on the front and back of the hair in the texture map by using the fine mode, comprises:
mounting the texture map through the shader;
converting the texture mapping from a picture without a transparent channel into a picture with a preset format of an alpha channel;
the contrast of the alpha channel is improved.
3. The image drawing method according to claim 1, wherein the obtaining, by the shader, the position and the color parameter of each pixel point in the block areas on the front and back of the hair in the texture map of the hair to be drawn by using the blend mode, and before obtaining the position and the color parameter of each pixel point in the filament areas on the front and back of the hair in the texture map by using the fine mode, comprises:
and filling the transparent area of the texture mapping by using the color corresponding to any gray value between 0 and 255.
4. An image drawing apparatus, characterized in that the apparatus comprises:
an integration module for setting a shader for drawing hair in Unity, the shader integrating a blend mode and a fine mode;
a parameter obtaining module, configured to obtain, through the shader, a position and a color parameter of each pixel point in a block region on a hair surface and a block region on a back surface of a texture map of hair to be drawn, respectively by using the blend mode, and obtain, by using the fine mode, a position and a color parameter of each pixel point in a filamentous region on a hair surface and a back surface of the texture map;
the drawing module is used for drawing the hair according to the position and the color parameter of each pixel point in the blocky area and the position and the color parameter of each pixel point in the filiform area through a preset graphical program interface;
the obtaining of the position and the color parameter of each pixel point in the massive areas on the surface and the back of the hair in the texture mapping of the hair to be drawn by using the mixed mode comprises the following steps: setting a value of a ZWrite parameter of the translucent object to OFF; acquiring the position of each pixel point in the block area and the original parameter value of the color parameter by using the mixed mode, and writing the acquired original parameter value into a cache; mixing color values written into the original parameter values in the cache by using a preset mixing factor to obtain target color values of all pixel points in the blocky area for hair drawing, so that the graphical program interface performs the step of drawing the hair according to the position and color parameters of all the pixel points in the blocky area and the position and color parameters of all the pixel points in the filiform area according to the target color values;
the obtaining of the position and color parameters of each pixel point in the filamentous regions on the surface and back of the hair in the texture map by using the fine mode includes: setting a value of a ZWrite parameter of the translucent object to ON; acquiring the position and color parameters of a pixel point corresponding to the hair at the most edge transparent position in the texture map according to preset filtering parameters;
the drawing module includes: the layered drawing module is used for drawing the transparent area and the blocky non-transparent area in the hair according to the position and the color parameter of each pixel point in the blocky area through the graphical program interface, and drawing the hair and the hair tips in the hair according to the position and the color parameter of each pixel point in the filiform area; the superposition module is used for superposing the drawn hairline and the drawn hairtip on the drawn transparent area and the block-shaped non-transparent area through the graphical program interface; in the drawing process, when the position and the color parameter of each pixel point in the blocky area conflict with the position and the color parameter of each pixel point in the filiform area, drawing according to the position and the color parameter of the filiform area;
and the fuzzy processing module is used for carrying out fuzzy processing on the hair in the texture mapping.
5. The image rendering apparatus of claim 4, the apparatus further comprising:
the mounting module is used for mounting the texture map through the shader;
the channel optimization module is used for converting the texture mapping from a picture without a transparent channel into a picture with a preset format of an alpha channel and improving the contrast of the alpha channel;
and the filling module is used for filling the transparent area of the texture map by using the color corresponding to any gray value between 0 and 255.
6. A terminal device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the image rendering method according to any one of claims 1 to 3 when executing the computer program.
7. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the image rendering method according to any one of claims 1 to 3.
CN201710369091.XA 2017-05-23 2017-05-23 Image drawing method and device, terminal equipment and computer readable storage medium Active CN108932745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710369091.XA CN108932745B (en) 2017-05-23 2017-05-23 Image drawing method and device, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710369091.XA CN108932745B (en) 2017-05-23 2017-05-23 Image drawing method and device, terminal equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108932745A CN108932745A (en) 2018-12-04
CN108932745B true CN108932745B (en) 2020-11-17

Family

ID=64449717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710369091.XA Active CN108932745B (en) 2017-05-23 2017-05-23 Image drawing method and device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108932745B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619777B (en) * 2019-09-26 2021-08-27 重庆三原色数码科技有限公司 Criminal investigation and experiment intelligent training and assessment system creation method based on VR technology
CN111028338B (en) * 2019-12-06 2023-08-08 珠海金山数字网络科技有限公司 Image drawing method and device based on Unity3D
CN111292392A (en) * 2020-01-17 2020-06-16 上海米哈游天命科技有限公司 Unity-based image display method, apparatus, device and medium
CN111402384A (en) * 2020-03-25 2020-07-10 北京字节跳动网络技术有限公司 Image generation method and device, electronic equipment and computer readable storage medium
CN111508053B (en) * 2020-04-26 2023-11-28 网易(杭州)网络有限公司 Rendering method and device of model, electronic equipment and computer readable medium
CN114419193B (en) * 2022-01-24 2023-03-10 北京思明启创科技有限公司 Image drawing method, image drawing device, electronic equipment and computer-readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1829559A1 (en) * 2006-03-01 2007-09-05 Johnson & Johnson Consumer Companies, Inc. Method for improving sleep behaviors
EP1986158A2 (en) * 2007-04-27 2008-10-29 DreamWorks Animation LLC Decorating computer generated character with surface attached features
EP2443167A1 (en) * 2009-06-15 2012-04-25 Basf Se Process for preparing regioregular poly-(3-substituted) thiophenes, selenophenes, thia- zoles and selenazoles
CN104574480A (en) * 2015-01-16 2015-04-29 北京科艺有容科技有限责任公司 Rapid generation method of role hair style in three-dimensional animation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1829559A1 (en) * 2006-03-01 2007-09-05 Johnson & Johnson Consumer Companies, Inc. Method for improving sleep behaviors
EP1986158A2 (en) * 2007-04-27 2008-10-29 DreamWorks Animation LLC Decorating computer generated character with surface attached features
EP2443167A1 (en) * 2009-06-15 2012-04-25 Basf Se Process for preparing regioregular poly-(3-substituted) thiophenes, selenophenes, thia- zoles and selenazoles
CN104574480A (en) * 2015-01-16 2015-04-29 北京科艺有容科技有限责任公司 Rapid generation method of role hair style in three-dimensional animation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Practical Real-Time Hair Rendering and Shading; Thorsten Scheuermann; SIGGRAPH '04: ACM SIGGRAPH 2004 Sketches; 2004-08-31; left column, paragraph 1 to right column, paragraph 4 *

Also Published As

Publication number Publication date
CN108932745A (en) 2018-12-04

Similar Documents

Publication Publication Date Title
CN108932745B (en) Image drawing method and device, terminal equipment and computer readable storage medium
CN107680042B (en) Rendering method, device, engine and storage medium combining texture and convolution network
CN105138317B (en) Window display processing method and device for terminal device
TW202234341A (en) Image processing method and device, electronic equipment and storage medium
US20080307341A1 (en) Rendering graphical objects based on context
CN106886353B (en) Display processing method and device of user interface
CN105550973B (en) Graphics processing unit, graphics processing system and anti-aliasing processing method
CN107770618A (en) A kind of image processing method, device and storage medium
CN112884874B (en) Method, device, equipment and medium for applying applique on virtual model
WO2023093291A1 (en) Image processing method and apparatus, computer device, and computer program product
CN111489429A (en) Image rendering control method, terminal device and storage medium
CN106251382B (en) Method and system for realizing picture style switching, camera rendering and theme updating
CN114565708A (en) Method, device and equipment for selecting anti-aliasing algorithm and readable storage medium
GB2580740A (en) Graphics processing systems
CN110866965A (en) Mapping drawing method and device for three-dimensional model
US20120218261A1 (en) Graphic system comprising a fragment graphic module and relative rendering method
CN115131260A (en) Image processing method, device, equipment, computer readable storage medium and product
CN108898551B (en) Image merging method and device
WO2024125328A1 (en) Live-streaming image frame processing method and apparatus, and device, readable storage medium and product
CN114491914A (en) Model simplifying method and device, terminal device and readable storage medium
CN115953597B (en) Image processing method, device, equipment and medium
WO2010134292A1 (en) Drawing device and drawing method
CN110310341A (en) Method, device, equipment and storage medium for generating default parameters in color algorithm
CN111145358B (en) Image processing method, device and hardware device
CN117893663B (en) WebGPU-based Web graphic rendering performance optimization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant