CN111369655A - Rendering method and device and terminal equipment

Rendering method and device and terminal equipment

Info

Publication number
CN111369655A
Authority
CN
China
Prior art keywords
plane
virtual reference
model
rendering
pixel point
Prior art date
Legal status
Granted
Application number
CN202010137851.6A
Other languages
Chinese (zh)
Other versions
CN111369655B (en)
Inventor
郑文劲
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202010137851.6A
Publication of CN111369655A
Application granted
Publication of CN111369655B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 15/04 Texture mapping
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a rendering method, a rendering device and a terminal device. The method includes: acquiring a target pixel point on a model plane; setting a plurality of virtual reference surfaces corresponding to the model plane; selecting a target virtual reference surface from the virtual reference surfaces; determining the intersection point of the target virtual reference surface with the line connecting a preset observation position and the target pixel point; determining the coordinates of the projection point of that intersection point on the model plane; determining the rendering pixel point corresponding to the target pixel point based on the magnitude relationship between the gray value at the corresponding position of the projection-point coordinates in a preset noise map and the gray value corresponding to the target virtual reference surface; and rendering the model for the observation position according to the rendering pixel value of the rendering pixel point. In this method, a suitable rendering pixel point is found for each target pixel point on the model plane, and the rendering pixel value of that rendering pixel point is used to render the model, thereby realizing the fluff effect while effectively reducing the consumption of computing resources and human resources.

Description

Rendering method and device and terminal equipment
Technical Field
The invention relates to the technical field of computer image processing and computer graphics, in particular to a rendering method, a rendering device and terminal equipment.
Background
Fluffy textures are very common in real life, from expensive fur to cute plush dolls to furry small animals. The soft, warm and luxurious feel of fluff is widely appreciated, so many designers like to add fluff elements to their works, and fluff appears very frequently in films and games: animal film characters with various fur textures, furry cats in game scenes, clothes with a fluff texture in mobile games, and so on. To simulate a convincing fluff effect, the three-dimensional appearance of the fluff must be reproduced on the surface of the model. For offline-rendered film and television works, creators can model the fur directly and pursue maximum realism without worrying about cost. Real-time rendered 3D (Three-Dimensional) games, however, have to balance effect against performance, so other methods are needed to achieve the fluff effect.
In the related art, under a limited performance budget the three-dimensional appearance of fluff is mainly produced by the general insert method and the shell-type insert method. In the general insert method, a patch represents a cluster of hairs oriented along the hair growth direction; to simulate a dense fur surface, the model must be designed in advance and a huge number of patches must be created on its surface, which not only consumes considerable human resources during production but is also hard to modify afterwards and expensive to compute. In the shell-type insert method, wrapping surfaces are added outward following the shape of the model, and a rendering algorithm displays color where the hairs are located. The shell-type insert method simulates the three-dimensional fluff effect by stacking patches at a relatively high density; although it can effectively reduce run-time performance cost, the model still has to be designed in advance, consuming extra human resources. Sometimes the model has to be generated programmatically, which consumes additional programming resources for post production, and in the game engines of some commercial games the shell-type insert method even requires modifying the underlying code of the rendering pipeline.
Disclosure of Invention
In view of the above, the present invention provides a rendering method, an apparatus and a terminal device to reduce consumption of computational resources and human resources.
In a first aspect, an embodiment of the present invention provides a rendering method, where the rendered model includes a model plane; the method includes: acquiring a target pixel point on the model plane; setting a plurality of virtual reference surfaces corresponding to the model plane, the virtual reference surfaces being arranged inside the model; selecting a target virtual reference surface from the virtual reference surfaces; determining the intersection point of the target virtual reference surface with the line connecting a preset observation position and the target pixel point; determining the coordinates of the projection point of the intersection point on the model plane; determining the rendering pixel point corresponding to the target pixel point based on the magnitude relationship between the gray value at the corresponding position of the projection-point coordinates in a preset noise map and the gray value corresponding to the target virtual reference surface; and rendering the model for the observation position according to the rendering pixel value of the rendering pixel point.
In a preferred embodiment of the present invention, the rendered model is an arithmetic model.
In a preferred embodiment of the present invention, the step of obtaining the target pixel point on the model plane includes: and acquiring target pixel points on the surface of the model according to the observation position.
In a preferred embodiment of the present invention, the step of determining the coordinates of the projection point of the intersection point on the model plane includes: determining the distance between the target virtual reference surface and the model plane; determining the included angle between the model plane and the line connecting the observation position and the target pixel point; multiplying the distance by a trigonometric function of the included angle to determine the offset of the target pixel point; and adding the offset to the coordinates of the target pixel point to obtain the coordinates of the projection point of the intersection point on the model plane.
In a preferred embodiment of the present invention, the step of determining the rendering pixel point corresponding to the target pixel point based on the magnitude relationship between the gray value at the corresponding position of the projection-point coordinates in the preset noise map and the gray value corresponding to the target virtual reference surface includes: judging whether the gray value at the corresponding position of the projection-point coordinates in the preset noise map is greater than the gray value corresponding to the target virtual reference surface; if so, taking the projection point as the rendering pixel point corresponding to the target pixel point; if not, replacing the target virtual reference surface with the next virtual reference surface in the plane order from the model plane to the innermost virtual reference surface, and returning to the step of determining the intersection point of the target virtual reference surface with the line connecting the preset observation position and the target pixel point, until all the virtual reference surfaces have been traversed.
In a preferred embodiment of the present invention, the method further includes: and if the target virtual reference surface is the innermost plane, taking the projection point corresponding to the innermost plane as a rendering pixel point corresponding to the target pixel point.
In a preferred embodiment of the present invention, the method further includes: determining the gray value corresponding to the target virtual reference surface through the corresponding relation between the following planes and the gray value:
N = 255 × (n - n1) / n

where N is the gray value corresponding to the target virtual reference surface; n1 indicates that the target virtual reference surface is the n1-th layer in the plane order from the model plane to the innermost virtual reference surface; and n is the total number of layers in the plane order.
In a preferred embodiment of the present invention, the method further includes: determining the gray value corresponding to the target virtual reference surface through the corresponding relation between the following planes and the gray value:
M = 255 × (m - m1) / m

where M is the gray value corresponding to the target virtual reference surface; m1 is the distance between the target virtual reference surface and the model plane; and m is the distance between the innermost plane and the model plane.
In a preferred embodiment of the present invention, the step of rendering the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point includes: determining a semi-transparent mask of the target pixel point, a normal value of the target pixel point and the color of the target pixel point based on the rendered pixel data of the target pixel point; and rendering the model based on the determined semi-transparent mask of the target pixel point, the normal value of the target pixel point and the color of the target pixel point.
In a preferred embodiment of the present invention, the method further includes: adding anisotropic highlight and edge light to the model, and performing scattering calculation on the model; rendering a model based on results of the scattering calculations.
In a second aspect, an embodiment of the present invention further provides a rendering apparatus, where a rendered model includes a model plane; the device comprises: the target pixel point acquisition module is used for acquiring a target pixel point on the model plane; the virtual reference surface setting module is used for setting a plurality of virtual reference surfaces corresponding to the model plane, and the virtual reference surfaces are arranged in the model; the target virtual reference surface selection module is used for selecting a target virtual reference surface from the virtual reference surfaces; the intersection point determining module is used for determining the intersection point of a connecting line of a preset observation position and a target pixel point and a target virtual reference plane; the projection point coordinate determination module is used for determining the coordinates of the projection points of the intersection points on the model plane; the rendering pixel point determining module is used for determining a rendering pixel point corresponding to the target pixel point based on the size relation between the gray value of the corresponding position of the coordinates of the projection point in the preset noise image and the gray value corresponding to the target virtual reference surface; and the rendering pixel value rendering module is used for rendering the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
In a third aspect, an embodiment of the present invention further provides a terminal device, which includes a processor and a memory, where the memory stores computer-executable instructions that can be executed by the processor, and the processor executes the computer-executable instructions to implement the steps of the rendering method.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are called and executed by a processor, the computer-executable instructions cause the processor to implement the steps of the rendering method.
The embodiment of the invention has the following beneficial effects:
according to the rendering method, the rendering device and the terminal equipment provided by the embodiment of the invention, for each target pixel point on the model plane, a proper rendering pixel point corresponding to the target pixel point is found, and the rendering pixel value of the rendering pixel point is used for rendering the model, so that the fluff effect is realized; by rendering the model through the rendering pixel values of the rendering pixels, the parallax hair visual effect can be generated without adding additional models such as a patch or a clad and the like, the consumption of computing resources and human resources is effectively reduced, and the bottom codes of the rendering pipelines are not required to be modified.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part may be learned by the practice of the above-described techniques of the disclosure, or may be learned by practice of the disclosure.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of a model according to an embodiment of the present invention;
fig. 2 is a flowchart of a rendering method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a rendering method according to an embodiment of the present invention;
fig. 4 is a schematic perspective view of an embodiment of the present invention;
FIG. 5 is a flow chart of another rendering method according to an embodiment of the present invention;
fig. 6 is a flowchart of a rendering method according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a rendering apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, the general insert method and the shell-type insert method are mainly used to create the three-dimensional appearance of fluff under a limited performance budget. However, both methods require adding patches or wrapping surfaces to the model, and the shell-type insert method may even require modifying the underlying code of the rendering pipeline, which increases the consumption of computing resources and human resources. On this basis, embodiments of the present invention provide a rendering method, a rendering device and a terminal device. The technique can be applied to devices capable of human-computer interaction, such as computers, mobile phones and tablet computers, and is particularly suitable for game scenes such as music games, card games and competitive games.
To facilitate understanding of the embodiments, the rendering method disclosed in the embodiments of the present invention is first described in detail. The rendered model includes a model plane, and a plurality of virtual reference surfaces corresponding to the model plane are set with the model plane as a reference. Referring to the structural schematic diagram of a model shown in fig. 1, the solid line in fig. 1 represents the model plane and the dotted lines represent the virtual reference surfaces; from outside to inside, the first layer is the model plane and the subsequent layers are virtual reference surfaces. The model plane is the actual surface of the model, whereas a virtual reference surface is the plane on which a certain pixel point appears, to the human eye, to be located; the virtual reference surfaces do not exist in the actual space, while the model plane does. That is, for a point on the model plane, the human eye assumes the point lies on a virtual reference surface rather than on the model plane. The method provided by this embodiment relies on the movement of the virtual camera: as the camera moves, the effect produces a visual change in spatial depth, giving the image a stereoscopic appearance. In the actual model rendering process, rendering is performed frame by frame, and within each frame the virtual camera is fixed, i.e. the observation position does not change.
Wherein, the distance between each two layers of planes in the model can be equal or unequal, and is not limited herein; the number of virtual reference surfaces is likewise not limited. For a model, an observation position is fixed, the observation position represents the position of a virtual camera, and the virtual camera is equivalent to human eyes, images seen by the human eyes are equivalent to the virtual camera observing the model from the observation position, and a line emitted from the observation position is the shooting direction of the virtual camera.
Based on the above description, referring to the flowchart of a rendering method shown in fig. 2, the rendering method includes the following steps:
step S202, obtaining a target pixel point on the model plane.
The model plane is represented as a plurality of pixel points on the screen, and each pixel point is written with pixel data, wherein the pixel data comprises the color, the gray level and the like of the pixel. And acquiring some target pixel points from the model, wherein the target pixel points are pixel points needing to be rendered.
And step S204, setting a plurality of virtual reference surfaces corresponding to the model plane, wherein the virtual reference surfaces are arranged in the model.
As shown in fig. 1, the virtual reference surface is disposed within the model. The interval and position set by the virtual reference surface are not limited. It should be noted that the virtual reference plane may include a model plane.
In step S206, a target virtual reference plane is selected from the virtual reference planes.
The target virtual reference surface is the surface on which the target pixel point visually appears to lie, as observed from the observation position, in the current pixel replacement operation. That is, although the data of the target pixel point should be the data of a certain position on the model plane, when the target pixel point is observed from the observation position it uses the data of another position, so that the data changes as the observation point moves and a visual stereoscopic effect is achieved. If the model plane itself is taken as the target virtual reference surface, the target pixel point of the current pixel replacement operation still appears to lie on the model plane, i.e. its visual position coincides with its actual position.
And S208, determining the intersection point of the connecting line of the preset observation position and the target pixel point and the target virtual reference plane.
Referring to a schematic diagram of a rendering method shown in fig. 3, as shown in fig. 3, a solid line plane in fig. 3 represents a model plane, and a dotted line plane represents a target virtual reference plane. When a target pixel point (i.e., point a in fig. 3) on the model plane is determined, the observation position is connected to the target pixel point a, and the obtained connection line is the sight line direction. The point B of intersection of the extended viewing direction and the target virtual reference plane can be obtained. This means that the point a actually on the model plane appears to be on the position of the point B of the target virtual reference plane when viewed from the observation position, and may be referred to as "seeing the point B through the point a".
And step S210, determining the coordinates of the projection point of the intersection point on the model plane.
As shown in fig. 3, the projection point of the intersection point B on the model plane is point C: a line BC perpendicular to the model plane is drawn from point B and intersects the model plane at point C, which is the projection point. The significance of point C is that it must be used as the rendering pixel point of point A in order to realize "seeing point B through point A", i.e. point A looks transparent and point B is seen through it.
It should be noted here that the coordinates of the intersection point of the connection line between the preset observation position and the target pixel point and the target virtual reference plane may not be determined, and only the coordinates of the projection point of the intersection point on the model plane need to be determined.
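As an illustration only, the following minimal sketch (Python with NumPy) shows the geometry of steps S208-S210 in 3D space; the function name, the assumption that the model plane is locally flat with outward unit normal `normal`, and the toy numbers are all hypothetical rather than part of the patent:

```python
import numpy as np

def intersection_and_projection(eye, a, normal, depth):
    """Extend the view ray from the observation position (eye) through target
    pixel point A until it meets the virtual reference surface lying 'depth'
    below the model plane, then drop that intersection point B back onto the
    model plane to obtain projection point C."""
    n = normal / np.linalg.norm(normal)          # outward unit normal of the model plane
    v = a - eye
    v = v / np.linalg.norm(v)                    # line-of-sight direction through A
    t = depth / max(np.dot(-n, v), 1e-6)         # ray distance from A down to the reference surface
    b = a + t * v                                # intersection point B on the target virtual reference surface
    c = b + depth * n                            # projection point C of B on the model plane
    return b, c

# Toy numbers (assumed): eye above the plane z = 0, A on that plane, surface 1 unit below.
b, c = intersection_and_projection(np.array([0.0, 0.0, 5.0]),
                                   np.array([1.0, 0.0, 0.0]),
                                   np.array([0.0, 0.0, 1.0]), 1.0)
print(b, c)
```

In practice the same result is obtained directly in UV space through steps A1-A4 described later, so point B itself never needs to be computed explicitly.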
Step S212, based on the size relationship between the gray value of the corresponding position of the projection point in the preset noise image and the gray value corresponding to the target virtual reference surface, determining the rendering pixel point corresponding to the target pixel point.
To realize "seeing point B through point A", the gray value of point C and the gray value corresponding to the target virtual reference surface must satisfy a preset magnitude relationship. Determining whether the gray values satisfy this relationship establishes through how many layers of planes one has to "see" in order to "see point B through point A", which is essential for producing the stereoscopic effect.
A noise map is preset; it provides the gray value corresponding to the projection point (point C). The gray value corresponding to a plane can be determined according to the position of that plane in the plane order.
For example, suppose the preset condition is that the gray value of point C must be greater than the gray value corresponding to the target virtual reference surface, and the gray value at the corresponding position of point C's coordinates in the preset noise map is 50. Suppose the preset correspondence between planes and gray values is that the first-layer plane has a gray value of 255 and the second-layer plane has a gray value of 200, decreasing thereafter; if the target virtual reference surface is the second-layer plane, its gray value is 200. Since 200 is greater than 50, the gray value of projection point C in this example is not greater than the gray value corresponding to the target virtual reference surface, so point C is not the rendering pixel point corresponding to point A, and another virtual reference surface must be selected to obtain a different candidate rendering pixel point.
For another example, if the gray value of the point C is greater than the gray value corresponding to the target virtual reference surface, the rendering operation is performed on the pixel data of the point a of the target pixel point, that is, the point C is the pixel rendering point corresponding to the point a.
Step S214, rendering the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
After the corresponding rendering pixel point has been determined for each target pixel point on the model plane, the rendering pixel values of those rendering pixel points are used to render the model for the observation position. Some of the values obtained for points on the model plane are in fact values of points on more inward virtual reference surfaces, so the model exhibits a fluff effect. The model may be an arithmetic model.
The general model rendering includes the following three steps:
1. geometric processing: inputting a geometric model of the 3D model (including vertex data which defines the geometric shape of the object), and transforming the 3D model from a model local space to a screen space by using the geometric model and matching with transformation data of the model
2. Rasterization: rasterization is performed on each triangle of the vase transformed into screen space, resulting in each pixel covered by each triangle, and certain attributes at the vertices of the triangle, such as the normal, the map UV (U, V, i.e. texture map coordinates, and similar X, Y, Z axes for the spatial model, which define the information of the position of each point on the picture, these points being interrelated with the 3D model to determine the position of the surface texture map, UV, i.e. exactly mapping each point on the image to the surface of the model object, and the image smoothing interpolation, i.e. UV mapping, by the software at the positions of the gaps between the points). The arithmetic model in this embodiment is actually the data to which UV points, and this data is stored on a map.
3. Pixel processing: and calculating to obtain the final rendering result of each pixel by utilizing various data obtained by interpolation in the previous rasterization stage and matching with the illumination parameters and the chartlet data.
The stereoscopic effect generally arises from the visual difference produced by depth relative to the lens: the position "seen through point A" changes as the viewpoint moves. Referring to the schematic diagram of the stereoscopic effect in fig. 4, the solid-line plane is the model plane and the dashed-line plane is the virtual reference surface; as the lens (observation position) moves, the position seen through point A changes, i.e. the point of the virtual reference surface seen before the lens moves differs from the one seen after.
According to the rendering method provided by the embodiment of the invention, a suitable rendering pixel point is found for each target pixel point on the model plane, and the rendering pixel value of that rendering pixel point is used to render the model, thereby realizing the fluff effect. Because the model is rendered with the rendering pixel values of the rendering pixel points, the parallax fur visual effect can be produced without adding extra geometry such as patches or wrapping surfaces, the consumption of computing resources and human resources is effectively reduced, and the underlying code of the rendering pipeline does not need to be modified.
The embodiment of the invention also provides another rendering method, which is realized on the basis of the method of the embodiment; the method mainly describes a specific processing mode of performing replacement operation on pixel data of a target pixel point based on the size relation between the gray value of the corresponding position of the projection point in a preset noise image and the gray value corresponding to a target virtual reference surface. Fig. 5 is a flowchart of another rendering method, which includes the steps of:
step S502, obtaining a target pixel point on the model plane.
The target pixel points are not necessarily all of the pixel points of the model plane; the target pixel points on the model surface can be obtained according to the observation position, for example by selecting the pixel points of the model plane that are visible from the observation position as the target pixel points. For instance, if the model plane is a sphere and the observation position is outside the sphere, all the pixel points on the outer side of the model plane can be selected as target pixel points.
If the target pixel points are all the pixel points of the model plane, traversal can be performed according to a certain sequence, and it is ensured that all the pixel points on the model plane are traversed. For example: the pixel points in the first row on the model plane can be traversed according to the sequence from left to right, the pixel points in the second row on the model plane can be traversed according to the sequence from left to right, and then the pixel points in the last row on the model plane can be traversed one by one until the pixel points on the rightmost side of the last row on the model plane are traversed.
Step S504, a plurality of virtual reference surfaces corresponding to the model plane are set, and the virtual reference surfaces are arranged in the model.
In step S506, a target virtual reference plane is selected from the virtual reference planes.
The rendering operation of this embodiment applies the concept of parallax. Parallax is the difference in apparent direction that arises when the same target is observed from two points a certain distance apart. As shown in fig. 3, after the line of sight reaches the model plane, the color corresponding to A is not displayed directly on the screen; instead, the line of sight is extended backwards a certain distance to B, and the pixel of the projection point C on the model surface corresponding to B is taken. This gives the user the impression that the model surface lies at the position of the target virtual reference surface, even though the target virtual reference surface does not actually exist in the game space. In other words, the color information of a point on an object's surface that the user sees through the screen is not the information of that point itself, but the color information of a nearby point on the surface.
Step S508, determining an intersection point between a connection line between the preset observation position and the target pixel point and the target virtual reference plane.
Referring to fig. 3, the coordinates of the model plane, the coordinates of the target virtual reference surface, the position of target pixel point A and the coordinates of the observation position are all known. First, the function of the line connecting the observation position and A, i.e. the line-of-sight direction, is calculated; the coordinates of the intersection point B of the line of sight and the target virtual reference surface can then be calculated. It should be noted that the target virtual reference surface and the model plane are generally calculated in three-dimensional space, and the calculated data is then converted into the UV coordinate system (texture-map coordinate system). The UV coordinates define the position of each point of the picture; UV maps each point of the image exactly onto the surface of the model object.
Step S510, determining coordinates of the projection point of the intersection point on the model plane.
As shown in fig. 3, after the coordinates of B have been determined, the function of the line BC passing through B and perpendicular to the model plane is determined, and the coordinates of its intersection point C with the model plane are calculated; C is the projection point. In the above approach the coordinates of B are calculated first and the coordinates of C are then derived from them. In practice, the calculation of the coordinates of B can be omitted and the coordinates of C obtained in another way, through steps A1-A4:
step A1, determining the distance between the target virtual reference plane and the model plane.
Knowing the coordinates of the target virtual reference plane and the model plane, as shown in fig. 3, the distance between the target virtual reference plane and the model plane can be calculated, which is the length of BC.
And step A2, determining the included angle between the connecting line of the observation position and the target pixel point and the model plane.
Knowing the coordinates of a and the coordinates of the observation position, a function of the line connecting a and the observation position, i.e. a function of the gaze direction, can be calculated, and based on the function of the gaze direction and the function of the model plane, the angle between the gaze direction and the model plane can be determined, which can be understood as the angle between AB and the model plane.
And step A3, performing trigonometric function operation on the distance and the included angle to multiply, and determining the offset of the target pixel point.
For example, suppose that, compared with A, point C differs only in its U-axis coordinate, the included angle is 60 degrees (pi/3 in radians), and the distance between the target virtual reference surface and the model plane (i.e. the length of BC) is 3. The offset of the target pixel point (i.e. the length of AC) is then AC = BC / tan(60°) = 3 / √3 ≈ 1.73.
And step A4, adding the coordinates of the target pixel points and the offset to obtain the coordinates of the projection point of the intersection point on the model plane.
Since C differs from A only along that single axis, increasing the corresponding coordinate value of A by the offset of approximately 1.73 gives the coordinates of C, i.e. the coordinates of the projection point of the intersection point on the model plane.
In the method, based on the steps, the coordinate calculation of the intersection point of the connecting line of the target pixel point and the target virtual reference surface can be omitted, and the coordinate of the projection point of the intersection point on the model plane is directly determined.
In the above steps only a straight-line offset is considered, i.e. AC is a straight line. In practical applications, a compensating offset operation and an offset accumulation operation based on the plane and the camera angle also need to be added; a more accurate projection point can be obtained through these operations. In addition, when the model plane is a curved surface and the projection point lies at the edge of the curved surface, certain boundary handling can be applied, such as boundary space correction and boundary clipping.
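A minimal sketch of steps A1-A4 in UV space, assuming the offset is applied along a fixed direction in the UV plane (here the U axis, as in the example above); the function and parameter names are illustrative, not taken from the patent:

```python
import math

def project_point_uv(uv_a, view_angle_deg, plane_distance, offset_dir=(1.0, 0.0)):
    """Steps A1-A4 in UV space: shift the target pixel's UV by
    |AC| = |BC| / tan(angle) along offset_dir to get the UV of projection point C."""
    angle = math.radians(view_angle_deg)             # A2: angle between the line of sight and the model plane
    offset = plane_distance / math.tan(angle)        # A3: offset of the target pixel point
    du, dv = offset_dir
    return (uv_a[0] + offset * du, uv_a[1] + offset * dv)   # A4: add the offset to A's coordinates

# Worked example from the text: angle 60 degrees, |BC| = 3, so |AC| = 3 / tan(60°) ≈ 1.73.
print(project_point_uv((0.0, 0.0), 60.0, 3.0))       # approximately (1.732, 0.0)
```

In a real material the offset would additionally be scaled into texture units and combined with the compensation and accumulation operations mentioned above.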
Step S512, judging whether the gray value of the corresponding position of the coordinates of the projection point in the preset noise image is larger than the gray value corresponding to the target virtual reference surface. If yes, go to step S514; if not, step S516 is performed.
And respectively determining the size relationship between the gray value of the corresponding position of the coordinates of the projection point in the preset noise map (the gray value is called as a first gray value) and the gray value corresponding to the target virtual reference surface (the gray value is called as a second gray value), and judging whether the first gray value is larger than the second gray value.
Each coordinate in the noise map corresponds to a gray value, and the gray value of the position of the projection point in the noise map is determined, and the gray value is the first gray value. In this embodiment, the noise map and the model plane are in the same coordinate system, and the coordinates of the projection point are in the model plane, so as long as the gray value corresponding to the coordinates of the projection point in the noise map is determined, the gray value is the first gray value, and the gray value is read by the UV coordinates. For example, if the coordinates of the projection point in the model plane are (11,12), and the gray-scale value corresponding to the coordinates (11,12) in the noise map is 35, the first gray-scale value is 35.
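A small sketch of reading the first gray value, assuming the noise map is an 8-bit grayscale image addressed by normalized UV coordinates with nearest-neighbour sampling; the data and names are illustrative:

```python
def sample_noise(noise, uv):
    """Nearest-neighbour lookup of an 8-bit grayscale noise map at UV in [0, 1)."""
    h, w = len(noise), len(noise[0])
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return noise[y][x]

noise = [[35, 120], [200, 50]]            # toy 2x2 noise map
print(sample_noise(noise, (0.1, 0.2)))    # 35 -> the first gray value
```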
The second gray value may be determined based on a correspondence of the plane to the gray value. The correspondence between the plane and the gray scale value is set in advance, so that the second gray scale value corresponding to the target virtual reference surface can be determined by knowing the parameters (for example, the number of layers, the distance from the model plane, or the number) of the target virtual reference surface and substituting the parameters into the correspondence.
If the second gray scale value is determined by numbering, all planes in the plane sequence can be numbered, and the second gray scale value corresponding to each number is set. For example: assuming that the plane order has 3 planes, wherein the second gray scale value corresponding to the 2 nd plane is 25, if the number of the target virtual reference plane is 2, the second gray scale value corresponding to the target virtual reference plane is 25.
If the second gray value is determined by the number of layers, the gray value corresponding to the target virtual reference surface can be determined by the following corresponding relation between the plane and the gray value:
N = 255 × (n - n1) / n

where N is the gray value corresponding to the target virtual reference surface; n1 indicates that the target virtual reference surface is the n1-th layer in the plane order from the model plane to the innermost virtual reference surface; and n is the total number of layers in the plane order.

That is, the model plane and the virtual reference surfaces are first ordered from the model plane inward to obtain the plane order, and a second gray value is assigned evenly to each layer. For example, if the plane order has 20 layers in total and the target virtual reference surface is the 10th layer of the plane order, the second gray value corresponding to the target virtual reference surface is 255 × (20 - 10) / 20 = 127.5.
If the second gray value is determined by the distance from the model plane, the gray value corresponding to the target virtual reference plane can be determined by the following correspondence between the plane and the gray value:
M = 255 × (m - m1) / m

where M is the gray value corresponding to the target virtual reference surface; m1 is the distance between the target virtual reference surface and the model plane; and m is the distance between the innermost plane and the model plane.

That is, the second gray values are assigned according to the distance from the model plane: the farther a surface is from the model plane, the smaller its second gray value. For example, if the distance between the innermost plane and the model plane is 2000 and the distance between the target virtual reference surface and the model plane is 500, the second gray value corresponding to the target virtual reference surface is 255 × (2000 - 500) / 2000 = 191.25.
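The two correspondences, as reconstructed above, can be written as a short sketch; the 255 scaling and the exact linear form follow the reconstructed formulas and should be read as one consistent choice rather than the only form the text allows:

```python
def gray_by_layer(n1, n):
    """Second gray value when the target virtual reference surface is layer n1
    of n layers (layer 1 = model plane); deeper layers get smaller values."""
    return 255.0 * (n - n1) / n

def gray_by_distance(m1, m):
    """Second gray value when the target surface is m1 away from the model plane
    and the innermost plane is m away; farther surfaces get smaller values."""
    return 255.0 * (m - m1) / m

print(gray_by_layer(10, 20))        # 127.5  (layer example above)
print(gray_by_distance(500, 2000))  # 191.25 (distance example above)
```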
In this way, the correspondence between planes and gray values can be set by the layer number of the target virtual reference surface, by its distance from the model plane, by a plane number, or in some other way; it is only necessary to ensure that the deeper the target virtual reference surface lies, the smaller its second gray value.
The judgment condition requires the first gray value to be larger than the second gray value. If the first gray value corresponding to the projection point is larger than the second gray value corresponding to the target virtual reference surface, the second gray value is closer to black than the first, so point B can be seen through point A and the apparent position of point A lies further toward the inside; this produces a stronger sense of depth and thus the fluff effect. For example, if the first gray value is 120 and the second gray value is 100, the first gray value is greater than the second gray value and the judgment condition is considered satisfied.
It should be noted that, in addition to the fact that the first gray scale value is greater than the second gray scale value, the determination condition may also be that a difference between the first gray scale value and the second gray scale value is smaller than a preset threshold value, that is, the first gray scale value is closer to the second gray scale value; the first gray value may be greater than the second gray value, and a difference between the first gray value and the second gray value is smaller than a preset threshold.
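The judgment-condition variants described above, as a small sketch; the threshold value and function name are illustrative assumptions:

```python
def passes_judgment(first_gray, second_gray, mode="greater", threshold=30):
    if mode == "greater":            # first gray value strictly larger
        return first_gray > second_gray
    if mode == "close":              # difference below a preset threshold
        return abs(first_gray - second_gray) < threshold
    if mode == "greater_and_close":  # both conditions at once
        return first_gray > second_gray and first_gray - second_gray < threshold
    raise ValueError(mode)

print(passes_judgment(120, 100))     # True: 120 > 100, as in the example above
```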
And step S514, taking the projection point as a rendering pixel point corresponding to the target pixel point.
If the gray value of the corresponding position of the coordinates of the projection point in the preset noise map is larger than the gray value corresponding to the target virtual reference surface, the projection point corresponding to the target virtual reference surface meets the judgment condition, the projection point is used as a rendering pixel point corresponding to the target pixel point, and after rendering, the 'seeing the B point through the A point' can be realized, the A point is made to look transparent, and the position of the B point is seen through the A point.
During rendering, preprocessing may be performed based on the replaced pixel data: determining a semi-transparent mask of the target pixel point, a normal value of the target pixel point and the color of the target pixel point based on the rendered pixel data of the target pixel point; and rendering the model based on the determined semi-transparent mask of the target pixel point, the normal value of the target pixel point and the color of the target pixel point.
The semi-transparent mask is a graded mask generated from the pixel data of the replaced target pixel point and is used to produce the rough-tip effect. The normal value of the target pixel point and the color of the target pixel point can be obtained directly from the pixel data of the rendering pixel point. The model can then be rendered based on the preprocessed data.
In the method, the semi-transparent mask of the target pixel point can be determined based on the pixel data of the rendering pixel point so as to generate a tip roughness effect, and the normal value and the color of the target pixel point can be obtained.
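Purely as an illustration of this preprocessing step, the sketch below derives the per-pixel shading inputs from the chosen rendering pixel; the gradual alpha used for the semi-transparent mask is one plausible choice and is assumed, not specified by the patent:

```python
def preprocess(render_px, layer_index, total_layers):
    """Illustrative only: build the shading inputs for one target pixel point."""
    color = render_px["color"]               # taken directly from the rendering pixel point
    normal = render_px["normal"]             # taken directly from the rendering pixel point
    alpha = 1.0 - layer_index / max(total_layers, 1)   # assumed: mask fades toward the hair tip
    return {"color": color, "normal": normal, "alpha": alpha}

print(preprocess({"color": (0.4, 0.3, 0.2), "normal": (0.0, 0.0, 1.0)}, 3, 10))
```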
Step S516, replacing the target virtual reference plane with the next virtual reference plane of the target virtual reference plane according to the plane sequence from the model plane to the innermost virtual reference plane.
If the gray value of the corresponding position of the coordinates of the projection point in the preset noise map is not greater than the gray value corresponding to the target virtual reference surface, which indicates that the projection point corresponding to the target virtual reference surface does not satisfy the judgment condition, the target virtual reference surface needs to be replaced, and step S508 is executed again. The planes are firstly sorted according to the plane sequence from the model plane to the virtual reference plane at the innermost layer, namely, the model plane is arranged at the first layer according to the sequence from outside to inside, the virtual reference plane at the next inner layer of the model plane is arranged at the second layer, and the like.
The layer number of the target virtual reference surface is determined, the next inner plane is then taken as the new target virtual reference surface, and the pixel replacement operation continues on the new target virtual reference surface until a projection point satisfies the judgment condition or all planes have been traversed. That is, for each target pixel point, the first projection point that satisfies the judgment condition is taken as its rendering pixel point.
If the virtual reference plane is the innermost plane, the projection point corresponding to the innermost plane is directly used as the rendering pixel point corresponding to the target pixel point.
When traversal reaches the innermost plane of the plane order, every plane above it has already failed its judgment condition, so there is no need to check whether the innermost plane satisfies the condition: its projection point is taken directly as the rendering pixel point. Doing so saves the time of evaluating the condition for the innermost plane and improves rendering efficiency. In addition, other data can be used as the pixel data of the rendering pixel point, for example the pixel data of pixel points in the map UV space.
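Putting the pieces together, the per-pixel traversal described above can be sketched as follows, reusing the helper functions project_point_uv, sample_noise and gray_by_layer from the earlier sketches; the layer spacing and data are assumptions rather than the patent's own code:

```python
def choose_rendering_pixel(uv_a, view_angle_deg, layer_distances, noise):
    """Walk the plane order from the model plane inward (layer_distances[0] = 0
    for the model plane) and return the UV of the first projection point whose
    noise gray value exceeds the layer's gray value; the innermost layer's
    projection point is used directly without judgment."""
    n = len(layer_distances)
    for i, dist in enumerate(layer_distances, start=1):
        uv_c = project_point_uv(uv_a, view_angle_deg, dist)       # steps S508-S510
        if i == n:                                                # innermost plane: no judgment needed
            return uv_c
        if sample_noise(noise, uv_c) > gray_by_layer(i, n):       # step S512
            return uv_c
    return uv_a                                                   # only reached if the list is empty

# Toy call (assumed data): 4 layers spaced 0.02 apart in UV units, 2x2 noise map.
noise = [[35, 120], [200, 50]]
print(choose_rendering_pixel((0.1, 0.1), 60.0, [0.0, 0.02, 0.04, 0.06], noise))
```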
And S518, rendering the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
After rendering, the rendered model may be post-processed for data modification, such as: adding anisotropic highlight and edge light to the model, and performing scattering calculation on the model; rendering a model based on results of the scattering calculations. In the mode, anisotropic highlight and edge light are added to the model, scattering calculation is carried out on the model, post-processing is carried out on the model, various hair renderings can be added according to requirements, the model after post-processing can have a better optical effect, and the rendered model has a better visual effect.
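The patent does not spell out these lighting formulas; as one common choice, a Kajiya-Kay-style anisotropic highlight plus a Fresnel-like rim term and a flat scattering boost could look like the sketch below, where every constant and name is an assumption:

```python
import numpy as np

def hair_lighting(base_color, tangent, normal, view_dir, light_dir,
                  spec_power=32.0, rim_power=3.0, scatter=0.15):
    """Illustrative post-shading: anisotropic (tangent-based) highlight,
    view-dependent rim light, and a flat scattering boost (values not clamped)."""
    t = tangent / np.linalg.norm(tangent)
    h = view_dir + light_dir
    h = h / np.linalg.norm(h)
    th = np.dot(t, h)
    spec = (1.0 - th * th) ** (spec_power / 2.0)              # Kajiya-Kay style highlight
    rim = (1.0 - max(np.dot(normal, view_dir), 0.0)) ** rim_power   # edge light
    return base_color * (1.0 + scatter) + spec + rim

col = hair_lighting(np.array([0.35, 0.25, 0.2]),
                    tangent=np.array([1.0, 0.0, 0.0]),
                    normal=np.array([0.0, 0.0, 1.0]),
                    view_dir=np.array([0.0, 0.0, 1.0]),
                    light_dir=np.array([0.0, 0.7, 0.7]))
print(col)
```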
The method provided by the embodiment of the present invention calculates the pixel points on the model plane one by one, referring to a flow diagram of a rendering method shown in fig. 6, as shown in fig. 6: for a current pixel point on a model plane of a model, firstly determining a projection point corresponding to the pixel point, determining whether the projection point meets a preset judgment condition, and if so, performing pixel point rendering confirmation and pretreatment. If not, judging whether the target virtual reference surface corresponding to the projection point is the last layer, and if not, continuing to determine the projection point for the next layer of the target virtual reference surface; if so, directly using the projection point corresponding to the last layer of target virtual reference surface to confirm and preprocess the rendering pixel point. And after the current pixel point completes data replacement and preprocessing, selecting the next pixel point to continue data replacement and preprocessing.
The method provided by the embodiment of the invention is based only on the line-of-sight direction and the surface normal and does not require modifying the model. In the shell-type insert method, the Central Processing Unit (CPU) processes the shell geometry while the Graphics Processing Unit (GPU) performs the shading; handling everything on the GPU alone would require modifying the underlying code of the rendering pipeline. The present method is a pure rendering method processed entirely by the GPU, so the effect can be achieved without modifying the underlying rendering pipeline. The usual shell-type insert uses a fully transparent rendering material, and culling the occluded back of the model consumes extra computing performance; the present method operates on individual pixels and imposes no culling cost for the back of the model. Therefore, in theory, the method provided by the embodiment of the invention saves computation compared with the shell-type insert method.
Secondly, the conventional general insert method and shell-type insert method incur a certain computational cost when performing culling judgments on the patches at the back of the model, whereas the method provided by the embodiment of the invention uses a single conditional judgment, can be handled entirely by the GPU without changing the rendering pipeline, and incurs no such cost.
Finally, because the method provided by this embodiment is based on pixel-level spatial operations, every pixel of the model plane can be replaced, so per-hair processing that conventional general insert and shell-type insert methods cannot achieve becomes possible, and the result is finer than with conventional methods. For example, the shell-type insert can only apply a density falloff from hair tip to base per triangle face, whereas the method provided by this embodiment can apply it per individual hair.
It should be noted that the above method embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
Corresponding to the method embodiment, the embodiment of the invention provides a rendering device, wherein a rendered model comprises a model plane; fig. 7 is a schematic structural diagram of a rendering apparatus, including:
a target pixel point obtaining module 71, configured to obtain a target pixel point on the model plane;
a virtual reference plane setting module 72, configured to set a plurality of virtual reference planes corresponding to the model plane, where the virtual reference planes are set in the model;
a target virtual reference plane selection module 73, configured to select a target virtual reference plane from the virtual reference planes;
an intersection point determining module 74, configured to determine an intersection point between a connection line between a preset observation position and a target pixel point and a target virtual reference plane;
a projection point coordinate determination module 75, configured to determine coordinates of projection points of the intersection points on the model plane;
a rendering pixel point determining module 76, configured to determine a rendering pixel point corresponding to the target pixel point based on a size relationship between a gray value of the corresponding position of the coordinate of the projection point in the preset noise map and a gray value corresponding to the target virtual reference surface;
and a rendering pixel value rendering module 77, configured to render the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
According to the rendering device provided by the embodiment of the invention, a suitable rendering pixel point is found for each target pixel point on the model plane, and the rendering pixel value of that rendering pixel point is used to render the model, thereby realizing the fluff effect. Because the model is rendered with the rendering pixel values of the rendering pixel points, the parallax fur visual effect can be produced without adding extra geometry such as patches or wrapping surfaces, the consumption of computing resources and human resources is effectively reduced, and the underlying code of the rendering pipeline does not need to be modified.
In some embodiments, the rendered model is an arithmetic model.
In some embodiments, the target pixel point obtaining module is configured to obtain the target pixel point on the model surface according to the observation position.
In some embodiments, the projection point coordinate determination module is configured to determine the distance between the target virtual reference surface and the model plane; determine the included angle between the model plane and the line connecting the observation position and the target pixel point; multiply the distance by a trigonometric function of the included angle to determine the offset of the target pixel point; and add the offset to the coordinates of the target pixel point to obtain the coordinates of the projection point of the intersection point on the model plane.
In some embodiments, the rendering pixel point determining module is configured to judge whether the gray value at the position in a preset noise map corresponding to the coordinates of the projection point is greater than the gray value corresponding to the target virtual reference surface; if so, take the projection point as the rendering pixel point corresponding to the target pixel point; if not, replace the target virtual reference plane with the next virtual reference plane in the plane order from the model plane to the innermost virtual reference plane, and continue with the step of determining the intersection of the line connecting the preset observation position and the target pixel point with the target virtual reference plane, until all the virtual reference planes have been traversed.
In some embodiments, the apparatus further includes an innermost plane pixel data replacement module, configured to, if the target virtual reference plane is an innermost plane, use a projection point corresponding to the innermost plane as a rendering pixel point corresponding to the target pixel point.
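The layer traversal and the innermost-plane fallback described in the two preceding paragraphs can be sketched in Python as follows. The layer gray values are assumed to follow the ratio n1/n given just below, the noise lookup and parameter names are hypothetical, and the projection reuses the cotangent assumption from the earlier sketch.

import math

def find_render_pixel(pixel_uv, view_dir_in_plane, included_angle,
                      sample_noise, layer_count, layer_spacing):
    # sample_noise(u, v): gray value of the preset noise map at (u, v), in [0, 1]
    # layer_count: total number of virtual reference planes
    # layer_spacing: distance between adjacent virtual reference planes
    proj_uv = pixel_uv
    for layer in range(1, layer_count + 1):
        distance = layer * layer_spacing             # depth of this virtual reference plane
        threshold = layer / layer_count              # gray value assigned to this plane (assumed n1/n)
        shift = distance / math.tan(included_angle)  # in-plane parallax shift at this depth
        proj_uv = (pixel_uv[0] + shift * view_dir_in_plane[0],
                   pixel_uv[1] + shift * view_dir_in_plane[1])
        if sample_noise(*proj_uv) > threshold:
            return proj_uv                           # noise exceeds this plane's gray value: stop here
    return proj_uv                                   # all planes traversed: fall back to the innermost projection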
In some embodiments, the apparatus further includes a first gray value calculating module, configured to determine a gray value corresponding to the target virtual reference plane according to the following correspondence between the plane and the gray value:
N = n1 / n
wherein N is the gray value corresponding to the target virtual reference surface, n1 indicates that the target virtual reference plane is the n1-th layer in the plane order from the model plane to the innermost virtual reference plane, and n is the total number of layers in that plane order.
In some embodiments, the apparatus further includes a second gray value calculating module, configured to determine a gray value corresponding to the target virtual reference plane according to the following correspondence between the plane and the gray value:
M = m1 / m
wherein M is the gray value corresponding to the target virtual reference surface, m1 is the distance between the target virtual reference plane and the model plane, and m is the distance between the innermost plane and the model plane.
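Assuming the two ratios reconstructed above (N = n1/n and M = m1/m), the correspondence between a virtual reference plane and its gray value can be written as the following small Python helpers; the names are illustrative only.

def gray_by_layer_index(n1, n):
    # Gray value of the n1-th virtual reference plane out of n layers (assumed N = n1 / n).
    return n1 / n

def gray_by_distance(m1, m):
    # Gray value of a plane at distance m1 from the model plane, with the
    # innermost plane at distance m (assumed M = m1 / m).
    return m1 / m

Both mappings place the gray values in [0, 1] and increase from the model plane to the innermost plane, so deeper planes require brighter noise values before a hair is considered to cover them.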
In some embodiments, the apparatus further includes a rendering pixel value rendering module, configured to determine a semi-transparent mask of the target pixel point, a normal value of the target pixel point and a color of the target pixel point based on the pixel data of the rendering pixel point corresponding to the target pixel point; and render the model based on the determined semi-transparent mask, normal value and color of the target pixel point.
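A hedged sketch of this shading step is shown below; the three texture lookups and the lighting function are placeholders for whatever textures and lighting model the engine actually uses, not part of the method itself.

def shade_target_pixel(render_uv, sample_mask, sample_normal, sample_albedo, lighting):
    # render_uv: coordinates of the rendering pixel point found by the layer traversal
    alpha  = sample_mask(*render_uv)    # semi-transparent mask controls hair coverage
    normal = sample_normal(*render_uv)  # normal value used to light the strand
    albedo = sample_albedo(*render_uv)  # color of the strand
    color  = lighting(normal, albedo)   # engine-specific lighting model
    return color, alpha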
In some embodiments, the apparatus further comprises a post-processing module, configured to add anisotropic highlights and rim light to the model, perform a scattering calculation on the model, and render the model based on the result of the scattering calculation.
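The embodiment does not name a particular highlight model; one common choice for anisotropic hair highlights is the Kajiya-Kay specular term, sketched below in Python purely as an example of what such a post-processing step might compute.

import math

def kajiya_kay_specular(tangent, view_dir, light_dir, exponent=64.0):
    # tangent, view_dir, light_dir: unit 3-vectors given as tuples
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # Half vector between the light and view directions.
    h = tuple(l + v for l, v in zip(light_dir, view_dir))
    norm = math.sqrt(dot(h, h)) or 1.0
    h = tuple(x / norm for x in h)
    t_dot_h = dot(tangent, h)
    # The highlight peaks when the half vector is perpendicular to the strand tangent.
    sin_th = math.sqrt(max(0.0, 1.0 - t_dot_h * t_dot_h))
    return sin_th ** exponent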
The rendering device provided by the embodiment of the invention has the same technical characteristics as the rendering method provided by the embodiment of the invention, so that the same technical problems can be solved, and the same technical effects can be achieved.
The embodiment of the invention also provides terminal equipment, which is used for operating the rendering method; referring to fig. 8, a schematic structural diagram of a terminal device includes a memory 100 and a processor 101, where the memory 100 is used to store one or more computer instructions, and the one or more computer instructions are executed by the processor 101 to implement the rendering method.
Further, the terminal device shown in fig. 8 further includes a bus 102 and a communication interface 103, and the processor 101, the communication interface 103, and the memory 100 are connected through the bus 102.
The memory 100 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile memory, such as at least one disk memory. The communication connection between this system network element and at least one other network element is realized through at least one communication interface 103 (wired or wireless), which may use the Internet, a wide area network, a local area network, a metropolitan area network, and the like. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in FIG. 8, but this does not mean that there is only one bus or only one type of bus.
The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 101. The processor 101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be implemented directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
Embodiments of the present invention further provide a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are called and executed by a processor, the computer-executable instructions cause the processor to implement the rendering method, and specific implementation may refer to method embodiments, and is not described herein again.
The rendering method, the rendering apparatus, and the computer program product of the terminal device provided in the embodiments of the present invention include a computer-readable storage medium storing program codes, where instructions included in the program codes may be used to execute the method in the foregoing method embodiments, and specific implementation may refer to the method embodiments, and will not be described herein again.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and/or the terminal device described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and they should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. A rendering method, wherein a rendered model comprises a model plane; the method comprises the following steps:
acquiring a target pixel point on the model plane;
setting a plurality of virtual reference surfaces corresponding to the model plane, wherein the virtual reference surfaces are arranged in the model;
selecting a target virtual reference surface from the virtual reference surfaces;
determining the intersection of the line connecting a preset observation position and the target pixel point with the target virtual reference plane;
determining the coordinates of the projection point of the intersection point on the model plane;
determining a rendering pixel point corresponding to the target pixel point based on the magnitude relationship between the gray value at the position in a preset noise map corresponding to the projection point and the gray value corresponding to the target virtual reference surface;
and rendering the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
2. The method of claim 1, wherein the rendered model is an arithmetic model.
3. The method of claim 1, wherein the step of obtaining a target pixel point on the model plane comprises:
and acquiring a target pixel point on the surface of the model according to the observation position.
4. The method of claim 1, wherein the step of determining coordinates of a projection point of the intersection point on the model plane comprises:
determining the distance between the target virtual reference plane and the model plane;
determining an included angle between a connecting line of the observation position and the target pixel point and the model plane;
performing a trigonometric operation on the included angle and multiplying the result by the distance to determine the offset of the target pixel point;
and adding the coordinates of the target pixel points and the offset to obtain the coordinates of the projection points of the intersection points on the model plane.
5. The method of claim 1, wherein the step of determining the rendering pixel point corresponding to the target pixel point based on a magnitude relationship between a gray value of a corresponding position of the coordinates of the projection point in a preset noise map and a gray value corresponding to the target virtual reference surface comprises:
judging whether the gray value at the position in a preset noise map corresponding to the coordinates of the projection point is greater than the gray value corresponding to the target virtual reference surface;
if so, taking the projection point as a rendering pixel point corresponding to the target pixel point;
if not, replacing the target virtual reference plane with the next virtual reference plane in the plane order from the model plane to the innermost virtual reference plane, and continuing with the step of determining the intersection of the line connecting the preset observation position and the target pixel point with the target virtual reference plane, until all the virtual reference planes are traversed.
6. The method of claim 5, further comprising:
and if the target virtual reference surface is the innermost plane, taking the projection point corresponding to the innermost plane as a rendering pixel point corresponding to the target pixel point.
7. The method of claim 1, further comprising: determining the gray value corresponding to the target virtual reference surface through the corresponding relation between the following planes and the gray value:
N = n1 / n
wherein N is the gray value corresponding to the target virtual reference surface, n1 indicates that the target virtual reference plane is the n1-th layer in the plane order from the model plane to the innermost virtual reference plane, and n is the total number of layers in that plane order.
8. The method of claim 1, further comprising: determining the gray value corresponding to the target virtual reference surface through the corresponding relation between the following planes and the gray value:
M = m1 / m
wherein M is the gray value corresponding to the target virtual reference surface, m1 is the distance between the target virtual reference plane and the model plane, and m is the distance between the innermost plane and the model plane.
9. The method according to claim 1, wherein the step of rendering the model corresponding to the observation location according to the rendering pixel value of the rendering pixel point comprises:
determining a semi-transparent mask of the target pixel point, a normal value of the target pixel point and a color of the target pixel point based on the pixel data of the rendering pixel point corresponding to the target pixel point;
and rendering the model based on the determined semi-transparent mask of the target pixel point, the normal value of the target pixel point and the color of the target pixel point.
10. The method of claim 1, further comprising:
adding anisotropic highlights and rim light to the model, and performing a scattering calculation on the model;
rendering the model based on results of the scattering calculations.
11. A rendering apparatus, wherein a rendered model comprises a model plane; the device comprises:
the target pixel point acquisition module is used for acquiring a target pixel point on the model plane;
the virtual reference surface setting module is used for setting a plurality of virtual reference surfaces corresponding to the model plane, and the virtual reference surfaces are arranged in the model;
the target virtual reference surface selection module is used for selecting a target virtual reference surface from the virtual reference surfaces;
the intersection point determining module is used for determining the intersection of the line connecting a preset observation position and the target pixel point with the target virtual reference plane;
the projection point coordinate determination module is used for determining the coordinates of the projection points of the intersection points on the model plane;
the rendering pixel point determining module is used for determining a rendering pixel point corresponding to the target pixel point based on the magnitude relationship between the gray value at the position in a preset noise map corresponding to the coordinates of the projection point and the gray value corresponding to the target virtual reference surface;
and the rendering pixel value rendering module is used for rendering the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
12. A terminal device comprising a processor and a memory, the memory storing computer-executable instructions executable by the processor, the processor executing the computer-executable instructions to implement the steps of the rendering method of any one of claims 1 to 10.
13. A computer-readable storage medium having stored thereon computer-executable instructions which, when invoked and executed by a processor, cause the processor to carry out the steps of the rendering method of any one of claims 1 to 10.
CN202010137851.6A 2020-03-02 2020-03-02 Rendering method, rendering device and terminal equipment Active CN111369655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010137851.6A CN111369655B (en) 2020-03-02 2020-03-02 Rendering method, rendering device and terminal equipment


Publications (2)

Publication Number Publication Date
CN111369655A true CN111369655A (en) 2020-07-03
CN111369655B CN111369655B (en) 2023-06-30

Family

ID=71206496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010137851.6A Active CN111369655B (en) 2020-03-02 2020-03-02 Rendering method, rendering device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111369655B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101149841A (en) * 2007-07-06 2008-03-26 浙江大学 Tri-dimensional application program convex mirror effect simulation method
US20090102837A1 (en) * 2007-10-22 2009-04-23 Samsung Electronics Co., Ltd. 3d graphic rendering apparatus and method
CN108154548A (en) * 2017-12-06 2018-06-12 北京像素软件科技股份有限公司 Image rendering method and device


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053423A (en) * 2020-09-18 2020-12-08 网易(杭州)网络有限公司 Model rendering method and device, storage medium and computer equipment
CN112053423B (en) * 2020-09-18 2023-08-08 网易(杭州)网络有限公司 Model rendering method and device, storage medium and computer equipment
CN112669429A (en) * 2021-01-07 2021-04-16 稿定(厦门)科技有限公司 Image distortion rendering method and device
CN112755523A (en) * 2021-01-12 2021-05-07 网易(杭州)网络有限公司 Target virtual model construction method and device, electronic equipment and storage medium
CN112755523B (en) * 2021-01-12 2024-03-15 网易(杭州)网络有限公司 Target virtual model construction method and device, electronic equipment and storage medium
CN112884873B (en) * 2021-03-12 2023-05-23 腾讯科技(深圳)有限公司 Method, device, equipment and medium for rendering virtual object in virtual environment
CN112884873A (en) * 2021-03-12 2021-06-01 腾讯科技(深圳)有限公司 Rendering method, device, equipment and medium for virtual object in virtual environment
CN113421313A (en) * 2021-05-14 2021-09-21 北京达佳互联信息技术有限公司 Image construction method and device, electronic equipment and storage medium
CN113421313B (en) * 2021-05-14 2023-07-25 北京达佳互联信息技术有限公司 Image construction method and device, electronic equipment and storage medium
CN113379885A (en) * 2021-06-22 2021-09-10 网易(杭州)网络有限公司 Virtual hair processing method and device, readable storage medium and electronic equipment
CN113379885B (en) * 2021-06-22 2023-08-22 网易(杭州)网络有限公司 Virtual hair processing method and device, readable storage medium and electronic equipment
CN113240692B (en) * 2021-06-30 2024-01-02 北京市商汤科技开发有限公司 Image processing method, device, equipment and storage medium
CN113240692A (en) * 2021-06-30 2021-08-10 北京市商汤科技开发有限公司 Image processing method, device, equipment and storage medium
CN113888398B (en) * 2021-10-21 2022-06-07 北京百度网讯科技有限公司 Hair rendering method and device and electronic equipment
CN113888398A (en) * 2021-10-21 2022-01-04 北京百度网讯科技有限公司 Hair rendering method and device and electronic equipment
CN115345862B (en) * 2022-08-23 2023-03-10 成都智元汇信息技术股份有限公司 Method and device for simulating X-ray machine scanning imaging based on column data and display
CN115345862A (en) * 2022-08-23 2022-11-15 成都智元汇信息技术股份有限公司 Method and device for simulating X-ray machine scanning imaging based on column data and display
CN115797496A (en) * 2022-10-27 2023-03-14 深圳市欧冶半导体有限公司 Dotted line drawing method and related device
CN115797496B (en) * 2022-10-27 2023-05-05 深圳市欧冶半导体有限公司 Dotted line drawing method and related device
CN116883567A (en) * 2023-07-07 2023-10-13 上海散爆信息技术有限公司 Fluff rendering method and device

Also Published As

Publication number Publication date
CN111369655B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN111369655B (en) Rendering method, rendering device and terminal equipment
CN113838176B (en) Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN109325990B (en) Image processing method, image processing apparatus, and storage medium
US6791540B1 (en) Image processing apparatus
CN108230435B (en) Graphics processing using cube map textures
CN112316420A (en) Model rendering method, device, equipment and storage medium
US8294713B1 (en) Method and apparatus for illuminating objects in 3-D computer graphics
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
US20230230311A1 (en) Rendering Method and Apparatus, and Device
CN111583381B (en) Game resource map rendering method and device and electronic equipment
CN109979013B (en) Three-dimensional face mapping method and terminal equipment
CN113648655B (en) Virtual model rendering method and device, storage medium and electronic equipment
RU2680355C1 (en) Method and system of removing invisible surfaces of a three-dimensional scene
US20180211434A1 (en) Stereo rendering
GB2406252A (en) Generation of texture maps for use in 3D computer graphics
CN111382618B (en) Illumination detection method, device, equipment and storage medium for face image
US9019268B1 (en) Modification of a three-dimensional (3D) object data model based on a comparison of images and statistical information
CN111583398B (en) Image display method, device, electronic equipment and computer readable storage medium
CN117333637B (en) Modeling and rendering method, device and equipment for three-dimensional scene
WO2024037116A9 (en) Three-dimensional model rendering method and apparatus, electronic device and storage medium
KR20210123243A (en) Method for processing 3-d data
CN114119848B (en) Model rendering method and device, computer equipment and storage medium
JP2023527438A (en) Geometry Recognition Augmented Reality Effect Using Real-time Depth Map
WO2018140223A1 (en) Stereo rendering
WO2024148898A1 (en) Image denoising method and apparatus, and computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant