CN111369655B - Rendering method, rendering device and terminal equipment - Google Patents

Rendering method, rendering device and terminal equipment

Info

Publication number: CN111369655B
Authority: CN (China)
Prior art keywords: model, plane, virtual reference, rendering, pixel point
Legal status: Active
Application number: CN202010137851.6A
Other languages: Chinese (zh)
Other versions: CN111369655A
Inventor: 郑文劲
Current Assignee: Netease Hangzhou Network Co Ltd
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010137851.6A
Publication of CN111369655A
Application granted
Publication of CN111369655B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G06T 15/005 — General purpose rendering architectures
    • G06T 15/04 — Texture mapping
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 — Controlling the output signals based on the game progress
    • A63F 13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/60 — Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 2300/00 — Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 — Methods for processing data by generating or executing the game program
    • A63F 2300/66 — Methods for processing data by generating or executing the game program for rendering three dimensional images

Abstract

The invention provides a rendering method, a rendering device and a terminal device. The method comprises the following steps: obtaining a target pixel point on a model plane; setting a plurality of virtual reference planes corresponding to the model plane; selecting a target virtual reference plane from the virtual reference planes; determining the intersection point of the target virtual reference plane with the line connecting a preset observation position and the target pixel point; determining the coordinates of the projection point of the intersection point on the model plane; determining the rendering pixel point corresponding to the target pixel point based on the magnitude relation between the gray value at the corresponding position of the projection point's coordinates in a preset noise map and the gray value corresponding to the target virtual reference plane; and rendering the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point. In this method, a suitable rendering pixel point is found for each target pixel point on the model plane, and the model is rendered with the rendering pixel values of those rendering pixel points to produce a fluff effect; the consumption of computing resources and human resources can be effectively reduced.

Description

Rendering method, rendering device and terminal equipment
Technical Field
The present invention relates to the technical field of computer image processing and computer graphics, and in particular, to a rendering method, apparatus and terminal device.
Background
Fluff textures are very common in real life, from luxurious fur to lovable plush dolls to endearing small animals. The soft, warm feeling and the sense of luxury that fluff conveys have always been popular, so many designers like to add fluff elements to their works. In film and game works, fluff is a texture that appears on screen very frequently, for example animal movie characters with various fluff textures, fluffy cats in game scenes, and fluff-textured apparel in mobile games. To simulate a convincing fluff effect, the three-dimensional appearance of the fluff must be reproduced on the surface of the model. For film and television works that are rendered offline, the creator can model the fur directly and pursue the best sense of realism without regard to local cost. Real-time rendered 3D (Three Dimensional) games, however, must balance effect and performance, so some other means of achieving the fluff effect is required.
In the related art, the three-dimensional appearance of fluff is mainly produced, under a limited performance budget, by the card-insertion (fin) method and the shell method. The fin method represents several hairs as one patch oriented along the growth direction of the hair; to simulate a dense fur surface, the model must be designed in advance and a huge number of patches created on its surface, so the production process consumes enormous human resources, the result is difficult to modify once finished, and the runtime cost is also large. The shell method adds wrapping surfaces outward following the shape of the model and uses a rendering algorithm to display color only where hairs are located. The shell method simulates the three-dimensional fluff effect by stacking high-density patches and can effectively reduce performance consumption at runtime, but the model still has to be designed in advance, which consumes extra human resources; sometimes the model is generated procedurally, which consumes additional programming resources to support later production, and for some commercial games the game engine even requires modifying the underlying code of the rendering pipeline to implement shells.
Disclosure of Invention
In view of the above, the present invention aims to provide a rendering method, a rendering device and a terminal device, so as to reduce the consumption of computing resources and human resources.
In a first aspect, an embodiment of the present invention provides a rendering method, where the rendered model includes a model plane; the method comprises the following steps: obtaining a target pixel point on the model plane; setting a plurality of virtual reference planes corresponding to the model plane, the virtual reference planes being arranged inside the model; selecting a target virtual reference plane from the virtual reference planes; determining the intersection point of the target virtual reference plane with the line connecting a preset observation position and the target pixel point; determining the coordinates of the projection point of the intersection point on the model plane; determining the rendering pixel point corresponding to the target pixel point based on the magnitude relation between the gray value at the corresponding position of the projection point's coordinates in a preset noise map and the gray value corresponding to the target virtual reference plane; and rendering the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
In a preferred embodiment of the present invention, the rendered model is an arithmetic model.
In a preferred embodiment of the present invention, the step of obtaining the target pixel point on the model plane includes: and acquiring target pixel points on the surface of the model according to the observation positions.
In a preferred embodiment of the present invention, the step of determining the coordinates of the projection point of the intersection point on the model plane includes: determining the distance between the target virtual reference plane and the model plane; determining the included angle between the model plane and the line connecting the observation position and the target pixel point; determining the offset of the target pixel point by a trigonometric operation on the distance and the included angle; and adding the offset to the coordinates of the target pixel point to obtain the coordinates of the projection point of the intersection point on the model plane.
In a preferred embodiment of the present invention, the step of determining the rendering pixel point corresponding to the target pixel point based on the magnitude relation between the gray value at the corresponding position of the projection point's coordinates in the preset noise map and the gray value corresponding to the target virtual reference plane includes: judging whether the gray value at the corresponding position of the projection point's coordinates in the preset noise map is greater than the gray value corresponding to the target virtual reference plane; if so, taking the projection point as the rendering pixel point corresponding to the target pixel point; if not, replacing the target virtual reference plane with the next virtual reference plane in the plane order from the model plane to the innermost virtual reference plane, and continuing to perform the step of determining the intersection point of the target virtual reference plane with the line connecting the preset observation position and the target pixel point, until all virtual reference planes have been traversed.
In a preferred embodiment of the present invention, the method further includes: and if the target virtual reference plane is the innermost plane, taking the projection point corresponding to the innermost plane as the rendering pixel point corresponding to the target pixel point.
In a preferred embodiment of the present invention, the method further includes: determining the gray value corresponding to the target virtual reference plane through the following correspondence between planes and gray values:

N = 255 × (1 − n1/n)

wherein N is the gray value corresponding to the target virtual reference plane; n1 indicates that the target virtual reference plane is located at the n1-th layer in the plane order from the model plane to the innermost virtual reference plane; and n is the total number of layers in the plane order.
In a preferred embodiment of the present invention, the method further includes: determining the gray value corresponding to the target virtual reference plane through the following correspondence between planes and gray values:

M = 255 × (1 − m1/m)

wherein M is the gray value corresponding to the target virtual reference plane; m1 is the distance between the target virtual reference plane and the model plane; and m is the distance between the innermost plane and the model plane.
In a preferred embodiment of the present invention, the step of rendering the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point includes: determining a semi-transparent mask of the target pixel point, a normal value of the target pixel point and a color of the target pixel point based on the pixel data of the rendered target pixel point; and rendering the model based on the determined semi-transparent mask of the target pixel point, the normal value of the target pixel point and the color of the target pixel point.
In a preferred embodiment of the present invention, the method further includes: adding anisotropic highlights and edge light to the model and performing a scattering calculation on the model; and rendering the model based on the results of the scattering calculation.
In a second aspect, an embodiment of the present invention further provides a rendering apparatus, where the rendered model includes a model plane; the apparatus comprises: a target pixel point acquisition module, configured to obtain a target pixel point on the model plane; a virtual reference plane setting module, configured to set a plurality of virtual reference planes corresponding to the model plane, the virtual reference planes being arranged inside the model; a target virtual reference plane selection module, configured to select a target virtual reference plane from the virtual reference planes; an intersection point determining module, configured to determine the intersection point of the target virtual reference plane with the line connecting a preset observation position and the target pixel point; a projection point coordinate determining module, configured to determine the coordinates of the projection point of the intersection point on the model plane; a rendering pixel point determining module, configured to determine the rendering pixel point corresponding to the target pixel point based on the magnitude relation between the gray value at the corresponding position of the projection point's coordinates in a preset noise map and the gray value corresponding to the target virtual reference plane; and a rendering pixel value rendering module, configured to render the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
In a third aspect, an embodiment of the present invention further provides a terminal device, including a processor and a memory, where the memory stores computer executable instructions executable by the processor, and the processor executes the computer executable instructions to implement the steps of the above-mentioned rendering method.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the steps of the rendering method described above.
The embodiment of the invention has the following beneficial effects:
according to the rendering method, the rendering device and the terminal equipment provided by the embodiment of the invention, each target pixel point on the model plane is searched for a proper rendering pixel point corresponding to the target pixel point, and a rendering pixel value rendering model of the rendering pixel point is used to realize a fluff effect; by rendering the pixel value rendering model of the pixel point, additional models such as a patch or a cladding surface are not required to be added, parallax hair visual effect can be generated, the consumption of operation resources and manpower resources is effectively reduced, and the bottom layer code of the rendering pipeline is not required to be modified.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part will be obvious from the description, or may be learned by practice of the techniques of the disclosure.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic structural diagram of a model according to an embodiment of the present invention;
FIG. 2 is a flowchart of a rendering method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a rendering method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a stereoscopic effect according to an embodiment of the present invention;
FIG. 5 is a flowchart of another rendering method according to an embodiment of the present invention;
FIG. 6 is a flow block diagram of a rendering method according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a rendering apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, the three-dimensional appearance of fluff is mainly produced, under a limited performance budget, by the card-insertion (fin) method and the shell method; however, both methods require adding patches or wrapping surfaces to the model, and the shell method may even require modifying the underlying code of the rendering pipeline, which increases the consumption of computing resources and human resources. On this basis, embodiments of the present invention provide a rendering method, a rendering device and a terminal device. The technique can be applied to devices capable of human-computer interaction, such as computers, mobile phones and tablet computers, and is particularly suitable for game scenes, such as music games, card games and competitive games.
To facilitate understanding of this embodiment, the rendering method disclosed in the embodiment of the present invention is first described in detail. The model to be rendered in the method includes a model plane, and a plurality of virtual reference planes corresponding to the model plane are set with the model plane as a reference. Referring to the schematic structural diagram of a model shown in FIG. 1, the solid line in FIG. 1 represents the model plane and the dashed lines represent the virtual reference planes; counted from the outside in, the first layer is the model plane and all subsequent layers are virtual reference planes. The model plane is the actual surface of the model, while a virtual reference plane is the plane on which a given pixel appears to lie to the human eye; the virtual reference planes do not exist in actual space, whereas the model plane does. In other words, for a point on the model plane, the human eye perceives that point as lying on a virtual reference plane rather than on the model plane. The method provided by this embodiment relies on the movement of the virtual camera: when the virtual camera moves, a visual change in spatial depth is produced, so the image acquires a stereoscopic appearance. In the actual model rendering process, rendering is performed frame by frame, and within each frame the virtual camera is fixed, i.e. the observation position does not change.
The distances between adjacent planes in the model may be equal or unequal and are not limited here; the number of virtual reference planes is likewise not limited. For one model, an observation position is fixed. The observation position represents the position of the virtual camera, which is equivalent to the human eye: the image seen by the human eye is equivalent to the virtual camera observing the model from the observation position, and a ray emitted from the observation position is the shooting direction of the virtual camera.
Based on the above description, referring to a flowchart of a rendering method shown in fig. 2, the rendering method includes the steps of:
step S202, obtaining a target pixel point on a model plane.
The model plane is represented on the screen as a number of pixel points, each of which stores pixel data including the color, gray value and so on of that pixel. Some target pixel points are obtained from the model; the target pixel points are the pixel points that need to be rendered.
In step S204, a plurality of virtual reference planes corresponding to the model plane are set, and the virtual reference planes are set in the model.
As shown in FIG. 1, the virtual reference planes are arranged inside the model. The spacing and positions of the virtual reference planes are not limited. It should be noted that the virtual reference planes may also include the model plane itself.
Step S206, selecting a target virtual reference plane from the virtual reference planes.
The target virtual reference plane is the plane on which the target pixel point visually appears to lie when observed from the observation position during the pixel replacement operation. That is, although the data of the target pixel point should be the data of a certain position on the model plane, when the target pixel point is observed from the observation position it uses the data of another position, and by making this data change with the movement of the observation point a visual stereoscopic effect is achieved. If the model plane itself is taken as the target virtual reference plane, the target pixel point of the current pixel replacement operation appears to lie on the model plane, i.e. its visual position and actual position coincide.
Step S208, determining an intersection point of a connecting line of the preset observation position and the target pixel point and the target virtual reference surface.
Referring to the schematic diagram of a rendering method shown in FIG. 3, the solid-line plane in FIG. 3 represents the model plane and the dashed-line plane represents the target virtual reference plane. After the target pixel point on the model plane (point A in FIG. 3) is determined, the observation position is connected with the target pixel point A, and the resulting line is the line-of-sight direction. Extending the line-of-sight direction gives its intersection point B with the target virtual reference plane. The meaning of this is that point A, which actually lies on the model plane, appears, when viewed from the observation position, to be at the position of point B on the target virtual reference plane; this may be described as "seeing point B through point A".
In step S210, coordinates of the projection point of the intersection point on the model plane are determined.
As shown in FIG. 3, the projection point of the intersection point B on the model plane is point C; that is, a line BC perpendicular to the model plane is drawn from point B, and BC intersects the model plane at point C, which is the projection point. The meaning of point C is that, to achieve the effect of seeing point B through point A, point C must be used as the rendering pixel point of point A: point A then appears transparent, and the position of point B is seen through it.
Note that it is not necessary to explicitly determine the coordinates of the intersection point of the target virtual reference plane with the line connecting the preset observation position and the target pixel point; only the coordinates of the projection point of that intersection point on the model plane need to be determined.
Step S212, determining a rendering pixel point corresponding to the target pixel point based on the relation between the gray value of the corresponding position of the coordinates of the projection point in the preset noise map and the gray value corresponding to the target virtual reference plane.
To achieve the effect of "seeing point B through point A", the gray value of point C and the gray value corresponding to the target virtual reference plane must satisfy a preset magnitude relation. Checking whether the gray values satisfy this relation is how the method determines how many layers of planes must be seen through at point A, and it is essential to forming the stereoscopic effect.
A noise map is preset, and the noise map is used to provide the gray value corresponding to the projection point (point C). The gray value corresponding to a plane may be determined according to the position of that plane.
For example, assume the preset condition is that the gray value of point C must be greater than the gray value corresponding to the target virtual reference plane, and that the gray value at the position of point C's coordinates in the preset noise map is 50. Assume further that the preset correspondence between planes and gray values gives the first-layer plane a gray value of 255 and the second-layer plane a gray value of 200, with the values decreasing layer by layer; if the target virtual reference plane is the second-layer plane, its gray value is 200. Since 200 is greater than 50, the gray value of the projection point C in this example is not greater than the gray value corresponding to the target virtual reference plane, so point C is not the rendering pixel point corresponding to point A, and the projection point on another target virtual reference plane must be selected as the rendering pixel point.
For another example, if the gray value of point C is greater than the gray value corresponding to the target virtual reference plane, the rendering operation is performed on the pixel data of target pixel point A, that is, point C is taken as the rendering pixel point corresponding to point A.
Step S214, a model corresponding to the observation position is rendered according to the rendering pixel value of the rendering pixel point.
After the corresponding rendering pixel points have been determined for all target pixel points on the model plane, the model corresponding to the observation position is rendered using the rendering pixel values of those rendering pixel points. Some points of the resulting model that lie on the model plane may thus take their values from points on a more internal virtual reference plane, so the model exhibits a fluff effect. The model may be an arithmetic model.
General model rendering includes the following three steps:
1. Geometric processing: the geometric data of the 3D model (including the vertex data, which define the geometric shape of the object) is taken as input and, together with the transformation data of the model, is used to transform the 3D model from the model's local space into screen space.
2. Rasterization: each triangle of the model that has been transformed into screen space is rasterized to obtain the pixels covered by each triangle, and the attributes of the triangle's vertices, such as the normal and the UV mapping (U and V are the texture mapping coordinates, analogous to the X, Y and Z axes of the spatial model), are interpolated. The arithmetic model in this embodiment is in fact the data pointed to by the UV coordinates, and this data is stored on a map.
3. Pixel processing: the final rendering result of each pixel is computed from the data interpolated in the preceding rasterization stage together with the illumination parameters and the map data.
The stereoscopic effect generally arises from the visual difference produced by depth relative to the lens: the position seen through point A changes as the viewpoint moves, which produces a stereoscopic effect. Referring to the schematic diagram of a stereoscopic effect shown in FIG. 4, where the solid-line plane is the model plane and the dashed-line plane is a virtual reference plane, as the lens (observation position) moves, the position seen through point A changes, i.e. the virtual reference planes before and after the lens moves are different.
According to the rendering method provided by the embodiment of the present invention, a suitable rendering pixel point is found for each target pixel point on the model plane, and the model is rendered with the rendering pixel values of those rendering pixel points to produce a fluff effect. Because the model is rendered from the rendering pixel values of the rendering pixel points, no additional geometry such as patches or wrapping surfaces needs to be added, a parallax fur visual effect can be produced, the consumption of computing resources and human resources is effectively reduced, and the underlying code of the rendering pipeline does not need to be modified.
The embodiment of the present invention also provides another rendering method, which is implemented on the basis of the method of the above embodiment; it mainly describes a specific way of performing the replacement operation on the pixel data of the target pixel point based on the magnitude relation between the gray value at the corresponding position of the projection point's coordinates in the preset noise map and the gray value corresponding to the target virtual reference plane. Referring to the flowchart of another rendering method shown in FIG. 5, the rendering method includes the following steps:
step S502, obtaining a target pixel point on a model plane.
The target pixel points are not necessarily all the pixel points of the model plane; the target pixel points on the model surface may be obtained according to the observation position, for example by selecting as target pixel points the pixel points of the model plane that are visible from the observation position. For instance, if the model plane is a sphere and the observation position is outside the sphere, all pixel points on the outside of the model plane may be selected as target pixel points.
If the target pixel points are all the pixel points of the model plane, traversing can be performed according to a certain sequence, and all the pixel points on the model plane are ensured to be traversed. For example: the pixels of the first row on the model plane may be traversed in a left-to-right order, followed by the pixels of the second row on the model plane in a left-to-right order, followed by a row-to-row traversal until the right-most pixel of the last row on the model plane is traversed.
In step S504, a plurality of virtual reference planes corresponding to the model plane are set, and the virtual reference planes are set in the model.
Step S506, selecting a target virtual reference plane from the virtual reference planes.
The rendering operation of this embodiment applies the concept of parallax. Parallax is the difference in apparent direction that arises when the same object is observed from two points separated by a certain distance. As shown in FIG. 3, after the user's line of sight reaches the model plane, the color corresponding to A is not displayed on the screen directly; instead, the line of sight continues backwards for a certain distance to B, and the pixel of the projection point C on the model surface corresponding to B is then taken. This makes the user feel that, looking at the model surface, they are seeing the position of the target virtual reference plane, even though the target virtual reference plane does not actually exist in the game space. In other words, the color information that the user sees on the surface of an object through the screen is not the information of that point itself but the color information of a nearby point on the object's surface.
Step S508, determining an intersection point of a connecting line of the preset observation position and the target pixel point and the target virtual reference plane.
Referring to FIG. 3, the coordinates of the model plane, the coordinates of the target virtual reference plane, the position of the target pixel point A and the coordinates of the observation position are known. First, the function of the line connecting the observation position and A, i.e. the line-of-sight direction, is calculated; then the coordinates of the intersection point B of the line-of-sight direction with the target virtual reference plane can be calculated. Here the target virtual reference plane and the model plane are generally calculated in stereo (3D) space, and the calculated data is then converted into the UV coordinate system (the texture mapping coordinate system). UV coordinates define the position of each point on the picture, and the UV mapping associates each point on the image precisely with the surface of the model object.
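The patent does not spell out how the stereo-space offset is mapped back into UV; the following minimal Python sketch shows one common way to do it (an assumption here, borrowed from standard parallax-mapping practice: project the world-space offset onto the surface tangent frame and scale it by an assumed texture-dependent factor; all names are illustrative, not the patent's).

```python
import numpy as np

def world_offset_to_uv_offset(offset_ws, tangent_ws, bitangent_ws, uv_per_unit=1.0):
    """Project a world-space offset along the surface onto the tangent frame
    and convert it into a UV-space offset (illustrative helper, not from the patent).

    offset_ws, tangent_ws, bitangent_ws: 3D vectors (unit tangent/bitangent assumed).
    uv_per_unit: assumed scale factor, how many UV units one world unit covers.
    """
    du = float(np.dot(offset_ws, tangent_ws)) * uv_per_unit
    dv = float(np.dot(offset_ws, bitangent_ws)) * uv_per_unit
    return np.array([du, dv])

# Example: an offset of 0.5 world units purely along the tangent direction
uv_shift = world_offset_to_uv_offset(
    np.array([0.5, 0.0, 0.0]),   # offset along the surface
    np.array([1.0, 0.0, 0.0]),   # tangent
    np.array([0.0, 1.0, 0.0]),   # bitangent
)
print(uv_shift)  # -> [0.5 0. ]
```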
In step S510, coordinates of the projection point of the intersection point on the model plane are determined.
As shown in FIG. 3, after the coordinates of B are determined, the function of the line BC passing through B and perpendicular to the model plane is determined, and the coordinates of the intersection point C of this line with the model plane are calculated; C is the projection point. In this approach, the coordinates of B are calculated first, and the coordinates of C are then calculated from them. In practice, the calculation of the coordinates of B can be omitted and the coordinates of C calculated directly in another way, namely through steps A1 to A4 below:
Step A1: determine the distance between the target virtual reference plane and the model plane.
Since the coordinates of the target virtual reference plane and of the model plane are known, the distance between them, which is the length of BC shown in FIG. 3, can be calculated.
Step A2: determine the included angle between the model plane and the line connecting the observation position and the target pixel point.
Since the coordinates of A and of the observation position are known, the function of the line connecting A and the observation position, i.e. the line-of-sight direction, can be calculated; from this function and the function of the model plane, the angle between the line-of-sight direction and the model plane, which can be understood as the angle between AB and the model plane, can be determined.
Step A3: determine the offset of the target pixel point from the distance and the included angle by a trigonometric operation.
The offset of the target pixel point is obtained by dividing the distance between the target virtual reference plane and the model plane by the tangent of the included angle. For example, on the basis that only the U-axis coordinate of C differs from that of A, if the included angle is 60 degrees and the distance between the target virtual reference plane and the model plane (i.e. the length of BC) is 3, the offset of the target pixel point (i.e. the length of AC) is AC = BC / tan(60°) = 3 / √3 ≈ 1.73.
Step A4: add the offset to the coordinates of the target pixel point to obtain the coordinates of the projection point of the intersection point on the model plane.
If A is given in a spatial coordinate system and only the X-axis coordinate of C differs from that of A, the coordinate of C, i.e. the coordinate of the projection point of the intersection point on the model plane, is obtained by increasing the X-axis value of A's coordinate by the offset of approximately 1.73.
In this way, based on the above steps, the calculation of the coordinates of the intersection point of the line of sight with the target virtual reference plane can be omitted, and the coordinates of the projection point of the intersection point on the model plane can be determined directly.
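As a concrete illustration of steps A1 to A4, the following sketch (variable names are ours, not the patent's) computes the projection-point coordinate from the plane distance and the viewing angle using the relation AC = BC / tan(angle) stated above; it assumes the offset acts along a single coordinate axis, as in the example.

```python
import math

def projection_point_u(target_u, plane_distance, view_angle_deg):
    """Offset the target pixel's coordinate along the viewing direction.

    target_u:        coordinate of target pixel A along the offset axis
    plane_distance:  distance BC between the target virtual reference plane
                     and the model plane (step A1)
    view_angle_deg:  included angle between the line of sight and the model
                     plane, in degrees (step A2)
    """
    # Step A3: offset AC = BC / tan(angle)
    offset = plane_distance / math.tan(math.radians(view_angle_deg))
    # Step A4: add the offset to the coordinate of the target pixel point
    return target_u + offset

# With BC = 3 and a 60-degree included angle, the offset is 3 / tan(60°) ≈ 1.73
print(projection_point_u(0.0, 3.0, 60.0))  # ≈ 1.732
```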
The above steps only consider the case of a straight-line offset, i.e. AC is a straight line. In practical applications, offset compensation and offset accumulation operations based on the angles of the plane and the camera are added, and more realistic projection points can be obtained through these operations. For the case where the model plane is a curved surface and the projection point lies at the edge of the curved surface, certain critical-case handling may be performed, such as critical space correction and critical clipping.
Step S512, determining whether the gray value of the coordinate of the projection point at the corresponding position in the preset noise map is greater than the gray value corresponding to the virtual reference plane of the target. If yes, go to step S514; if not, step S516 is performed.
The gray value at the corresponding position of the projection point's coordinates in the preset noise map (called the first gray value) and the gray value corresponding to the target virtual reference plane (called the second gray value) are determined, and it is judged whether the first gray value is greater than the second gray value.
Each coordinate in the noise map corresponds to a gray value; the gray value at the position of the projection point in the noise map is the first gray value. In this embodiment the noise map and the model plane use the same coordinate system, and the coordinates of the projection point lie in the model plane, so the first gray value is simply the gray value corresponding to the projection point's coordinates in the noise map, read via the UV coordinates. For example, if the coordinates of the projection point in the model plane are (11, 12) and the gray value corresponding to (11, 12) in the noise map is 35, the first gray value is 35.
The second gray value may be determined based on a correspondence of the plane to the gray value. The correspondence between the plane and the gray value is preset, so that the second gray value corresponding to the target virtual reference plane can be determined by substituting parameters (such as the number of layers, the distance between the model plane and the plane, or the number) of the target virtual reference plane into the correspondence.
If the second gray value is determined by index, all planes in the plane order can be numbered and a second gray value assigned to each index. For example, assuming the plane order contains 3 planes in total and the second gray value corresponding to the 2nd plane is 25, then if the index of the target virtual reference plane is 2, its second gray value is 25.
If the second gray value is determined by the number of layers, the gray value corresponding to the target virtual reference plane can be determined through the following correspondence between planes and gray values:

N = 255 × (1 − n1/n)

wherein N is the gray value corresponding to the target virtual reference plane; n1 indicates that the target virtual reference plane is located at the n1-th layer in the plane order from the model plane to the innermost virtual reference plane; and n is the total number of layers in the plane order.

That is, the planes are first ordered from outside to inside, from the model plane to the innermost virtual reference plane, to obtain the plane order, and the second gray value is evenly distributed over the layers. For example, if the total number of layers in the plane order is 20 and the target virtual reference plane is located at the 10th layer, the second gray value corresponding to the target virtual reference plane is 255 × (1 − 10/20) = 127.5.
If the second gray value is determined by the distance from the model plane, the gray value corresponding to the target virtual reference plane can be determined through the following correspondence between planes and gray values:

M = 255 × (1 − m1/m)

wherein M is the gray value corresponding to the target virtual reference plane; m1 is the distance between the target virtual reference plane and the model plane; and m is the distance between the innermost plane and the model plane.

That is, the second gray value is assigned according to the distance from the model plane: the farther a plane is from the model plane, the smaller its second gray value. For example, if the distance between the innermost plane and the model plane is 2000 and the distance between the target virtual reference plane and the model plane is 500, the second gray value corresponding to the target virtual reference plane is 255 × (1 − 500/2000) = 191.25.
In this way, the correspondence between planes and gray values may be set according to the layer number of the target virtual reference plane, its distance from the model plane, its index, or the like; it is only necessary to ensure that the deeper the target virtual reference plane lies, the smaller its second gray value.
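The following small sketch shows both assignment schemes; the linear forms follow the formulas given above (our reconstruction of the drawings-only expressions, assuming an 8-bit gray range distributed over the layers), and the two print lines reproduce the numerical examples.

```python
def gray_by_layer(layer_index, total_layers):
    """Second gray value assigned by layer number.

    layer_index:  n1, the layer of the target virtual reference plane in the
                  plane order from the model plane to the innermost plane.
    total_layers: n, the total number of layers in the plane order.
    """
    return 255.0 * (1.0 - layer_index / total_layers)

def gray_by_distance(distance, max_distance):
    """Second gray value assigned by distance from the model plane.

    distance:     m1, distance between the target virtual reference plane
                  and the model plane.
    max_distance: m, distance between the innermost plane and the model plane.
    """
    return 255.0 * (1.0 - distance / max_distance)

print(gray_by_layer(10, 20))        # 127.5  (10th layer out of 20)
print(gray_by_distance(500, 2000))  # 191.25 (500 out of 2000)
```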
The judgment condition is that the first gray value is greater than the second gray value, i.e. that the first gray value corresponding to the projection point is greater than the second gray value corresponding to the target virtual reference plane. When the second gray value corresponding to the target virtual reference plane is closer to black than the first gray value corresponding to the projection point, point B can be seen through point A, i.e. the position that is seen lies further inside than point A itself; this produces the three-dimensional effect and thus the fluff effect. For example, if the first gray value is 120 and the second gray value is 100, the first gray value is greater than the second gray value and the judgment condition is considered satisfied.
It should be noted that, besides requiring the first gray value to be greater than the second gray value, the judgment condition may instead be that the difference between the first gray value and the second gray value is smaller than a preset threshold, i.e. the first gray value is relatively close to the second gray value; or that the first gray value is greater than the second gray value and the difference between them is smaller than a preset threshold.
In step S514, the projection point is taken as the rendering pixel point corresponding to the target pixel point.
If the gray value at the corresponding position of the projection point's coordinates in the preset noise map is greater than the gray value corresponding to the target virtual reference plane, the projection point corresponding to the target virtual reference plane satisfies the judgment condition, and the projection point is taken as the rendering pixel point corresponding to the target pixel point. After rendering, point B can be seen through point A, so point A appears transparent and point B is seen through it.
During rendering, preprocessing may be performed based on the replaced pixel data: determining a semi-transparent mask of the target pixel point, a normal value of the target pixel point and a color of the target pixel point based on the pixel data of the rendered target pixel point; and rendering the model based on the determined semi-transparent mask of the target pixel point, the normal value of the target pixel point and the color of the target pixel point.
The semi-transparent mask changes gradually; it is generated based on the pixel data of the replaced target pixel points and is used to produce a ragged-tip effect. The normal value of the target pixel point and the color of the target pixel point may be obtained directly from the pixel data of the rendering pixel point. The model may then be rendered based on the preprocessed data.
In this manner, the semi-transparent mask of the target pixel may be determined based on the pixel data of the rendered pixel to generate the tip roughness effect, and the normal value and the color of the target pixel may be acquired.
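A hedged sketch of this preprocessing step is given below; the patent only states that the semi-transparent mask is gradual and derived from the replaced pixel data, so the specific falloff used here (opacity growing with layer depth) and the data layout are assumptions for illustration.

```python
def preprocess_pixel(rendering_pixel, layer_index, total_layers):
    """Derive the data used for shading from a rendering pixel point's data.

    rendering_pixel: dict with at least 'color' and 'normal' taken from the
                     rendering pixel point found for this target pixel.
    layer_index / total_layers: how deep the chosen layer lies; deeper layers
                     are made more opaque here so the fibre tips fade out
                     gradually (assumed falloff, not specified by the patent).
    """
    alpha = layer_index / total_layers          # gradual semi-transparent mask
    return {
        "color": rendering_pixel["color"],       # color of the rendering pixel point
        "normal": rendering_pixel["normal"],     # normal value of the rendering pixel point
        "alpha": alpha,
    }

print(preprocess_pixel({"color": (0.6, 0.5, 0.4), "normal": (0.0, 0.0, 1.0)}, 3, 10))
```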
Step S516, in the plane order from the model plane to the innermost virtual reference plane, replacing the target virtual reference plane with the next virtual reference plane after it.
If the gray value at the corresponding position of the projection point's coordinates in the preset noise map is not greater than the gray value corresponding to the target virtual reference plane, the projection point corresponding to the target virtual reference plane does not satisfy the judgment condition; the target virtual reference plane therefore needs to be replaced, and step S508 is executed again. The planes are first arranged in the plane order from the model plane to the innermost virtual reference plane, i.e. from outside to inside: starting from the model plane, followed by the virtual reference plane immediately inside it, and so on.
The layer number of the current target virtual reference plane is determined, the plane one layer further in is taken as the new target virtual reference plane, and the pixel replacement operation continues with the new target virtual reference plane until a projection point satisfies the judgment condition or all planes have been traversed. That is, for each target pixel point, the first projection point that satisfies the judgment condition is taken as its rendering pixel point.
If the plane sequence is traversed to the innermost plane, the projection point corresponding to the innermost plane can be directly used as the rendering pixel point without judgment, namely, if the target virtual reference plane is the innermost plane, the projection point corresponding to the innermost plane is used as the rendering pixel point corresponding to the target pixel point.
Reaching the innermost plane of the plane order means that none of the planes other than the innermost one satisfied its judgment condition; the rendering pixel point can then be determined directly, without checking whether the innermost plane satisfies the corresponding condition. Doing so saves the time of evaluating the condition for the innermost plane and improves the efficiency of generating the model. In addition, other data may be used as the pixel data of the rendering pixel point, for example the pixel data of pixel points in the map's UV space.
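Putting the pieces together, the following minimal CPU-side sketch walks the virtual reference planes for one target pixel (noise lookup, gray-value comparison, fallback to the innermost layer); the layer-based gray scheme, the single-axis offset and the helper names are assumptions for illustration, not the patent's notation.

```python
import math

def find_render_uv(target_uv, view_angle_deg, layer_distances, noise, noise_size):
    """Return the UV of the rendering pixel point for one target pixel.

    target_uv:       (u, v) of the target pixel on the model plane
    view_angle_deg:  included angle between the line of sight and the model plane
    layer_distances: distances of the virtual reference planes from the model
                     plane, ordered from the model plane to the innermost layer
    noise:           2D list of gray values (the preset noise map)
    noise_size:      (width, height) of the noise map
    """
    n = len(layer_distances)
    inv_tan = 1.0 / math.tan(math.radians(view_angle_deg))
    for i, dist in enumerate(layer_distances, start=1):
        # Projection point: offset the UV along the view direction (for simplicity
        # along u only) by dist / tan(angle).
        u = target_uv[0] + dist * inv_tan
        v = target_uv[1]
        # First gray value: the noise map entry at the projection point.
        px = min(int(u * noise_size[0]), noise_size[0] - 1)
        py = min(int(v * noise_size[1]), noise_size[1] - 1)
        first_gray = noise[py][px]
        # Second gray value: layer-based scheme from the reconstructed formula.
        second_gray = 255.0 * (1.0 - i / n)
        # Judgment condition satisfied, or innermost layer reached: use this point.
        if first_gray > second_gray or i == n:
            return (u, v)

# Example with a 2x2 noise map and three virtual reference planes.
noise_map = [[30, 200], [120, 80]]
print(find_render_uv((0.1, 0.1), 60.0, [0.01, 0.02, 0.03], noise_map, (2, 2)))
```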
Step S518, a model corresponding to the observation position is rendered according to the rendering pixel value of the rendering pixel point.
After rendering, the rendered model may be post-processed to adjust the data, for example by adding anisotropic highlights and edge light to the model, performing a scattering calculation on it, and then rendering the model based on the results of the scattering calculation. In this way, by post-processing the model with anisotropic highlights, edge light and scattering calculations, various kinds of hair rendering can then be added as required, so that the post-processed model has better optical properties and the rendered model has a better visual effect.
The above method provided by the embodiment of the present invention operates on the pixel points of the model plane one by one; reference may be made to the flow block diagram of a rendering method shown in FIG. 6. For the current pixel point on the model plane of the model, the corresponding projection point is first determined, and it is checked whether the projection point satisfies the preset judgment condition; if so, the rendering pixel point is confirmed and preprocessing is performed. If not, it is judged whether the target virtual reference plane corresponding to the projection point is the last layer; if it is not, the projection point is determined again for the next layer of target virtual reference plane; if it is, the projection point corresponding to the last-layer target virtual reference plane is used directly for rendering-pixel-point confirmation and preprocessing. After data replacement and preprocessing are completed for the current pixel point, the next pixel point is selected and the data replacement and preprocessing operations continue.
In the conventional fin and shell methods, additional geometry such as patches or wrapping surfaces must be added when the model is built. The method provided by the embodiment of the present invention is based on the line-of-sight direction and the surface normal, and the model does not need to be modified. In the shell method, the shells are processed by the CPU (Central Processing Unit) and the shading by the GPU (Graphics Processing Unit); if the shells are to be handled purely on the GPU, the underlying rendering pipeline must be modified. The present method is a pure rendering method processed entirely on the GPU, so it is effective without modifying the underlying rendering pipeline. Shell insertion normally uses fully transparent rendering materials, and occlusion culling of the model's back faces consumes extra computation. The present method operates on individual pixels and has no back-face culling cost. Therefore, in theory, the method provided by the embodiment of the present invention saves computation compared with the shell method.
Secondly, the conventional fin and shell methods incur a certain computational cost when clipping and judging the inserted cards on the back of the model, whereas the method provided by the embodiment of the present invention uses a single judgment condition per pixel, can be processed entirely by the GPU without changing the rendering pipeline, and has no such computational cost.
Finally, since the method provided in this embodiment is based on pixel-level spatial operations, i.e. every pixel of the model plane can be replaced individually, it can treat individual hairs in a way that the conventional fin and shell methods cannot, so the treatment is finer than with the conventional methods. For example, the shell method can only vary the patch density from hair tip to base in units of triangles, whereas the method provided by this embodiment can vary it in units of individual hairs.
It should be noted that, the foregoing method embodiments are all described in a progressive manner, and each embodiment focuses on the differences from the other embodiments, and the same similar parts between the embodiments are all mutually referred to.
Corresponding to the method embodiment, the embodiment of the invention provides a rendering device, and a rendered model comprises a model plane; a schematic structural diagram of a rendering apparatus shown in fig. 7, the apparatus comprising:
a target pixel point obtaining module 71, configured to obtain a target pixel point on the model plane;
a virtual reference plane setting module 72, configured to set a plurality of virtual reference planes corresponding to the model planes, where the virtual reference planes are set in the model;
A target virtual reference plane selection module 73 for selecting a target virtual reference plane from the virtual reference planes;
the intersection point determining module 74 is configured to determine an intersection point between a connection line between a preset observation position and a target pixel point and a target virtual reference plane;
a projection point coordinate determining module 75, configured to determine coordinates of projection points of the intersection points on the model plane;
the rendering pixel point determining module 76 is configured to determine a rendering pixel point corresponding to the target pixel point based on a magnitude relation between a gray value of a corresponding position of the coordinates of the projection point in the preset noise map and a gray value corresponding to the target virtual reference plane;
the rendering pixel value rendering module 77 is configured to render a model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
According to the rendering device provided by the embodiment of the present invention, a suitable rendering pixel point is found for each target pixel point on the model plane, and the model is rendered with the rendering pixel values of those rendering pixel points to produce a fluff effect. Because the model is rendered from the rendering pixel values of the rendering pixel points, no additional geometry such as patches or wrapping surfaces needs to be added, a parallax fur visual effect can be produced, the consumption of computing resources and human resources is effectively reduced, and the underlying code of the rendering pipeline does not need to be modified.
In some embodiments, the rendered model is an arithmetic model.
In some embodiments, the target pixel acquiring module is configured to acquire a target pixel on the model surface according to the observation position.
In some embodiments, the above-mentioned projection point coordinate determining module is configured to determine the distance between the target virtual reference plane and the model plane; determine the included angle between the model plane and the line connecting the observation position and the target pixel point; determine the offset of the target pixel point by a trigonometric operation on the distance and the included angle; and add the offset to the coordinates of the target pixel point to obtain the coordinates of the projection point of the intersection point on the model plane.
In some embodiments, the rendering pixel point determining module is configured to judge whether the gray value at the corresponding position of the projection point's coordinates in the preset noise map is greater than the gray value corresponding to the target virtual reference plane; if so, to take the projection point as the rendering pixel point corresponding to the target pixel point; and if not, to replace the target virtual reference plane with the next virtual reference plane in the plane order from the model plane to the innermost virtual reference plane and continue to perform the step of determining the intersection point of the target virtual reference plane with the line connecting the preset observation position and the target pixel point, until all virtual reference planes have been traversed.
In some embodiments, the apparatus further includes an innermost plane pixel data replacing module, configured to, if the target virtual reference plane is the innermost plane, use the projection point corresponding to the innermost plane as the rendering pixel point corresponding to the target pixel point.
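As an illustrative sketch only, the layer traversal described by the two modules above could be organised as follows; it reuses the hypothetical projection_point_uv helper from the previous sketch, and all names and the noise-sampling callback are assumptions rather than part of the patent.

```python
def find_render_pixel(target_uv, view_dir_on_plane, view_angle,
                      layer_depths, layer_grays, sample_noise):
    """Hypothetical traversal from the model plane toward the innermost plane.

    layer_depths -- distances of the virtual reference planes from the model plane
    layer_grays  -- gray value assigned to each virtual reference plane
    sample_noise -- callable returning the noise-map gray value at a UV coordinate
    """
    proj_uv = target_uv
    for depth, plane_gray in zip(layer_depths, layer_grays):
        proj_uv = projection_point_uv(target_uv, view_dir_on_plane, depth, view_angle)
        # A noise gray value above the plane's gray value is read as a strand
        # covering this depth, so this projection point becomes the rendering pixel point.
        if sample_noise(proj_uv) > plane_gray:
            return proj_uv
    # All planes traversed: fall back to the projection point on the innermost plane.
    return proj_uv
```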
In some embodiments, the apparatus further includes a first gray value calculating module, configured to determine the gray value corresponding to the target virtual reference plane through the following correspondence between planes and gray values:
N = n1/n

wherein N is the gray value corresponding to the target virtual reference plane, n1 indicates that the target virtual reference plane is located at the n1-th layer in the plane order from the model plane to the innermost virtual reference plane, and n is the total number of layers in that plane order.
In some embodiments, the apparatus further includes a second gray value calculating module, configured to determine the gray value corresponding to the target virtual reference plane through the following correspondence between planes and gray values:
M = m1/m

wherein M is the gray value corresponding to the target virtual reference plane, m1 is the distance between the target virtual reference plane and the model plane, and m is the distance between the innermost plane and the model plane.
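To make the two conventions concrete, here is a small sketch, illustrative only, that assigns a plane's gray value either by layer index or by distance; the layer count and distances in the comment are made-up numbers, not values from the patent.

```python
def layer_gray_by_index(layer_index, total_layers):
    # N = n1 / n: the plane is the n1-th of n layers, counted from the model plane
    return layer_index / total_layers

def layer_gray_by_distance(plane_distance, innermost_distance):
    # M = m1 / m: distance of the plane from the model plane,
    # normalised by the distance of the innermost plane
    return plane_distance / innermost_distance

# With four evenly spaced planes the two conventions coincide, e.g.
# layer_gray_by_index(2, 4) == 0.5 and layer_gray_by_distance(0.5, 1.0) == 0.5
```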
In some embodiments, the above-mentioned rendering pixel value rendering module is configured to determine, based on the rendering pixel data of the target pixel point, a semi-transparent mask of the target pixel point, a normal value of the target pixel point, and a color of the target pixel point; and to render the model based on the determined semi-transparent mask, normal value, and color of the target pixel point.
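The patent does not spell out the shading that follows; a minimal, purely hypothetical sketch of turning the sampled data into an output color could look like this, where the texture-lookup callbacks and the simple diffuse term are assumptions made for the example.

```python
def shade_target_pixel(render_uv, sample_mask, sample_normal, sample_color, light_dir):
    """Hypothetical shading of one target pixel from its rendering pixel point."""
    alpha = sample_mask(render_uv)      # semi-transparent mask
    normal = sample_normal(render_uv)   # normal value of the strand
    base = sample_color(render_uv)      # strand color
    # Simple Lambert term as a stand-in for whatever lighting model the engine uses
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    rgb = tuple(c * ndotl for c in base)
    return rgb, alpha
```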
In some embodiments, the apparatus further comprises a post-processing module, configured to add anisotropic highlight and edge light to the model, perform scattering calculation on the model, and render the model based on the result of the scattering calculation.
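The patent does not prescribe a particular highlight model; purely for illustration, an anisotropic specular term in the style of Kajiya-Kay, evaluated along an assumed strand tangent, might be sketched as follows.

```python
import math

def aniso_highlight(tangent, light_dir, view_dir, exponent=64.0):
    """Hypothetical Kajiya-Kay-style highlight term; not taken from the patent."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # Half vector between light and view directions
    half = [l + v for l, v in zip(light_dir, view_dir)]
    norm = math.sqrt(dot(half, half)) or 1.0
    half = [h / norm for h in half]
    # The specular lobe peaks perpendicular to the strand tangent
    th = dot(tangent, half)
    return math.sqrt(max(0.0, 1.0 - th * th)) ** exponent
```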
The rendering device provided by the embodiment of the invention has the same technical characteristics as the rendering method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The embodiment of the invention also provides a terminal device for running the rendering method; referring to a schematic structural diagram of a terminal device shown in fig. 8, the terminal device includes a memory 100 and a processor 101, where the memory 100 is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor 101 to implement the above-mentioned rendering method.
Further, the terminal device shown in fig. 8 further includes a bus 102 and a communication interface 103, and the processor 101, the communication interface 103, and the memory 100 are connected through the bus 102.
The memory 100 may include a high-speed random access memory (Random Access Memory, RAM), and may further include a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory. The communication connection between this system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the internet, a wide area network, a local area network, a metropolitan area network, etc. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bi-directional arrow is shown in FIG. 8, but this does not mean that there is only one bus or one type of bus.
The processor 101 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 101 or by instructions in the form of software. The processor 101 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP for short), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field-Programmable Gate Array, FPGA for short) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component. The methods, steps, and logical blocks disclosed in the embodiments of the present invention may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium well established in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100 and, in combination with its hardware, performs the steps of the method of the previous embodiments.
The embodiment of the invention also provides a computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the above-mentioned rendering method; for the specific implementation, reference may be made to the method embodiment, which will not be repeated here.
The computer program product of the rendering method, the rendering device, and the terminal device provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and the specific implementation may refer to the method embodiment, which will not be repeated here.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and/or the terminal device described above may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that the above embodiments are only specific implementations of the present invention, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that any person familiar with this technical field may, within the technical scope disclosed by the present invention, still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions of some of the technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. A rendering method, characterized in that a rendered model comprises a model plane; the method comprises the following steps:
obtaining a target pixel point on the model plane;
setting a plurality of virtual reference planes corresponding to the model plane, wherein the virtual reference planes are located inside the model;
selecting a target virtual reference plane from the virtual reference planes;
determining an intersection point of the target virtual reference plane and a connecting line between a preset observation position and the target pixel point;
determining coordinates of projection points of the intersection points on the model plane;
determining a rendering pixel point corresponding to the target pixel point based on the magnitude relation between the gray value of the corresponding position of the coordinates of the projection point in a preset noise map and the gray value corresponding to the target virtual reference plane; wherein the noise map is used for representing the gray values corresponding to the projection points on the model plane, the noise map and the model plane are in the same coordinate system, and each coordinate in the noise map corresponds to one gray value;
and rendering a model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
2. The method of claim 1, wherein the model rendered is an arithmetic model.
3. The method of claim 1, wherein the step of obtaining the target pixel point on the model plane comprises:
acquiring a target pixel point on the surface of the model according to the observation position.
4. The method of claim 1, wherein the step of determining the coordinates of the projection point of the intersection point on the model plane comprises:
determining the distance between the target virtual reference plane and the model plane;
determining an included angle between a connecting line of the observation position and the target pixel point and the model plane;
multiplying the distance by a trigonometric function of the included angle to determine the offset of the target pixel point;
and adding the coordinates of the target pixel point and the offset to obtain the coordinates of the projection point of the intersection point on the model plane.
5. The method according to claim 1, wherein the step of determining the rendering pixel corresponding to the target pixel based on the magnitude relation between the gray value of the corresponding position of the coordinates of the projection point in the preset noise map and the gray value corresponding to the target virtual reference plane includes:
judging whether the gray value at the position corresponding to the coordinates of the projection point in the preset noise map is greater than the gray value corresponding to the target virtual reference plane;
If yes, taking the projection point as a rendering pixel point corresponding to the target pixel point;
if not, replacing the target virtual reference plane with the next virtual reference plane of the target virtual reference plane according to the plane sequence from the model plane to the innermost virtual reference plane, and continuing to execute the step of determining the intersection point of the connection line of the preset observation position and the target pixel point and the target virtual reference plane until all virtual reference planes are traversed.
6. The method of claim 5, wherein the method further comprises:
and if the target virtual reference plane is the innermost plane, taking the projection point corresponding to the innermost plane as the rendering pixel point corresponding to the target pixel point.
7. The method according to claim 1, wherein the method further comprises: determining the gray value corresponding to the target virtual reference plane through the following correspondence between planes and gray values:
N = n1/n

wherein N is the gray value corresponding to the target virtual reference plane, n1 indicates that the target virtual reference plane is located at the n1-th layer in the plane order from the model plane to the innermost virtual reference plane, and n is the total number of layers in that plane order.
8. The method according to claim 1, wherein the method further comprises: determining the gray value corresponding to the target virtual reference plane through the following correspondence between planes and gray values:
M = m1/m

wherein M is the gray value corresponding to the target virtual reference plane, m1 is the distance between the target virtual reference plane and the model plane, and m is the distance between the innermost plane and the model plane.
9. The method of claim 1, wherein the step of rendering the model corresponding to the observation position according to the rendering pixel values of the rendering pixels comprises:
determining a semi-transparent mask of the target pixel point, a normal value of the target pixel point and a color of the target pixel point based on the rendered pixel data of the target pixel point;
and rendering the model based on the determined semi-transparent mask of the target pixel point, the normal value of the target pixel point and the color of the target pixel point.
10. The method according to claim 1, wherein the method further comprises:
adding anisotropic highlight and edge light to the model, and performing scattering calculation on the model; and
rendering the model based on the result of the scattering calculation.
11. A rendering device, characterized in that a rendered model comprises a model plane; the device comprises:
the target pixel point acquisition module is used for acquiring a target pixel point on the model plane;
the virtual reference plane setting module is used for setting a plurality of virtual reference planes corresponding to the model plane, wherein the virtual reference planes are located inside the model;
a target virtual reference plane selection module, configured to select a target virtual reference plane from the virtual reference planes;
the intersection point determining module is used for determining an intersection point of the target virtual reference plane and a connecting line between a preset observation position and the target pixel point;
the projection point coordinate determining module is used for determining the coordinates of the projection points of the intersection points on the model plane;
the rendering pixel point determining module is used for determining a rendering pixel point corresponding to the target pixel point based on the magnitude relation between the gray value of the corresponding position of the coordinates of the projection point in the preset noise map and the gray value corresponding to the target virtual reference plane; wherein the noise map is used for representing the gray values corresponding to the projection points on the model plane, the noise map and the model plane are in the same coordinate system, and each coordinate in the noise map corresponds to one gray value;
And the rendering pixel value rendering module is used for rendering the model corresponding to the observation position according to the rendering pixel value of the rendering pixel point.
12. A terminal device comprising a processor and a memory, the memory storing computer executable instructions executable by the processor, the processor executing the computer executable instructions to implement the steps of the rendering method of any one of claims 1 to 10.
13. A computer readable storage medium storing computer executable instructions which, when invoked and executed by a processor, cause the processor to implement the steps of the rendering method of any one of claims 1 to 10.
CN202010137851.6A 2020-03-02 2020-03-02 Rendering method, rendering device and terminal equipment Active CN111369655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010137851.6A CN111369655B (en) 2020-03-02 2020-03-02 Rendering method, rendering device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010137851.6A CN111369655B (en) 2020-03-02 2020-03-02 Rendering method, rendering device and terminal equipment

Publications (2)

Publication Number Publication Date
CN111369655A CN111369655A (en) 2020-07-03
CN111369655B true CN111369655B (en) 2023-06-30

Family

ID=71206496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010137851.6A Active CN111369655B (en) 2020-03-02 2020-03-02 Rendering method, rendering device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111369655B (en)

Also Published As

Publication number Publication date
CN111369655A (en) 2020-07-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant