CN112316425A - Picture rendering method and device, storage medium and electronic equipment


Info

Publication number: CN112316425A
Application number: CN202011270927.9A
Authority: CN (China)
Legal status: Pending
Applicant and assignee: Netease Hangzhou Network Co Ltd
Inventor: Zheng Chao (郑超)
Original language: Chinese (zh)

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Abstract

The embodiment of the application discloses a picture rendering method and device, a storage medium and electronic equipment. The method respectively acquires a first camera distance of a target three-dimensional model in a virtual scene relative to the plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to that plane, and determines the front-back positional relationship between the target three-dimensional model and the two-dimensional model according to the two distances. The region of the target three-dimensional model that needs to be drawn is then determined based on this front-back relationship, and picture rendering is performed on that region. By treating each model as a whole and judging the front-back relationship between objects by the model's distance to the virtual camera plane, the scheme keeps the camera distance consistent for all points on a model, ensuring that the three-dimensional model lies entirely in front of or entirely behind the two-dimensional model. This avoids interpenetration between objects and improves the picture display effect.

Description

Picture rendering method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of information processing, and in particular, to a method and an apparatus for rendering a screen, a storage medium, and an electronic device.
Background
In game production, placing two-dimensional paper characters in a three-dimensional scene is a common game mode. In practice, when a two-dimensional paper character is placed in a three-dimensional scene, the two kinds of objects can intersect, which easily produces an interpenetration problem between objects.
In the related art, interpenetration between objects is typically handled by adding a certain depth offset to one object so that it lies in front of or behind the other. However, since depth information is pixel-level and the depth varies across the same object, one part of the object may end up in front of the character while another part ends up behind it, resulting in a poor occlusion effect between objects.
Disclosure of Invention
The embodiment of the application provides a picture rendering method, a picture rendering device, a storage medium and electronic equipment, which can solve the problem of interpenetration among objects and improve the picture display effect.
The embodiment of the application provides a picture rendering method, which comprises the following steps:
respectively acquiring a first camera distance of a target three-dimensional model in a virtual scene relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
determining the front-back position relation between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
determining a region of the target three-dimensional model to be drawn based on the front-back position relation;
and performing picture rendering on the area needing to be drawn in the virtual scene.
Correspondingly, an embodiment of the present application further provides a screen rendering apparatus, including:
the system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for respectively acquiring a first camera distance of a target three-dimensional model in a virtual scene relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
the first determining unit is used for determining the front-back position relation between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
a second determining unit, configured to determine, based on the front-back positional relationship, a region where the target three-dimensional model needs to be drawn;
and the rendering unit is used for performing picture rendering on the area needing to be drawn in the virtual scene.
In some embodiments, when acquiring the first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located, the acquisition unit is configured to:
determining the position of a central point of the target three-dimensional model;
and taking the distance between the central point position and the plane where the virtual camera is positioned as the first camera distance of the target three-dimensional model relative to the plane where the virtual camera is positioned.
In some embodiments, when determining the anteroposterior positional relationship of the target three-dimensional model and the two-dimensional model from the first camera distance and the second camera distance, the first determination unit is configured to:
determining the target three-dimensional model as being located behind the two-dimensional model when the first camera distance is greater than the second camera distance;
determining the target three-dimensional model as being located in front of the two-dimensional model when the first camera distance is less than the second camera distance.
In some embodiments, when determining the region where the target three-dimensional model needs to be rendered based on the anteroposterior positional relationship, the second determination unit is configured to:
and determining the region of the target three-dimensional model to be drawn according to preset mask information and the front-back position relation.
In some embodiments, when determining the region where the target three-dimensional model needs to be rendered based on the anteroposterior positional relationship, the second determination unit is configured to:
obtaining a third camera distance of each pixel point in the target three-dimensional model relative to the plane;
when the first camera distance is larger than the second camera distance, determining an area needing to be drawn according to pixel points in the target three-dimensional model, of which the third camera distance is larger than the first camera distance;
and when the first camera distance is smaller than the second camera distance, determining the area to be drawn according to the pixel point of which the third camera distance is smaller than the first camera distance in the target three-dimensional model.
In some embodiments, the apparatus further comprises:
the dividing unit is used for dividing the two-dimensional models into models of different model types according to the positions of the two-dimensional models in the virtual scene when the number of the two-dimensional models in the virtual scene exceeds a preset number;
and the setting unit is used for setting the front-back position relationship between the two-dimensional model and the target three-dimensional model according to the model type, wherein the front-back position relationship between the two-dimensional model and the three-dimensional model is the same for the two-dimensional model of the same model type.
In some embodiments, when the dividing of the two-dimensional models into models of different model types according to the position of each two-dimensional model in the virtual scene, the dividing unit is configured to:
dividing the virtual scene into the preset number of regions along the visual angle direction of the virtual camera;
and dividing the two-dimensional models in the same region into the same model type.
In some embodiments, when acquiring the first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located, the acquisition unit is configured to:
obtaining the main body shape of the target three-dimensional model;
decomposing the target three-dimensional model into a plurality of sub three-dimensional models according to the main body shape, wherein the main body shape of the sub three-dimensional models is a non-concave body;
and respectively obtaining the distances of the plurality of sub three-dimensional models relative to the plane where the virtual camera is located to obtain a first camera distance.
In some embodiments, the first camera distance comprises: the sub-camera distance of the sub-three-dimensional model relative to the plane where the virtual camera is located; when determining the front-back position relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance, the first determination unit is further configured to:
and determining the front-back position relation between each sub three-dimensional model and the two-dimensional model according to the plurality of sub camera distances and the second camera distance.
Correspondingly, the embodiment of the present application further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are suitable for being loaded by a processor to perform the steps in any one of the image rendering methods provided in the embodiment of the present application.
Correspondingly, the embodiment of the application further provides an electronic device, which includes a processor and a memory, wherein the memory stores a plurality of instructions; the processor loads the instructions from the memory to execute the steps of any screen rendering method provided by the embodiment of the application.
According to the scheme provided by the embodiment of the application, the first camera distance of the target three-dimensional model relative to the plane where the virtual camera is located and the second camera distance of the two-dimensional model relative to that plane are respectively acquired, and the front-back positional relationship between the target three-dimensional model and the two-dimensional model is determined according to the first camera distance and the second camera distance. The region of the target three-dimensional model that needs to be drawn is determined based on this front-back relationship, and picture rendering is performed on that region. Because the scheme treats each model as a whole and judges the front-back relationship between objects by the distance between the model and the virtual camera plane, the camera distance of every point on the model stays consistent, the three-dimensional model is guaranteed to lie entirely in front of or entirely behind the two-dimensional model, the interpenetration problem between objects is avoided, and the picture display effect is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a picture rendering method according to an embodiment of the present application.
Fig. 2 is a scene schematic diagram of a camera distance calculation method according to an embodiment of the present application.
Fig. 3 is a scene schematic diagram of a two-dimensional model classification manner provided in the embodiment of the present application.
Fig. 4 is a scene schematic diagram of interspersed display of a three-dimensional model and a two-dimensional model according to the embodiment of the present application.
Fig. 5 is a schematic view of an exploded rule of a concave body provided in an embodiment of the present application.
Fig. 6 is an exploded schematic view of a three-dimensional model provided in an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a screen rendering apparatus according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a picture rendering method, a picture rendering device, a storage medium and electronic equipment.
The screen rendering device may be specifically integrated in an electronic device. The electronic device may be a terminal or a server. For example, the terminal may be a mobile phone, a tablet Computer, an intelligent bluetooth device, a notebook Computer, or a Personal Computer (PC).
The following are detailed below. The numbers in the following examples are not intended to limit the order of preference of the examples.
In this embodiment, a screen rendering method is provided, where an execution subject of each step included in the method may be a server or a terminal, and execution subjects of different steps may be the same or different. As shown in fig. 1, the method for rendering a screen includes the following steps:
101. a first camera distance of a target three-dimensional model in the virtual scene relative to a plane where the virtual camera is located and a second camera distance of the two-dimensional model relative to the plane are obtained respectively.
The virtual scene refers to a three-dimensional scene which needs to use a picture rendering technology, namely a three-dimensional virtual scene. For example, the virtual scene may be a network game scene, a network animation scene, and the like.
In this embodiment, the number of the target three-dimensional models may be one or more. The target three-dimensional model can be a three-dimensional object with a relatively fixed position in a virtual scene, such as a three-dimensional model of furniture articles like chairs, tables, sofas, boxes and the like. The two-dimensional model may be a two-dimensional map with variable positions (especially freely movable) placed in a three-dimensional virtual scene, such as a two-dimensional paper person, a two-dimensional paper car, and the like.
In this embodiment, the virtual camera may be regarded as the observer of the picture. When the distance between the three-dimensional model and the virtual camera is calculated, the three-dimensional model can be treated as a whole and its distance to the plane where the virtual camera is located computed, instead of computing a distance for each pixel point of the model separately. Referring to fig. 2, the dotted line represents the plane of the virtual camera (i.e., the light incident surface from the viewer's perspective), which may be perpendicular to the horizontal plane. The first camera distance of the target three-dimensional model relative to the plane where the virtual camera is located is then the perpendicular distance from the target three-dimensional model to that plane; the second camera distance of the two-dimensional model relative to the plane is the perpendicular distance from the two-dimensional model to that plane.
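As a minimal sketch (not taken from the patent itself), the perpendicular distance of a world-space point to the camera plane can be computed by projecting the point onto the camera's unit forward axis; the names camera_pos, camera_forward and point are illustrative assumptions:

```python
def camera_plane_distance(point, camera_pos, camera_forward):
    """Perpendicular distance from a world-space point to the plane that
    passes through the camera and is normal to its (unit) view direction."""
    # Project the camera-to-point vector onto the forward axis; positive
    # values lie in front of the camera plane.
    dx = point[0] - camera_pos[0]
    dy = point[1] - camera_pos[1]
    dz = point[2] - camera_pos[2]
    return dx * camera_forward[0] + dy * camera_forward[1] + dz * camera_forward[2]
```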
In practical application, there are various ways to obtain the distance between the three-dimensional model and the virtual camera, and one may be selected according to actual requirements. For example, the position of the center point of the three-dimensional model (i.e., the point equidistant from the model's periphery) may be used as the position of the whole model to calculate its distance. That is, in some embodiments, obtaining the first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located may include the following process:
determining the position of a central point of the target three-dimensional model;
and taking the distance between the central point position and the plane where the virtual camera is located as the first camera distance of the target three-dimensional model relative to that plane.
For another example, the position of the point in the three-dimensional model closest to the virtual camera may be used as the position of the entire model to calculate the distance. That is, in some embodiments, obtaining the first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located may include the following process:
determining the distance between each pixel point in the target three-dimensional model and the plane where the virtual camera is located to obtain a distance set;
and taking the shortest distance in the distance set as the first camera distance of the target three-dimensional model relative to the plane where the virtual camera is located.
In addition, the position of the point farthest from the virtual camera in the three-dimensional model can be used as the position of the whole model, so that the distance between the three-dimensional model and the virtual camera can be calculated.
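The three whole-model strategies just described (center point, nearest point, farthest point) can be sketched as follows, reusing camera_plane_distance from the sketch above; the model fields center and points are illustrative assumptions:

```python
def model_camera_distance(model, camera_pos, camera_forward, mode="center"):
    """Whole-model camera distance under one of the three strategies:
    the model's center point, its nearest point, or its farthest point."""
    if mode == "center":
        return camera_plane_distance(model.center, camera_pos, camera_forward)
    dists = [camera_plane_distance(p, camera_pos, camera_forward)
             for p in model.points]
    return min(dists) if mode == "nearest" else max(dists)
```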
In this embodiment, the display of a three-dimensional model in the picture may be controlled by using a Stencil Buffer. The Stencil Buffer is a buffer used by the GPU (Graphics Processing Unit) to record per-object information, and it can control the display of an object at each pixel when rendering the picture. Since the Stencil Buffer generally has 8 bits, if the number of two-dimensional models is less than or equal to 8, the serial numbers 0-7 can be allocated in sequence.
In some embodiments, when the number of the two-dimensional models in the virtual scene exceeds a preset number, the two-dimensional models may be grouped according to the position of each two-dimensional model in the virtual scene to divide the two-dimensional models into models of different model types, and the front-back position relationship between the two-dimensional models and the target three-dimensional model may be set according to the divided model types. Specifically, the anteroposterior positional relationship between the two-dimensional model and the target three-dimensional model belonging to the same model type may be set to be the same. That is, for a two-dimensional model of the same model type, the anteroposterior positional relationship between it and the three-dimensional model is the same.
In specific implementation, two-dimensional models with relatively close positions can be divided into the same model type according to the distance relation of the positions.
In order to achieve a better rendering effect, the buffer space provided by the Stencil Buffer can be fully utilized by maximizing the number of groups. That is, the two-dimensional models may be divided into the preset number of model types. For example, if the number of two-dimensional models exceeds 8, they may be clustered into 8 categories, and the serial numbers 0-7 assigned per category. In other words, when dividing all two-dimensional models into different model types according to their positions in the virtual scene, the virtual scene may be divided into the preset number of regions along the view direction of the virtual camera, and two-dimensional models located in the same region assigned to the same model type.
For example, referring to fig. 3, a method of dividing a region may be employed to divide a scene into 8 regions along the direction of a camera. The two-dimensional model characters in the same region can be assigned with the same serial number and share the same digit on the Stencil Buffer, so that more two-dimensional model characters are supported.
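A minimal sketch of this grouping, assuming each sprite object exposes a position and a writable stencil_id attribute (both illustrative names), and reusing camera_plane_distance from above:

```python
NUM_GROUPS = 8  # one group per Stencil Buffer bit

def assign_stencil_ids(sprites, camera_pos, camera_forward):
    """Slice the scene into NUM_GROUPS regions along the view direction and
    give every 2D sprite in the same slice the same serial number (0-7)."""
    dists = [camera_plane_distance(s.position, camera_pos, camera_forward)
             for s in sprites]
    near, far = min(dists), max(dists)
    span = max(far - near, 1e-6)  # guard against a zero-width scene
    for sprite, d in zip(sprites, dists):
        slot = int((d - near) / span * NUM_GROUPS)
        sprite.stencil_id = min(slot, NUM_GROUPS - 1)
```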
102. And determining the front-back position relation of the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance.
In this embodiment, in order to avoid the object interpenetration caused by comparing the front-back relationship between the three-dimensional model and the two-dimensional model with pixel-level depth information, the three-dimensional model is treated as a whole when calculating the camera distance, so that the camera distances of all pixel points on the three-dimensional model stay consistent, ensuring that all pixel points of the three-dimensional model lie in front of or behind the two-dimensional model at the same time.
Specifically, when the first camera distance is greater than the second camera distance, the target three-dimensional model is determined to be behind the two-dimensional model; when the first camera distance is less than the second camera distance, the target three-dimensional model is determined to be in front of the two-dimensional model.
It should be noted that when the first camera distance equals the second camera distance, the choice follows from how the first camera distance was obtained: when it is calculated from the model's center point, the target three-dimensional model may be placed either behind or in front of the two-dimensional model; when it is calculated from the point of the model closest to the plane where the virtual camera is located, the target three-dimensional model may be determined to be behind the two-dimensional model; and when it is calculated from the point farthest from that plane, the target three-dimensional model may be determined to be in front of the two-dimensional model.
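A minimal sketch of this decision, including the tie-breaking rule above (the mode argument mirrors the three distance strategies and is an illustrative name):

```python
def model_is_behind(first_dist, second_dist, mode="center"):
    """True when the 3D model should be treated as behind the 2D model.
    On a tie, 'nearest' puts the model behind, 'farthest' puts it in
    front, and 'center' may pick either side (here: behind)."""
    if first_dist != second_dist:
        return first_dist > second_dist
    return mode != "farthest"
```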
103. And determining the region of the target three-dimensional model to be drawn based on the front-back position relation.
In some embodiments, when determining the region of the target three-dimensional model that needs to be drawn based on the front-back positional relationship, the region may be determined according to preset mask information and the front-back positional relationship, where the mask information defines the area of the scene to be rendered. Specifically, determining the region to be drawn in the target three-dimensional model may include the following process:
acquiring a third camera distance of each pixel point in the target three-dimensional model relative to a plane where the virtual camera is located;
when the first camera distance is larger than the second camera distance, determining an area to be drawn according to pixel points of the target three-dimensional model, of which the third camera distance is larger than the first camera distance, and preset mask information;
and when the first camera distance is smaller than the second camera distance, determining the area to be drawn according to the pixel point of which the third camera distance is smaller than the first camera distance in the target three-dimensional model and preset mask information.
The third camera distance of each pixel point in the target three-dimensional model relative to the plane where the virtual camera is located is equivalent to the depth of that pixel point relative to the plane.
When the first camera distance is greater than the second camera distance, the target three-dimensional model is behind the two-dimensional model, and the part of the target three-dimensional model that pokes out in front of the two-dimensional model is not drawn. In this case, the region to be rendered in the scene picture may be determined based on the preset mask information, and the local region formed by pixel points whose depth value is greater than the first camera distance (i.e., pixel points behind the central point) within that region is taken as the region to be drawn.
When the first camera distance is smaller than the second camera distance, the target three-dimensional model is in front of the two-dimensional model, and the part of the target three-dimensional model that pokes out behind the two-dimensional model is not drawn. In this case, the region to be rendered in the scene picture may be determined based on the preset mask information, and the local region formed by pixel points whose depth value is smaller than the first camera distance (i.e., pixel points in front of the central point) within that region is taken as the region to be drawn.
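The per-pixel selection can be sketched as a pure predicate (a sketch only; in a real renderer this decision would live in a shader, and in_mask stands in for the preset mask information):

```python
def pixel_needs_drawing(pixel_depth, first_dist, second_dist, in_mask):
    """Keep only the pixels of the 3D model on the same side as its
    whole-model distance, restricted to the masked render region."""
    if not in_mask:
        return False
    if first_dist > second_dist:         # model is behind the 2D sprite
        return pixel_depth > first_dist  # keep the pixels behind the center
    return pixel_depth < first_dist      # model in front: keep front pixels
```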
In this embodiment, the camera distance is used instead of depth information when determining the front-back order because depth information is pixel-level and the depth differs from place to place on the same object, which can leave one part of the object in front of the character and another part behind it, producing an interpenetration effect (see the left image of fig. 4). The camera distance, by contrast, is consistent across the object, ensuring that the object is in front of or behind the character at the same time; when the center point is chosen to calculate the distance to the camera, the camera distances of the other points on the object are kept consistent with that of the center point, so interpenetration is avoided and a better occlusion effect is achieved (see the right image of fig. 4).
104. And performing picture rendering on an area needing to be drawn in the virtual scene.
Specifically, the region to be drawn is colored based on preset rendering parameters, and the depth information of the region to be drawn relative to the virtual camera is written into a depth buffer, so as to realize the picture rendering of the virtual scene.
In practical applications, in some scenes where a more complex three-dimensional model interacts with a two-dimensional model, the three-dimensional model is not wholly in front of or wholly behind the two-dimensional model. For example, take a three-dimensional seat model with armrests: when a two-dimensional paper character sits on the seat with a hand placed on the outside of an armrest, a side view of the scene should display, from near to far: the character's hand, the armrest and other visible parts of the seat, then the character's body. In such a case, the camera distance cannot be calculated with the seat model treated as a whole; otherwise, because the paper character is closer to the camera, the entire seat would be judged to be behind the character and be occluded by it, making the displayed result inconsistent with the actual situation.
Based on the above scheme, a three-dimensional model with a complex structure can be split into a plurality of structurally simple three-dimensional models; the camera distance of each simple model is calculated, and its front-back relationship with the two-dimensional models is determined from that distance, thereby avoiding display results that are inconsistent with the actual situation due to the model's complex structure. A structurally complex three-dimensional model here is a concave model with an uneven outline; a structurally simple three-dimensional model is a non-concave model. That is, in some embodiments, acquiring the first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located may specifically include the following process:
obtaining the main body shape of the target three-dimensional model;
decomposing the target three-dimensional model into a plurality of sub three-dimensional models according to the main body shape, wherein the main body shape of each sub three-dimensional model is a non-concave body;
and respectively obtaining the distances of the plurality of sub three-dimensional models relative to the plane where the virtual camera is located to obtain a first camera distance.
Specifically, when the main body shape of the target three-dimensional model is obtained, small decorations and small objects that have no occluding effect can be ignored. The target three-dimensional model is then decomposed to a degree matching the complexity of the main body shape. For example, when the main body shape is recognized as a concave body, the target three-dimensional model may be decomposed into a plurality of sub three-dimensional models whose main body shapes are non-concave; see fig. 5 for a splitting example (a concave body on the left, the non-concave bodies after splitting on the right). The distance of each sub three-dimensional model relative to the plane where the virtual camera is located is then acquired with the camera-distance method above to obtain the first camera distance, so that the picture can be rendered based on the positional relationship between each sub three-dimensional model and the two-dimensional model, further improving the display effect. To save system resources, no decomposition is needed when the main body shape is recognized as a convex body (such as a sphere or cuboid).
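A sketch of the per-sub-model distances, where decompose_into_convex_parts is a hypothetical helper standing in for a concave-to-convex decomposition routine (such as convex splitting of a concave polygon), and each sub-model's center attribute is assumed:

```python
def sub_model_distances(model, camera_pos, camera_forward):
    """Split a concave model into non-concave parts and return one camera
    distance per part, each computed from that part's own center point."""
    parts = decompose_into_convex_parts(model)  # hypothetical helper
    return [(p, camera_plane_distance(p.center, camera_pos, camera_forward))
            for p in parts]
```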
In some embodiments, the first camera distance comprises: and the sub-camera distance of each sub-three-dimensional model relative to the plane of the virtual camera. When the front-back position relationship between the target three-dimensional model and the two-dimensional model is determined according to the first camera distance and the second camera distance, the front-back position relationship between each sub three-dimensional model and the two-dimensional model may be determined according to a plurality of sub camera distances and the second camera distance.
As can be seen, the picture rendering method provided in the embodiment of the present application respectively acquires a first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located and a second camera distance of the two-dimensional model relative to that plane, and determines the front-back positional relationship between the target three-dimensional model and the two-dimensional model according to the two distances. The region that needs to be drawn is determined based on this relationship, and picture rendering is performed on it. Because the scheme treats the model as a whole and judges the front-back relationship between objects by the distance between the model and the virtual camera plane, all points on the model keep a consistent camera distance, the model is guaranteed to lie entirely in front of or entirely behind the two-dimensional model, the interpenetration problem between objects is avoided, and the picture display effect is improved.
The above-mentioned image rendering method will be further described in detail in the embodiments of the present application, taking the two-dimensional model as a two-dimensional paper figure and the three-dimensional model as a seat model as examples.
Specifically, a serial number needs to be allocated to each two-dimensional character in the virtual scene. Since the Stencil Buffer generally has 8 bits, if the number of characters is no more than 8, the serial numbers 0-7 can be allocated in sequence; otherwise, the characters can be clustered into 8 classes, and the serial numbers 0-7 allocated per class.
Since a 2.5D view angle is used, determining the front-back relationship between a 2D character and a 3D model is straightforward: their positions can be compared by calculating the distance along the camera direction. The camera distance is used instead of the depth of the prior art because depth information is pixel-level and differs from place to place on the same object, which would leave one part of the object in front of the character and another part behind it, producing an interpenetration effect. The camera distance is consistent across the object and ensures that the object is in front of or behind the character at the same time.
In practical application, the center point of the three-dimensional model can be selected to calculate the distance between the three-dimensional model and the plane where the virtual camera is located, so that the camera distances between other points on the three-dimensional model and the plane where the virtual camera is located are consistent with the camera distance between the center point and the plane where the virtual camera is located.
It should be noted that for some special models which do not occlude characters (such as carpets and floors), the positional relationship does not need to be considered.
Since the Stencil Buffer generally has 8 bits, each corresponding to one character serial number, for each three-dimensional model a bit is set to 1 to indicate that the two-dimensional paper character with the corresponding serial number is behind the three-dimensional model (occluded by it), and to 0 to indicate that the character is in front of it; for each two-dimensional paper character, the mask has only the bit of its own serial number set to 1.
During model rendering, the stencil value of the model is written into the Stencil Buffer to control the display of the two-dimensional models. Assume for three-dimensional model A that the two-dimensional paper characters with serial numbers 0, 2 and 5 are behind it and the rest are in front of it. Then the stencil value of three-dimensional model A is 0b00100101, where 0b is the binary prefix and the value itself is 00100101. The stencil operation is set to Replace, so this value is written into the Stencil Buffer when the model is drawn.
Before rendering a character, the current stencil value is checked. If the character's bit is 1, the character is occluded at that pixel and is not drawn; otherwise the two-dimensional model is drawn. Assume character B has serial number 1; then the stencil mask of B is 0b00000010, and the stencil test is configured so that an AND operation is performed between the stencil value of the current pixel and the mask when drawing. If the result is nonzero, the pixel is not drawn, which realizes the occlusion of the two-dimensional paper character by the three-dimensional model.
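A minimal sketch of this bitmask bookkeeping in plain Python (the actual stencil writes and tests would be issued through the graphics API; bit index = character serial number is assumed, per the numbering above):

```python
def stencil_value(behind_serials):
    """Stencil value of a 3D model: bit i is 1 when the 2D character
    with serial number i is behind (occluded by) the model."""
    value = 0
    for i in behind_serials:
        value |= 1 << i
    return value

def character_visible(stencil, serial):
    """Stencil test when drawing a 2D character: occluded when its bit is set."""
    return (stencil & (1 << serial)) == 0

assert stencil_value({0, 2, 5}) == 0b00100101  # model A in the example
assert character_visible(0b00100101, 1)        # character B (serial 1) is drawn
```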
In practical applications, take a seat model that presents a concave body: when a two-dimensional paper character sits on the seat, the armrest that should occlude the character fails to appear in front of it, producing an effect inconsistent with the actual situation. This is because the character is closer to the camera than the seat, so the system determines that the seat as a whole is behind the two-dimensional paper character, and the entire seat is occluded by the character.
In order for the armrest part to occlude in front of the two-dimensional paper character, the seat model needs to be disassembled, separating the base from the armrests. Specifically, when splitting the model, the main body shape of the three-dimensional model may be obtained first (ignoring small decorations and small objects without an occluding effect), then any shape with a recess may be split into shapes without recesses (see, e.g., methods for convex decomposition of a concave polygon), finally yielding a set of convex polyhedra for the three-dimensional model. For example, referring to fig. 6, the concave seat is divided into three cuboids: armrest X1, base X2 and armrest X3.
Finally, the original model is split at the determined positions to obtain the final result. When a character sits in the chair, the base is adjusted to be behind the character and the armrests in front of it, and drawing then proceeds as described above. This scheme handles seats with armrests, corridors with railings and the like, giving the characters a stronger sense of immersion.
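Putting the pieces together for the seat of fig. 6, under the same assumptions and with made-up illustrative distances along the camera axis:

```python
# Illustrative camera-axis distances: the paper character sits between
# the near armrests and the far seat base.
character_dist = 5.0
parts = {"armrest_X1": 4.0, "base_X2": 6.0, "armrest_X3": 4.0}

for name, dist in parts.items():
    relation = "behind" if dist > character_dist else "in front of"
    print(f"{name} is {relation} the character")
# -> armrest_X1 is in front of the character
# -> base_X2 is behind the character
# -> armrest_X3 is in front of the character
```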
By treating the model as a whole and judging the front-back relationship between objects by the distance between the model and the virtual camera plane, the scheme keeps the camera distance of all points on the model consistent, guarantees that the model lies entirely in front of or entirely behind the two-dimensional model, avoids the interpenetration problem between objects, and improves the picture display effect.
In order to better implement the method, an embodiment of the present application further provides a screen rendering apparatus, which may be specifically integrated in an electronic device. The electronic device can be a terminal, a server and the like. The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices; the server may be a single server or a server cluster composed of a plurality of servers.
In this embodiment, the method of the embodiment of the present application will be described in detail by taking an example in which the screen rendering device is specifically integrated in a smartphone. For example, as shown in fig. 7, the screen rendering apparatus may include an acquisition unit 301, a first determination unit 302, a second determination unit 303, and a rendering unit 304, as follows:
an obtaining unit 301, configured to obtain a first camera distance of a three-dimensional model of a target in a virtual scene relative to a plane where a virtual camera is located, and a second camera distance of a two-dimensional model relative to the plane, respectively;
a first determining unit 302, configured to determine a front-back position relationship between the target three-dimensional model and the two-dimensional model according to a first camera distance and a second camera distance;
a second determining unit 303, configured to determine, based on the front-back position relationship, a region where the target three-dimensional model needs to be drawn;
and a rendering unit 304, configured to perform screen rendering on an area that needs to be drawn in the virtual scene.
In some embodiments, when acquiring the first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located, the acquisition unit is configured to:
determining the position of a central point of the target three-dimensional model;
and taking the distance between the central point position and the plane where the virtual camera is positioned as the first camera distance of the target three-dimensional model relative to the plane where the virtual camera is positioned.
In some embodiments, when determining the anteroposterior positional relationship of the target three-dimensional model and the two-dimensional model from the first camera distance and the second camera distance, the first determination unit is configured to:
determining the target three-dimensional model as being located behind the two-dimensional model when the first camera distance is greater than the second camera distance;
determining the target three-dimensional model as being located in front of the two-dimensional model when the first camera distance is less than the second camera distance.
In some embodiments, when determining the region where the target three-dimensional model needs to be rendered based on the anteroposterior positional relationship, the second determination unit is configured to:
and determining the region of the target three-dimensional model to be drawn according to preset mask information and the front-back position relation.
In some embodiments, when determining the region where the target three-dimensional model needs to be rendered based on the anteroposterior positional relationship, the second determination unit is configured to:
obtaining a third camera distance of each pixel point in the target three-dimensional model relative to the plane;
when the first camera distance is larger than the second camera distance, determining an area needing to be drawn according to pixel points in the target three-dimensional model, of which the third camera distance is larger than the first camera distance;
and when the first camera distance is smaller than the second camera distance, determining the area to be drawn according to the pixel point of which the third camera distance is smaller than the first camera distance in the target three-dimensional model.
In some embodiments, the apparatus further comprises:
the dividing unit is used for dividing the two-dimensional models into models of different model types according to the positions of the two-dimensional models in the virtual scene when the number of the two-dimensional models in the virtual scene exceeds a preset number;
and the setting unit is used for setting the front-back position relationship between the two-dimensional model and the target three-dimensional model according to the model type, wherein the front-back position relationship between the two-dimensional model and the three-dimensional model is the same for the two-dimensional model of the same model type.
In some embodiments, when the dividing of the two-dimensional models into models of different model types according to the position of each two-dimensional model in the virtual scene, the dividing unit is configured to:
dividing the virtual scene into the preset number of regions along the visual angle direction of the virtual camera;
and dividing the two-dimensional models in the same region into the same model type.
In some embodiments, when acquiring the first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located, the acquisition unit is configured to:
obtaining the main body shape of the target three-dimensional model;
decomposing the target three-dimensional model into a plurality of sub three-dimensional models according to the main body shape, wherein the main body shape of the sub three-dimensional models is a non-concave body;
and respectively obtaining the distances of the plurality of sub three-dimensional models relative to the plane where the virtual camera is located to obtain a first camera distance.
In some embodiments, the first camera distance comprises: the sub-camera distance of the sub-three-dimensional model relative to the plane where the virtual camera is located; when determining the front-back position relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance, the first determination unit is further configured to:
and determining the front-back position relation between each sub three-dimensional model and the two-dimensional model according to the plurality of sub camera distances and the second camera distance.
As can be seen from the above, in the picture rendering apparatus of this embodiment, the acquisition unit 301 respectively acquires a first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located and a second camera distance of the two-dimensional model relative to that plane; the first determination unit 302 determines the front-back positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance; the second determination unit 303 determines the region of the target three-dimensional model that needs to be drawn based on that relationship; and the rendering unit 304 performs picture rendering on the region. By treating the model as a whole and judging the front-back relationship between objects by the distance between the model and the virtual camera plane, the scheme keeps the camera distance of all points on the model consistent, ensures the three-dimensional model lies entirely in front of or entirely behind the two-dimensional model, avoids interpenetration between objects, and improves the picture display effect.
Correspondingly, the embodiment of the present application further provides an electronic device, where the electronic device may be a terminal or a server, and the terminal may be a terminal device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game machine, a Personal computer, and a Personal Digital Assistant (PDA). As shown in fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. Those skilled in the art will appreciate that the electronic device configurations shown in the figures do not constitute limitations of the electronic device, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The processor 401 is a control center of the electronic device 400, connects various parts of the whole electronic device 400 by using various interfaces and lines, performs various functions of the electronic device 400 and processes data by running or loading software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device 400.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to processes of one or more application programs into the memory 402 according to the following steps, and the processor 401 runs the application programs stored in the memory 402, so as to implement various functions:
respectively acquiring a first camera distance of a target three-dimensional model in a virtual scene relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
determining the front-back position relation between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
determining a region of the target three-dimensional model to be drawn based on the front-back position relation;
and performing picture rendering on the area needing to be drawn in the virtual scene.
Optionally, as shown in fig. 8, the electronic device 400 further includes: a touch display screen 403, a radio frequency circuit 404, an audio circuit 405, an input unit 406 and a power supply 407. The processor 401 is electrically connected to the touch display screen 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power supply 407. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 8 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The touch display screen 403 may be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used, among other things, to display information entered by or provided to a user and various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. Alternatively, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may be used to collect touch operations of a user on or near the touch panel (for example, operations of the user on or near the touch panel using any suitable object or accessory such as a finger or a stylus pen) and generate corresponding operation instructions, and the operation instructions execute corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 401, and can receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, the touch panel transmits the operation to the processor 401 to determine the type of the touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of the touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to realize input and output functions. However, in some embodiments, the touch panel and the display panel can be implemented as two separate components to perform the input and output functions. That is, the touch display screen 403 may also be used as a part of the input unit 406 to implement an input function.
The rf circuit 404 may be used for transceiving rf signals to establish wireless communication with a network device or other electronic devices via wireless communication, and for transceiving signals with the network device or other electronic devices.
The audio circuit 405 may be used to provide an audio interface between the user and the electronic device through a speaker, microphone. The audio circuit 405 may transmit the electrical signal converted from the received audio data to a speaker, and convert the electrical signal into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 405 and converted into audio data, which is then processed by the audio data output processor 401 and then transmitted to, for example, another electronic device via the rf circuit 404, or the audio data is output to the memory 402 for further processing. The audio circuit 405 may also include an earbud jack to provide communication of a peripheral headset with the electronic device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to power the various components of the electronic device 400. Optionally, the power supply 407 may be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 407 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, or any other such component.
Although not shown in fig. 8, the electronic device 400 may further include a camera, a sensor, a wireless fidelity (Wi-Fi) module, a Bluetooth module, and the like, which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
Therefore, the electronic device provided by this embodiment judges the front-back position relationship between objects through the distance between the model and the virtual camera plane, so that all points on the model share the same camera distance. This ensures that the three-dimensional model lies entirely in front of or entirely behind the two-dimensional model, avoids the problem of interpenetration between objects, and improves the picture display effect.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be performed by instructions, or by related hardware controlled by instructions, where the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a plurality of computer programs are stored, where the computer programs can be loaded by a processor to execute the steps in any of the picture rendering methods provided in the embodiments of the present application. For example, the computer program may perform the following steps (an illustrative sketch follows the steps):
respectively acquiring a first camera distance of a target three-dimensional model relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane where the virtual camera is located in a virtual scene;
determining the front-back position relation between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
determining a region of the target three-dimensional model to be drawn based on the front-back position relation;
and performing picture rendering on the area needing to be drawn in the virtual scene.
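Purely for illustration, the four steps above can be organized as in the following minimal Python sketch. The function names, the unit view-direction vector cam_forward, and the painter's-algorithm-style ordering are assumptions made for this example and are not part of the original disclosure; a per-pixel refinement of the drawing region is sketched after claim 5 below.

```python
import numpy as np

def camera_plane_distance(point, cam_pos, cam_forward):
    """Signed distance of a point to the plane in which the virtual camera lies.
    cam_forward is assumed to be a unit view-direction vector."""
    return float(np.dot(np.asarray(point, dtype=float) - cam_pos, cam_forward))

def draw_order(model_center, sprite_pos, cam_pos, cam_forward):
    # Step 1: respectively acquire the first camera distance (target 3D model,
    # taken at its center point) and the second camera distance (2D model).
    d1 = camera_plane_distance(model_center, cam_pos, cam_forward)
    d2 = camera_plane_distance(sprite_pos, cam_pos, cam_forward)

    # Step 2: front-back relation. The model is compared as a whole, so every
    # point on it shares one camera distance and cannot straddle the 2D model.
    model_behind = d1 > d2

    # Steps 3-4: at whole-model granularity the region to draw is the entire
    # model, rendered before the 2D model if behind it, after it if in front.
    if model_behind:
        return ["target_3d_model", "two_d_model"]
    return ["two_d_model", "target_3d_model"]

# Hypothetical usage: camera at the origin looking along +z.
cam_pos = np.zeros(3)
cam_forward = np.array([0.0, 0.0, 1.0])
print(draw_order([0, 0, 5], [0, 0, 3], cam_pos, cam_forward))
# -> ['target_3d_model', 'two_d_model'] (the 3D model is behind the 2D model)
```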
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Since the computer programs stored in the storage medium can execute the steps in any picture rendering method provided in the embodiments of the present application, they can achieve the beneficial effects achievable by any picture rendering method provided in the embodiments of the present application; see the foregoing embodiments for details, which are not repeated herein.
The picture rendering method, apparatus, storage medium, and electronic device provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (12)

1. A picture rendering method, comprising:
respectively acquiring a first camera distance of a target three-dimensional model in a virtual scene relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
determining the front-back position relation between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
determining a region of the target three-dimensional model to be drawn based on the front-back position relation;
and performing picture rendering on the area needing to be drawn in the virtual scene.
2. The picture rendering method according to claim 1, wherein obtaining the first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located comprises:
determining the position of a central point of the target three-dimensional model;
and taking the distance between the central point position and the plane where the virtual camera is located as the first camera distance of the target three-dimensional model relative to the plane where the virtual camera is located.
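In illustrative notation (not from the original text), with camera position $\mathbf{o}$, unit view direction $\hat{\mathbf{n}}$, and center point position $\mathbf{p}_c$ of the target three-dimensional model, the first camera distance of claim 2 is the point-to-plane distance

$$d_1 = \hat{\mathbf{n}} \cdot (\mathbf{p}_c - \mathbf{o}),$$

i.e. the projection of the center point onto the camera's view direction.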
3. The picture rendering method according to claim 1, wherein determining the front-back position relation between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance comprises:
determining the target three-dimensional model as being located behind the two-dimensional model when the first camera distance is greater than the second camera distance;
determining the target three-dimensional model as being located in front of the two-dimensional model when the first camera distance is less than the second camera distance.
4. The picture rendering method according to claim 1, wherein determining, based on the front-back position relation, the region of the target three-dimensional model that needs to be drawn comprises:
and determining the region of the target three-dimensional model to be drawn according to preset mask information and the front-back position relation.
5. The picture rendering method according to claim 1, wherein determining, based on the front-back position relation, the region of the target three-dimensional model that needs to be drawn comprises:
obtaining a third camera distance of each pixel point in the target three-dimensional model relative to the plane;
when the first camera distance is larger than the second camera distance, determining an area needing to be drawn according to pixel points in the target three-dimensional model, of which the third camera distance is larger than the first camera distance;
and when the first camera distance is smaller than the second camera distance, determining the area to be drawn according to the pixel point of which the third camera distance is smaller than the first camera distance in the target three-dimensional model.
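A minimal sketch of the per-pixel selection in claim 5, assuming the per-pixel distances are available as a NumPy array; the function and variable names are illustrative, and the handling of equal first and second camera distances is an assumption, since the claim does not specify it.

```python
import numpy as np

def region_to_draw(d3, d1, d2):
    """Boolean mask of the pixels of the target 3D model that need drawing.

    d3 -- per-pixel third camera distances of the model (H x W array)
    d1 -- first camera distance (the model as a whole)
    d2 -- second camera distance (the 2D model)
    """
    if d1 > d2:          # model behind the 2D model (claim 3)
        return d3 > d1   # pixels whose third distance exceeds the first
    if d1 < d2:          # model in front of the 2D model
        return d3 < d1   # pixels whose third distance is below the first
    # Equal distances are not covered by the claim; drawing all pixels here
    # is an assumption for this sketch.
    return np.ones_like(d3, dtype=bool)
```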
6. The picture rendering method according to claim 1, further comprising:
when the number of the two-dimensional models in the virtual scene exceeds a preset number, dividing the two-dimensional models into models of different model types according to the positions of the two-dimensional models in the virtual scene;
and setting the front-back position relation between the two-dimensional models and the target three-dimensional model according to the model type, wherein two-dimensional models of the same model type share the same front-back position relation with the three-dimensional model.
7. The picture rendering method according to claim 6, wherein dividing the two-dimensional models into models of different model types according to their positions in the virtual scene comprises:
dividing the virtual scene into the preset number of regions along the visual angle direction of the virtual camera;
and dividing the two-dimensional models in the same region into the same model type.
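A sketch of the bucketing in claims 6 and 7, under the assumption that each region is an equal-depth slab along the camera's view direction; the names and the equal-width split are illustrative choices, not specified by the claims.

```python
import numpy as np

def assign_model_types(positions, cam_pos, cam_forward, preset_number):
    """Divide 2D models into model types by position along the view direction.

    positions -- (M, 3) array of 2D-model positions in the virtual scene
    cam_pos, cam_forward -- camera origin and unit view-direction vector
    preset_number -- the preset number of regions (and hence of model types)
    """
    # Each model's distance to the camera plane (its second camera distance).
    d = (np.asarray(positions, dtype=float) - cam_pos) @ cam_forward
    # Split the occupied depth range into equal slabs; the slab index is the
    # model type, so all 2D models inside one region share one front-back
    # relation with the 3D model (claim 6).
    edges = np.linspace(d.min(), d.max(), preset_number + 1)
    types = np.clip(np.searchsorted(edges, d, side="right") - 1,
                    0, preset_number - 1)
    return types
```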
8. The picture rendering method according to any one of claims 1 to 7, wherein obtaining the first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located comprises:
obtaining the main body shape of the target three-dimensional model;
decomposing the target three-dimensional model into a plurality of sub three-dimensional models according to the main body shape, wherein the main body shape of the sub three-dimensional models is a non-concave body;
and respectively obtaining the distances of the plurality of sub three-dimensional models relative to the plane where the virtual camera is located to obtain a first camera distance.
9. The picture rendering method according to claim 8, wherein the first camera distance comprises sub-camera distances of the sub three-dimensional models relative to the plane where the virtual camera is located;
and wherein determining the front-back position relation between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance comprises:
and determining the front-back position relation between each sub three-dimensional model and the two-dimensional model according to the plurality of sub camera distances and the second camera distance.
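A sketch of claims 8 and 9, assuming the decomposition into non-concave sub three-dimensional models has already been done upstream, and that each sub-model's sub-camera distance is measured at its center point (an assumption borrowed from claim 2); the SubModel type and all names are hypothetical.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SubModel:
    center: np.ndarray  # center point of one non-concave sub 3D model

def sub_model_relations(sub_models, cam_pos, cam_forward, d2):
    """Front-back relation of each sub 3D model relative to the 2D model."""
    relations = []
    for sub in sub_models:
        # Sub-camera distance: projection of the sub-model's center onto the
        # view direction, i.e. its distance to the camera plane.
        d_sub = float(np.dot(sub.center - cam_pos, cam_forward))
        # Ties are treated as "in front" here, which is an assumption.
        relations.append("behind" if d_sub > d2 else "in_front")
    return relations
```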
10. A picture rendering apparatus, comprising:
the system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for respectively acquiring a first camera distance of a target three-dimensional model in a virtual scene relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
the first determining unit is used for determining the front-back position relation between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
a second determining unit, configured to determine, based on the front-back positional relationship, a region where the target three-dimensional model needs to be drawn;
and the rendering unit is used for performing picture rendering on the area needing to be drawn in the virtual scene.
11. A computer-readable storage medium storing instructions adapted to be loaded by a processor to perform the steps of the picture rendering method according to any one of claims 1 to 9.
12. An electronic device comprising a processor and a memory, the memory storing a plurality of instructions; the processor loads instructions from the memory to perform the steps in the picture rendering method according to any of claims 1-9.
CN202011270927.9A 2020-11-13 2020-11-13 Picture rendering method and device, storage medium and electronic equipment Pending CN112316425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011270927.9A CN112316425A (en) 2020-11-13 2020-11-13 Picture rendering method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN112316425A (en) 2021-02-05

Family

ID=74318176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011270927.9A Pending CN112316425A (en) 2020-11-13 2020-11-13 Picture rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112316425A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513112A (en) * 2014-10-16 2016-04-20 北京畅游天下网络技术有限公司 Image processing method and device
US20200184710A1 (en) * 2018-12-11 2020-06-11 Canon Kabushiki Kaisha Method, system and apparatus for capture of image data for free viewpoint video
CN110264568A (en) * 2019-06-21 2019-09-20 网易(杭州)网络有限公司 A kind of three dimensional virtual models exchange method and device
CN110889890A (en) * 2019-11-29 2020-03-17 深圳市商汤科技有限公司 Image processing method and device, processor, electronic device and storage medium
CN111803945A (en) * 2020-07-23 2020-10-23 网易(杭州)网络有限公司 Interface rendering method and device, electronic equipment and storage medium
CN111729307A (en) * 2020-07-30 2020-10-02 腾讯科技(深圳)有限公司 Virtual scene display method, device, equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113691796A (en) * 2021-08-16 2021-11-23 福建凯米网络科技有限公司 Three-dimensional scene interaction method through two-dimensional simulation and computer-readable storage medium
CN113691796B (en) * 2021-08-16 2023-06-02 福建凯米网络科技有限公司 Three-dimensional scene interaction method through two-dimensional simulation and computer readable storage medium
CN114615487A (en) * 2022-02-22 2022-06-10 聚好看科技股份有限公司 Three-dimensional model display method and equipment
CN114615487B (en) * 2022-02-22 2023-04-25 聚好看科技股份有限公司 Three-dimensional model display method and device

Similar Documents

Publication Publication Date Title
US10055879B2 (en) 3D human face reconstruction method, apparatus and server
CN108846274B (en) Security verification method, device and terminal
CN112138386A (en) Volume rendering method and device, storage medium and computer equipment
US11776197B2 (en) Method and apparatus for displaying personalized face of three-dimensional character, device, and storage medium
CN113052947B (en) Rendering method, rendering device, electronic equipment and storage medium
CN112316425A (en) Picture rendering method and device, storage medium and electronic equipment
CN109753892B (en) Face wrinkle generation method and device, computer storage medium and terminal
CN113516742A (en) Model special effect manufacturing method and device, storage medium and electronic equipment
CN113398583A (en) Applique rendering method and device of game model, storage medium and electronic equipment
CN112465945A (en) Model generation method and device, storage medium and computer equipment
CN112206517A (en) Rendering method, device, storage medium and computer equipment
CN113546411A (en) Rendering method and device of game model, terminal and storage medium
CN114782605A (en) Rendering method and device of hair virtual model, computer equipment and storage medium
CN113426129A (en) User-defined role appearance adjusting method, device, terminal and storage medium
CN113487662A (en) Picture display method and device, electronic equipment and storage medium
CN117455753A (en) Special effect template generation method, special effect generation device and storage medium
CN116385615A (en) Virtual face generation method, device, computer equipment and storage medium
CN115645921A (en) Game indicator generating method and device, computer equipment and storage medium
CN114266849A (en) Model automatic generation method and device, computer equipment and storage medium
CN113413600A (en) Information processing method, information processing device, computer equipment and storage medium
CN112245914A (en) Visual angle adjusting method and device, storage medium and computer equipment
CN113345059B (en) Animation generation method and device, storage medium and electronic equipment
CN113362348B (en) Image processing method, image processing device, electronic equipment and storage medium
US20230260076A1 (en) System, information processing apparatus, and method
CN116392809A (en) Method, device, electronic equipment and storage medium for updating virtual model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination