CN112316425B - Picture rendering method and device, storage medium and electronic equipment


Info

Publication number
CN112316425B
Authority
CN
China
Prior art keywords
dimensional model
camera distance
target
camera
dimensional
Prior art date
Legal status
Active
Application number
CN202011270927.9A
Other languages
Chinese (zh)
Other versions
CN112316425A (en)
Inventor
郑超
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202011270927.9A
Publication of CN112316425A
Application granted
Publication of CN112316425B
Status: Active
Anticipated expiration


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a picture rendering method, a picture rendering device, a storage medium and electronic equipment. According to the embodiment of the application, a first camera distance of a target three-dimensional model in a virtual scene relative to the plane in which a virtual camera is located and a second camera distance of a two-dimensional model relative to that plane are respectively acquired, and the front-back positional relationship between the target three-dimensional model and the two-dimensional model is determined according to the first camera distance and the second camera distance. The area of the target three-dimensional model that needs to be drawn is determined based on the front-back positional relationship, and picture rendering is performed on that area. In this scheme, the model is treated as a whole, and the front-back positional relationship between objects is judged by the distance between the model and the virtual camera plane, so that all points on the model share the same camera distance. This ensures that the three-dimensional model is located entirely in front of or entirely behind the two-dimensional model, avoids the problem of objects interpenetrating each other, and improves the picture display effect.

Description

Picture rendering method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of information processing, and in particular, to a method and apparatus for rendering a picture, a storage medium, and an electronic device.
Background
In game development, placing two-dimensional paper-figure characters in a three-dimensional scene is a relatively common game mode. In practical applications, when a two-dimensional paper-figure character is placed in a three-dimensional scene, the problem of objects penetrating each other easily occurs because the objects are placed so that they intersect.
In the related art, the problem of interpenetration between objects is generally addressed by adding a depth offset to the objects so that one object is located in front of or behind the other. However, since depth information is at the pixel level, different places on the same object have different depths, which may cause one part of the object to be located in front of the character and another part behind it, resulting in a poor occlusion effect between objects.
Disclosure of Invention
The embodiment of the application provides a picture rendering method, a picture rendering device, a storage medium and electronic equipment, which can solve the problem of interpenetration among objects and improve the picture display effect.
The embodiment of the application provides a picture rendering method, which comprises the following steps:
acquiring, respectively, a first camera distance of a target three-dimensional model in a virtual scene relative to a plane in which a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
determining a front-back positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
determining an area of the target three-dimensional model that needs to be drawn based on the front-back positional relationship;
and performing picture rendering on the area that needs to be drawn in the virtual scene.
Correspondingly, the embodiment of the application also provides a picture rendering device, which comprises:
an acquiring unit, configured to respectively acquire a first camera distance of a target three-dimensional model in a virtual scene relative to a plane in which a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
The first determining unit is used for determining the front-back position relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
A second determining unit, configured to determine, based on the front-rear positional relationship, an area where the target three-dimensional model needs to be drawn;
And the rendering unit is used for performing picture rendering on the area needing to be drawn in the virtual scene.
In some embodiments, when acquiring the first camera distance of the target three-dimensional model in the virtual scene relative to the plane in which the virtual camera is located, the acquiring unit is configured to:
Determining the position of a central point of the target three-dimensional model;
and taking the distance between the center point position and the plane where the virtual camera is located as a first camera distance of the target three-dimensional model relative to the plane where the virtual camera is located.
In some embodiments, in determining a front-rear positional relationship of the target three-dimensional model and the two-dimensional model according to a first camera distance and a second camera distance, the first determining unit is configured to:
determining that the target three-dimensional model is located behind the two-dimensional model when the first camera distance is greater than the second camera distance;
and determining that the target three-dimensional model is located in front of the two-dimensional model when the first camera distance is less than the second camera distance.
In some embodiments, when determining an area where the target three-dimensional model needs to be drawn based on the front-rear positional relationship, the second determining unit is configured to:
and determining the region of the target three-dimensional model to be drawn according to the preset mask information and the front-back position relation.
In some embodiments, when determining an area where the target three-dimensional model needs to be drawn based on the front-rear positional relationship, the second determining unit is configured to:
Acquiring a third camera distance of each pixel point in the target three-dimensional model relative to the plane;
when the first camera distance is greater than the second camera distance, determining the area to be drawn from those pixel points of the target three-dimensional model whose third camera distance is greater than the first camera distance;
and when the first camera distance is less than the second camera distance, determining the area to be drawn from those pixel points of the target three-dimensional model whose third camera distance is less than the first camera distance.
In some embodiments, the apparatus further comprises:
The dividing unit is used for dividing the two-dimensional models into models of different model types according to the position of each two-dimensional model in the virtual scene when the number of the two-dimensional models in the virtual scene exceeds the preset number;
And the setting unit is used for setting the front-back positional relationship between the two-dimensional models and the target three-dimensional model according to the model type, wherein two-dimensional models of the same model type share the same front-back positional relationship with the three-dimensional model.
In some embodiments, when the two-dimensional model is divided into models of different model types according to the position of each two-dimensional model in the virtual scene, the dividing unit is configured to:
dividing the virtual scene into the preset number of areas along the view angle direction of the virtual camera;
two-dimensional models located within the same region are divided into the same model type.
In some embodiments, when acquiring the first camera distance of the target three-dimensional model in the virtual scene relative to the plane in which the virtual camera is located, the acquiring unit is configured to:
acquiring the main body shape of the target three-dimensional model;
Decomposing the target three-dimensional model into a plurality of sub three-dimensional models according to the main body shape, wherein the main body shape of the sub three-dimensional model is a non-concave body;
and respectively acquiring the distances of the plurality of sub three-dimensional models relative to the plane where the virtual camera is located, and obtaining a first camera distance.
In some embodiments, the first camera distance comprises: the sub-camera distance of the sub-three-dimensional model relative to the plane in which the virtual camera is located; the first determining unit is further configured to, when determining a front-rear positional relationship of the target three-dimensional model and the two-dimensional model based on the first camera distance and the second camera distance:
and determining the front-back position relation between each sub three-dimensional model and the two-dimensional model according to the plurality of sub camera distances and the second camera distance.
Correspondingly, the embodiment of the application also provides a computer readable storage medium, which stores a plurality of instructions, wherein the instructions are suitable for being loaded by a processor to execute the steps in any picture rendering method provided by the embodiment of the application.
Correspondingly, the embodiment of the application also provides electronic equipment, which comprises a memory, wherein the memory stores a plurality of instructions; the processor loads instructions from the memory to execute steps in any of the picture rendering methods provided by the embodiments of the present application.
According to the scheme provided by the embodiment of the application, a first camera distance of the target three-dimensional model in the virtual scene relative to the plane in which the virtual camera is located and a second camera distance of the two-dimensional model relative to that plane are respectively acquired, and the front-back positional relationship between the target three-dimensional model and the two-dimensional model is determined according to the first camera distance and the second camera distance. The area of the target three-dimensional model that needs to be drawn is determined based on the front-back positional relationship, and picture rendering is performed on that area. In this scheme, the model is treated as a whole, and the front-back positional relationship between objects is judged by the distance between the model and the virtual camera plane, so that all points on the model share the same camera distance. This ensures that the three-dimensional model is located entirely in front of or entirely behind the two-dimensional model, avoids the problem of objects interpenetrating each other, and improves the picture display effect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a picture rendering method according to an embodiment of the present application.
Fig. 2 is a schematic view of a camera distance calculation method according to an embodiment of the present application.
Fig. 3 is a schematic view of a two-dimensional model classification manner according to an embodiment of the present application.
Fig. 4 is a schematic view of a scene in which a three-dimensional model and a two-dimensional model are alternately displayed according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a decomposition rule for a concave body according to an embodiment of the present application.
Fig. 6 is a schematic diagram of the decomposition of a three-dimensional model according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a picture rendering device according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides a picture rendering method, a picture rendering device, a storage medium and electronic equipment.
The picture rendering device may be integrated in an electronic device. The electronic device may be a terminal or a server. For example, the terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer (Personal Computer, PC) or the like.
The following will describe in detail. The numbers of the following examples are not intended to limit the preferred order of the examples.
In this embodiment, a method for rendering a picture is provided, where the execution subject of each step included in the method may be a server or a terminal, and the execution subjects of different steps may be the same or different. As shown in fig. 1, the picture rendering method includes the following steps:
101. Respectively acquire a first camera distance of a target three-dimensional model in the virtual scene relative to a plane in which the virtual camera is located and a second camera distance of the two-dimensional model relative to the plane.
Wherein, the virtual scene refers to a three-dimensional scene, namely a three-dimensional virtual scene, which needs to be used in a picture rendering technology. For example, the virtual scene may be a network game scene, a network animation scene, or the like.
In this embodiment, the number of target three-dimensional models may be one or more. The target three-dimensional model may be a three-dimensional model of a three-dimensional object with a relatively fixed position in the virtual scene, such as a chair, a table, a sofa, a box, or another furniture item. The two-dimensional model may be a two-dimensional image placed in the three-dimensional virtual scene whose position can change (in particular, it can move freely), such as a two-dimensional paper-figure character or a two-dimensional paper-figure car.
In this embodiment, the virtual camera may be regarded as the observer of the picture. When the distance between the three-dimensional model and the virtual camera is calculated, the three-dimensional model can be treated as a whole to calculate its distance to the plane in which the virtual camera is located, rather than calculating the distance between each pixel point of the model and the virtual camera. Referring to Fig. 2, the dashed line is the plane in which the virtual camera lies (i.e., the light-entrance surface of the observer's view), which may be perpendicular to the horizontal plane. The first camera distance of the target three-dimensional model relative to the plane in which the virtual camera is located is then the perpendicular distance from the target three-dimensional model to that plane; the second camera distance of the two-dimensional model relative to the plane is the perpendicular distance from the two-dimensional model to that plane.
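To make this perpendicular-distance computation concrete, the following is a minimal sketch; the vector type and function names are illustrative assumptions for this description, not terms from the patent.

    // Minimal sketch (assumed names): the camera plane passes through the
    // camera position and is perpendicular to the camera's unit view direction,
    // so the distance from any point to that plane is the projection of the
    // offset vector onto the view direction.
    struct Vec3 { float x, y, z; };

    float Dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    Vec3 Sub(const Vec3& a, const Vec3& b) {
        return {a.x - b.x, a.y - b.y, a.z - b.z};
    }

    // Signed perpendicular distance from `point` to the plane through `camPos`
    // with unit normal `camForward` (the virtual camera's view direction).
    float CameraPlaneDistance(const Vec3& point, const Vec3& camPos, const Vec3& camForward) {
        return Dot(Sub(point, camPos), camForward);
    }

Because the distance is a signed projection onto the view direction, points in front of the camera receive positive values that can be compared directly.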
In practical applications, there are various ways to obtain the distance between the three-dimensional model and the virtual camera, and the way can be selected according to actual requirements. For example, the distance between the three-dimensional model and the virtual camera can be calculated by taking the center point of the three-dimensional model (i.e., a point roughly equidistant from the model's periphery) as the position of the model as a whole. That is, in some embodiments, obtaining the first camera distance of the target three-dimensional model in the virtual scene relative to the plane in which the virtual camera is located may include the following:
determining the position of a center point of the target three-dimensional model;
and taking the distance between the center point position and the plane in which the virtual camera is located as the first camera distance of the target three-dimensional model relative to that plane.
For another example, the distance between the three-dimensional model and the virtual camera may be calculated by taking the position of the closest point to the virtual camera in the three-dimensional model as the position of the model as a whole. That is, in some embodiments, when obtaining the first camera distance of the target three-dimensional model in the virtual scene relative to the plane in which the virtual camera is located, the following procedure may be included:
determining the distance between each pixel point in the target three-dimensional model and the plane in which the virtual camera is located, to obtain a distance set;
and taking the shortest distance in the distance set as a first camera distance of the target three-dimensional model relative to the plane of the virtual camera.
In addition, the position of the point farthest from the virtual camera in the three-dimensional model can be used as the position of the whole model, so that the distance between the three-dimensional model and the virtual camera can be calculated.
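As a sketch of the three whole-model choices just described (center point, nearest point, farthest point), the helpers below reuse Vec3 and CameraPlaneDistance from the previous sketch; approximating the center point by the centroid of the model's points is an additional assumption of this example.

    #include <algorithm>
    #include <limits>
    #include <vector>

    // Variant 1: use the model's center point (approximated here by the
    // centroid of its points) as the position of the model as a whole.
    float CenterPointDistance(const std::vector<Vec3>& points,
                              const Vec3& camPos, const Vec3& camForward) {
        Vec3 center{0.0f, 0.0f, 0.0f};
        for (const Vec3& p : points) { center.x += p.x; center.y += p.y; center.z += p.z; }
        const float n = static_cast<float>(points.size());
        center = {center.x / n, center.y / n, center.z / n};
        return CameraPlaneDistance(center, camPos, camForward);
    }

    // Variant 2: the shortest distance in the distance set (closest point).
    float NearestPointDistance(const std::vector<Vec3>& points,
                               const Vec3& camPos, const Vec3& camForward) {
        float best = std::numeric_limits<float>::max();
        for (const Vec3& p : points)
            best = std::min(best, CameraPlaneDistance(p, camPos, camForward));
        return best;
    }

    // Variant 3: the longest distance in the distance set (farthest point).
    float FarthestPointDistance(const std::vector<Vec3>& points,
                                const Vec3& camPos, const Vec3& camForward) {
        float best = std::numeric_limits<float>::lowest();
        for (const Vec3& p : points)
            best = std::max(best, CameraPlaneDistance(p, camPos, camForward));
        return best;
    }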
In this embodiment, the display of the three-dimensional model in the picture may be controlled by using a Stencil Buffer (template buffer). The Stencil Buffer is a buffer used by the GPU (Graphics Processing Unit) to record object information, and can control the display of objects at each pixel when rendering a picture. Since the Stencil Buffer generally has 8 bits, if the number of two-dimensional models is 8 or fewer, serial numbers 0 to 7 can be allocated in order.
In some embodiments, when the number of two-dimensional models in the virtual scene exceeds a preset number, the two-dimensional models may be grouped according to the position of each two-dimensional model in the virtual scene, so as to divide the two-dimensional models into models of different model types, and the front-back position relationship between the two-dimensional models and the target three-dimensional model is set according to the divided model types. Specifically, the front-rear positional relationship between the two-dimensional model and the target three-dimensional model belonging to the same model type may be set to be the same. That is, for a two-dimensional model of the same model type, the front-rear positional relationship is the same as that between the three-dimensional models.
In implementation, two-dimensional models whose positions are relatively close can be assigned to the same model type according to the distance between their positions.
In order to achieve a better rendering effect, the buffer space provided by the Stencil Buffer can be fully utilized to maximize the number of groups. That is, when dividing the two-dimensional models into models of different model types, the two-dimensional models may be divided into the aforementioned preset number of model types. For example, if the number of two-dimensional models exceeds 8, the two-dimensional models may be clustered into 8 categories, and serial numbers 0 to 7 may then be assigned according to the divided categories. That is, when all the two-dimensional models are divided into different model types according to their positions in the virtual scene, the virtual scene may specifically be divided into a preset number of regions along the viewing-angle direction of the virtual camera, and two-dimensional models located in the same region may be assigned the same model type.
For example, referring to Fig. 3, the scene may be divided into 8 regions along the camera direction. The two-dimensional model characters in the same region can be assigned the same serial number and share the same bit of the Stencil Buffer, so that more two-dimensional model characters are supported.
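The grouping can be sketched as follows; the patent only fixes the idea of equal regions along the camera direction, so the function below and its names are assumptions for illustration.

    #include <algorithm>

    // Maps a two-dimensional model to a stencil serial number in [0, groupCount)
    // by slicing the scene's depth extent into equal regions along the camera's
    // view direction. `modelDist` is the model's camera-plane distance.
    int StencilSerial(float modelDist, float sceneNear, float sceneFar, int groupCount = 8) {
        float t = (modelDist - sceneNear) / (sceneFar - sceneNear);   // normalize to [0, 1]
        int region = static_cast<int>(t * static_cast<float>(groupCount));
        return std::clamp(region, 0, groupCount - 1);                 // guard boundary cases
    }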
102. Determine the front-back positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance.
In this embodiment, to avoid the object-penetration display problem caused by comparing the front-back relationship between the three-dimensional model and the two-dimensional model using pixel-level depth information, the three-dimensional model is treated as a whole when calculating the camera distance, so that the camera distances of all pixel points on the three-dimensional model remain consistent, ensuring that all pixel points of the three-dimensional model are located in front of or behind the two-dimensional model at the same time.
Specifically, when the first camera distance is greater than the second camera distance, the target three-dimensional model is determined to be behind the two-dimensional model; when the first camera distance is less than the second camera distance, the target three-dimensional model is determined to be in front of the two-dimensional model.
It should be noted that, if the first camera distance equals the second camera distance, the choice follows from how the first camera distance was computed. When the first camera distance is calculated from the model's center point, the target three-dimensional model may be arbitrarily determined to be located behind or in front of the two-dimensional model. When the first camera distance is calculated from the point of the model closest to the plane in which the virtual camera is located, the target three-dimensional model may be determined to be located behind the two-dimensional model, since the rest of the model extends farther back. When the first camera distance is calculated from the point of the model farthest from that plane, the target three-dimensional model may be determined to be located in front of the two-dimensional model, since the rest of the model extends farther forward.
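The comparison of this step, including the equal-distance cases above, can be summarized in a small helper; the enum and function names are illustrative assumptions, not terms from the patent.

    enum class DistanceBasis { CenterPoint, NearestPoint, FarthestPoint };

    // Returns true when the target three-dimensional model is treated as
    // located behind the two-dimensional model.
    bool ModelIsBehind(float firstCameraDistance,    // 3D model vs. camera plane
                       float secondCameraDistance,   // 2D model vs. camera plane
                       DistanceBasis basis) {
        if (firstCameraDistance > secondCameraDistance) return true;
        if (firstCameraDistance < secondCameraDistance) return false;
        // Equal distances: resolve according to how the first distance was measured.
        switch (basis) {
            case DistanceBasis::NearestPoint:  return true;   // rest of the model lies farther back
            case DistanceBasis::FarthestPoint: return false;  // rest of the model lies farther forward
            case DistanceBasis::CenterPoint:   return true;   // either choice is acceptable here
        }
        return true;  // unreachable; silences compiler warnings
    }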
103. Determine the area of the target three-dimensional model that needs to be drawn based on the front-back positional relationship.
In some embodiments, when determining the area of the target three-dimensional model that needs to be drawn based on the front-back positional relationship, the area may be determined according to preset mask (Mask) information and the front-back positional relationship. The mask information may define the area of the scene that is allowed to be rendered. Specifically, determining the area to be drawn in the target three-dimensional model includes the following:
Acquiring a third camera distance of each pixel point in the target three-dimensional model relative to a plane where the virtual camera is located;
when the first camera distance is greater than the second camera distance, determining the area to be drawn from those pixel points of the target three-dimensional model whose third camera distance is greater than the first camera distance, together with the preset mask information;
and when the first camera distance is less than the second camera distance, determining the area to be drawn from those pixel points of the target three-dimensional model whose third camera distance is less than the first camera distance, together with the preset mask information.
The third camera distance of each pixel point in the target three-dimensional model relative to the plane of the virtual camera is equivalent to the depth of each pixel point relative to the plane of the virtual camera.
When the first camera distance is greater than the second camera distance, the target three-dimensional model is located behind the two-dimensional model, and the portion of the target three-dimensional model that penetrates in front of the two-dimensional model should not be rendered. In this case, the area allowed to be rendered in the scene may be determined based on the preset mask information, and the local area formed by those pixels of the target three-dimensional model within that area whose depth value is greater than the first camera distance (i.e., pixels located behind the center point) is determined as the area to be drawn.
When the first camera distance is less than the second camera distance, the target three-dimensional model is located in front of the two-dimensional model, and the portion of the target three-dimensional model that penetrates behind the two-dimensional model should not be drawn. In this case, the area allowed to be rendered in the scene may be determined based on the preset mask information, and the local area formed by those pixels of the target three-dimensional model within that area whose depth value is less than the first camera distance (i.e., pixels located in front of the center point) is determined as the area to be drawn.
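A CPU-side sketch of this selection follows, under the assumption that the mask and the per-pixel (third) camera distances are available as flat arrays indexed by frame-buffer position; all names are illustrative.

    #include <vector>

    struct ModelPixel {
        float thirdCameraDistance;  // this pixel's distance to the camera plane
        int index;                  // position in the frame buffer
    };

    // Keeps a pixel of the 3D model only when the preset mask allows rendering
    // there and the pixel lies on the required side of the first camera distance:
    // behind it when the model is behind the 2D model, in front of it otherwise.
    std::vector<int> RegionToDraw(const std::vector<ModelPixel>& pixels,
                                  const std::vector<bool>& mask,  // preset mask information
                                  float firstCameraDistance,
                                  float secondCameraDistance) {
        const bool modelBehind = firstCameraDistance > secondCameraDistance;
        std::vector<int> region;
        for (const ModelPixel& p : pixels) {
            if (!mask[p.index]) continue;  // outside the area allowed to render
            bool keep = modelBehind ? (p.thirdCameraDistance > firstCameraDistance)
                                    : (p.thirdCameraDistance < firstCameraDistance);
            if (keep) region.push_back(p.index);
        }
        return region;
    }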
In this embodiment, the camera distance is used instead of depth information when determining the front-back order: because depth information is at the pixel level and different places on the same object have different depths, part of an object may end up in front of the character and another part behind it, producing a penetration effect (see the left-hand diagram of Fig. 4). The camera distance, by contrast, is the same for every point on the object, which guarantees that the object is located entirely in front of or entirely behind the character. When the center point is chosen to calculate the distance to the camera, every other point on the object is treated as having the same camera distance as the center point, so interpenetration between objects is avoided and a better occlusion effect is achieved (see the right-hand diagram of Fig. 4).
104. Perform picture rendering on the area that needs to be drawn in the virtual scene.
Specifically, the area to be drawn is shaded based on preset rendering parameters, and at the same time the depth information of the area to be drawn relative to the virtual camera is written into the depth buffer, thereby achieving picture rendering of the virtual scene.
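A minimal sketch of this step, assuming the shaded colors and per-pixel camera depths are precomputed full-frame arrays; the structure and names are illustrative, not the patent's implementation.

    #include <cstdint>
    #include <vector>

    struct FrameBuffers {
        std::vector<uint32_t> color;  // packed RGBA per pixel
        std::vector<float> depth;     // depth buffer
    };

    // Shades each pixel of the region to be drawn and writes its depth
    // relative to the virtual camera into the depth buffer.
    void RenderRegion(FrameBuffers& fb,
                      const std::vector<int>& regionToDraw,      // from the previous step
                      const std::vector<uint32_t>& shadedColor,  // preset rendering parameters applied
                      const std::vector<float>& pixelCameraDepth) {
        for (int idx : regionToDraw) {
            fb.color[idx] = shadedColor[idx];
            fb.depth[idx] = pixelCameraDepth[idx];
        }
    }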
In practice, for some scenes in which a more complex three-dimensional model interacts with a two-dimensional model, the three-dimensional model is not simply located entirely in front of or entirely behind the two-dimensional model. For example, take the three-dimensional model as a three-dimensional chair model with armrests: when a two-dimensional paper figure sits on the chair with its hands placed outside the armrests, and the constructed scene is viewed from the side, the correct front-to-back order should be: the hands of the two-dimensional figure, then the armrests of the chair model and its other visible parts, then the body of the two-dimensional figure. In this case the camera distance cannot be calculated by treating the chair model as a whole; otherwise, because the two-dimensional figure is closer to the camera, the whole chair model would be ordered behind the figure and be entirely occluded, producing a display effect that does not match the actual situation.
Based on this, a three-dimensional model with a complex structure can be split into several three-dimensional models with simple structures; the camera distance of each simple three-dimensional model is calculated, and its front-back positional relationship with the two-dimensional models is determined according to that camera distance, so that the problem of the display effect not matching the actual situation due to the complex structure of the three-dimensional model is avoided. A three-dimensional model with a complex structure may be a concave model with an uneven appearance; a three-dimensional model with a simple structure may be a non-concave model without concave surfaces. That is, in some embodiments, obtaining the first camera distance of the target three-dimensional model in the virtual scene relative to the plane in which the virtual camera is located may specifically include the following:
Acquiring a main body shape of a target three-dimensional model;
Decomposing the target three-dimensional model into a plurality of sub three-dimensional models according to the main body shape, wherein the main body shape of the sub three-dimensional model is a non-concave body;
And respectively acquiring the distances of the plurality of sub three-dimensional models relative to the plane where the virtual camera is located, and obtaining a first camera distance.
Specifically, when the main body shape of the target three-dimensional model is obtained, small decorations, small objects and the like which have no shielding effect on the model can be omitted. Then, the target three-dimensional model is decomposed to different degrees according to the complexity of the shape of the main body. For example, when the main body shape is recognized as a concave body, the target three-dimensional model may be decomposed into a plurality of sub three-dimensional models whose main body shape is a non-concave body, and an example of the decomposition may refer to fig. 5 (a concave body on the left side, a non-concave body after the decomposition on the right side). And then, respectively acquiring the distance of each sub three-dimensional model relative to the plane where the virtual camera is positioned by using the camera distance acquisition mode to obtain a first camera distance, so that a picture can be rendered based on the position relation between each sub three-dimensional model and the two-dimensional model, and the picture display effect is further improved. In order to save system resources, when the main body is recognized as a convex body (such as a sphere and a cuboid), model decomposition is not needed.
In some embodiments, the first camera distance comprises the sub-camera distance of each sub three-dimensional model relative to the plane in which the virtual camera is located. When determining the front-back positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance, the front-back positional relationship between each sub three-dimensional model and the two-dimensional model can be determined according to the plurality of sub-camera distances and the second camera distance.
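Resolving the relation part by part can be sketched as follows, reusing Vec3 and CenterPointDistance from the earlier sketches; the sub-model representation is an assumption of this example.

    #include <vector>

    // One convex (non-concave) part of a decomposed three-dimensional model.
    struct SubModel {
        std::vector<Vec3> points;
    };

    // Each sub-model gets its own sub-camera distance, so for a chair an
    // armrest can be in front of the 2D model while the base is behind it.
    // An entry of `true` means "behind the two-dimensional model".
    std::vector<bool> SubModelRelations(const std::vector<SubModel>& parts,
                                        float secondCameraDistance,
                                        const Vec3& camPos, const Vec3& camForward) {
        std::vector<bool> behind;
        behind.reserve(parts.size());
        for (const SubModel& part : parts) {
            float subDist = CenterPointDistance(part.points, camPos, camForward);
            behind.push_back(subDist > secondCameraDistance);
        }
        return behind;
    }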
It can be seen that, in the picture rendering method provided by the embodiment of the present application, a first camera distance of the target three-dimensional model in the virtual scene relative to the plane in which the virtual camera is located and a second camera distance of the two-dimensional model relative to that plane are respectively acquired, and the front-back positional relationship between the target three-dimensional model and the two-dimensional model is determined according to the first camera distance and the second camera distance. The areas of the target three-dimensional model that need to be drawn are determined at least based on the front-back positional relationship, and picture rendering is performed on those areas. In this scheme, the model is treated as a whole, and the front-back positional relationship between objects is judged by the distance between the model and the virtual camera plane, so that all points on the model share the same camera distance. This ensures that the model is located entirely in front of or entirely behind the two-dimensional model, avoids the problem of objects interpenetrating each other, and improves the picture display effect.
The embodiment of the present application will be described in further detail below, taking the two-dimensional model as a two-dimensional paper figure and the three-dimensional model as a seat model as an example.
Specifically, a serial number is allocated to each two-dimensional character in the virtual scene. Because the Stencil Buffer usually has 8 bits, if the number of characters is 8 or fewer, serial numbers 0 to 7 can be allocated in order; otherwise, the characters can be clustered into 8 classes, and serial numbers 0 to 7 are then allocated by class.
Because the viewing angle is 2.5D, judging the front-back relationship between a 2D character and a 3D model is relatively simple, so the distance of each of the two positions along the camera direction can be compared. The camera distance is used instead of the depth of the prior art because depth information is at the pixel level and the depth differs at different places on the same object, which would cause part of the object to be in front of the character and another part behind it, producing a penetration effect. The camera distance is the same for every point on the object, so the object is guaranteed to be located entirely in front of or entirely behind the character.
In practical applications, the center point of the three-dimensional model can be chosen for calculating the distance to the plane in which the virtual camera is located, so that every other point on the three-dimensional model is treated as having the same camera distance to that plane as the center point.
It should be noted that the present application is not limited to the above-described manner. For some special models (such as carpets and floors) that never occlude characters, the positional relationship does not need to be considered.
Because the Stencil Buffer generally has 8 bits, the 8 bits can correspond to the 8 character serial numbers respectively. For each three-dimensional model, the bit corresponding to a serial number is set to 1 when the two-dimensional paper figure with that serial number is located behind the three-dimensional model, and set to 0 when it is located in front of it; for each two-dimensional paper figure, the bit corresponding to its own serial number is set to 1 in its mask.
During model rendering, specifically, the Stencil value of the model is written into the Stencil Buffer, so as to control the display of the two-dimensional models afterwards. Assume that for three-dimensional model A, the two-dimensional paper figures numbered 1, 3 and 6 are located behind it, while the remaining characters are located in front of it. Then the Stencil value of three-dimensional model A is 0b01001010, where 0b is the prefix of a binary number and the body is 01001010. The Stencil operation is set to 'replace', and the value is written into the Stencil Buffer (template buffer) when the picture is drawn.
The current Stencil value is checked before a character is rendered. If the corresponding bit is 1, the position is occluded and the character is not drawn there; otherwise, the two-dimensional model character is drawn. Assume that character B has serial number 1; then the mask information STENCIL MASK of B is 0b00000010, and the Stencil test operation STENCIL TEST is 'not equal', i.e., at rendering time the Stencil value of the current pixel is ANDed with the mask STENCIL MASK. If the result is non-zero, the pixel is not drawn, thereby realizing the occlusion of the two-dimensional paper figure by the three-dimensional model.
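The bitwise bookkeeping of the two preceding paragraphs can be sketched as below, under the convention used there (a set bit marks a character the model occludes); the function names are illustrative.

    #include <cstdint>

    // Builds a model's 8-bit Stencil value: bit `serial` is 1 when the character
    // with that serial number is located behind the model (occluded by it).
    uint8_t BuildStencilValue(const bool behindModel[8]) {
        uint8_t value = 0;
        for (int serial = 0; serial < 8; ++serial)
            if (behindModel[serial]) value |= static_cast<uint8_t>(1u << serial);
        return value;  // written with a "replace" Stencil operation when the model is drawn
    }

    // Test performed before drawing a character: its mask has only its own bit
    // set; a non-zero AND with the buffered value means the pixel is occluded.
    bool CharacterOccluded(uint8_t bufferValue, int serial) {
        const uint8_t stencilMask = static_cast<uint8_t>(1u << serial);
        return (bufferValue & stencilMask) != 0;  // 1 => do not draw this character here
    }

For the example above, characters 1, 3 and 6 behind model A set bits 1, 3 and 6, giving 0b01001010; character B with serial number 1 then tests bit 1 and is suppressed wherever A was drawn.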
In practical applications, taking the three-dimensional model as a seat with a concave shape as an example: when a two-dimensional paper figure sits on the seat, the figure is displayed entirely in front of the seat even though the seat's armrests should partially cover it, producing an effect inconsistent with the actual situation. This is because the figure is closer to the camera than the seat, so the system determines that the seat is positioned behind the two-dimensional figure, and the entire seat is therefore occluded by the figure.
In order to allow the armrest portions to be displayed in front of the two-dimensional figure, the seat model needs to be split so that the base is separated from the armrests. Specifically, when splitting the model, the main body shape of the three-dimensional model is obtained first (small decorations and small objects with no occluding effect are ignored), the concave shape is then decomposed into shapes without concavity (see, for reference, methods for decomposing a concave polygon into convex polygons), and finally several convex polyhedra of the three-dimensional model are obtained. For example, referring to Fig. 6, the concave seat is split into three cuboids: armrest X1, base X2, and armrest X3.
Finally, the original model is split according to the split positions to obtain the final result. When the figure sits in the seat, the base is ordered behind the figure and the armrests are ordered in front of it, and drawing then proceeds according to the method described above. With this scheme, cases such as seats with armrests and corridors with railings can be handled, and the characters feel more immersive.
In this scheme, the model is treated as a whole, and the front-back positional relationship between objects is judged by the distance between the model and the virtual camera plane, so that all points on the model share the same camera distance. This ensures that the model is located entirely in front of or entirely behind the two-dimensional model, avoids the problem of objects interpenetrating each other, and improves the picture display effect.
In order to better implement the above method, the embodiment of the present application also provides a picture rendering device, which may be integrated in an electronic apparatus. The electronic device may be a terminal, a server, or the like. The terminal can be a mobile phone, a tablet personal computer, an intelligent Bluetooth device, a notebook computer, a personal computer and other devices; the server may be a single server or a server cluster composed of a plurality of servers.
In this embodiment, the apparatus according to the embodiment of the present application will be described in detail, taking the specific integration of the picture rendering apparatus in a smartphone as an example. For example, as shown in Fig. 7, the picture rendering apparatus may include an acquiring unit 301, a first determining unit 302, a second determining unit 303, and a rendering unit 304, as follows:
an obtaining unit 301, configured to obtain a first camera distance of a target three-dimensional model in a virtual scene relative to a plane in which a virtual camera is located, and a second camera distance of a two-dimensional model relative to the plane, respectively;
A first determining unit 302, configured to determine a front-rear position relationship between the target three-dimensional model and the two-dimensional model according to a first camera distance and a second camera distance;
a second determining unit 303, configured to determine, based on the front-rear position relationship, an area where the target three-dimensional model needs to be drawn;
and a rendering unit 304, configured to perform picture rendering on an area to be drawn in the virtual scene.
In some embodiments, when acquiring the first camera distance of the target three-dimensional model in the virtual scene relative to the plane in which the virtual camera is located, the acquiring unit is configured to:
Determining the position of a central point of the target three-dimensional model;
and taking the distance between the center point position and the plane where the virtual camera is located as a first camera distance of the target three-dimensional model relative to the plane where the virtual camera is located.
In some embodiments, in determining a front-rear positional relationship of the target three-dimensional model and the two-dimensional model according to a first camera distance and a second camera distance, the first determining unit is configured to:
determining that the target three-dimensional model is located behind the two-dimensional model when the first camera distance is greater than the second camera distance;
and determining that the target three-dimensional model is located in front of the two-dimensional model when the first camera distance is less than the second camera distance.
In some embodiments, when determining an area where the target three-dimensional model needs to be drawn based on the front-rear positional relationship, the second determining unit is configured to:
and determining the region of the target three-dimensional model to be drawn according to the preset mask information and the front-back position relation.
In some embodiments, when determining an area where the target three-dimensional model needs to be drawn based on the front-rear positional relationship, the second determining unit is configured to:
Acquiring a third camera distance of each pixel point in the target three-dimensional model relative to the plane;
when the first camera distance is greater than the second camera distance, determining the area to be drawn from those pixel points of the target three-dimensional model whose third camera distance is greater than the first camera distance;
and when the first camera distance is less than the second camera distance, determining the area to be drawn from those pixel points of the target three-dimensional model whose third camera distance is less than the first camera distance.
In some embodiments, the apparatus further comprises:
The dividing unit is used for dividing the two-dimensional models into models of different model types according to the position of each two-dimensional model in the virtual scene when the number of the two-dimensional models in the virtual scene exceeds the preset number;
And the setting unit is used for setting the front-back positional relationship between the two-dimensional models and the target three-dimensional model according to the model type, wherein two-dimensional models of the same model type share the same front-back positional relationship with the three-dimensional model.
In some embodiments, when the two-dimensional model is divided into models of different model types according to the position of each two-dimensional model in the virtual scene, the dividing unit is configured to:
dividing the virtual scene into the preset number of areas along the view angle direction of the virtual camera;
two-dimensional models located within the same region are divided into the same model type.
In some embodiments, when acquiring the first camera distance of the target three-dimensional model in the virtual scene relative to the plane in which the virtual camera is located, the acquiring unit is configured to:
acquiring the main body shape of the target three-dimensional model;
Decomposing the target three-dimensional model into a plurality of sub three-dimensional models according to the main body shape, wherein the main body shape of the sub three-dimensional model is a non-concave body;
and respectively acquiring the distances of the plurality of sub three-dimensional models relative to the plane where the virtual camera is located, and obtaining a first camera distance.
In some embodiments, the first camera distance comprises: the sub-camera distance of the sub-three-dimensional model relative to the plane in which the virtual camera is located; the first determining unit is further configured to, when determining a front-rear positional relationship of the target three-dimensional model and the two-dimensional model based on the first camera distance and the second camera distance:
and determining the front-back position relation between each sub three-dimensional model and the two-dimensional model according to the plurality of sub camera distances and the second camera distance.
As can be seen from the above, in the picture rendering apparatus of this embodiment, the acquiring unit 301 respectively acquires a first camera distance of the target three-dimensional model in the virtual scene relative to the plane in which the virtual camera is located and a second camera distance of the two-dimensional model relative to that plane; the first determining unit 302 determines the front-back positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance; the second determining unit 303 determines the area of the target three-dimensional model that needs to be drawn based on the front-back positional relationship; and the rendering unit 304 performs picture rendering on the area that needs to be drawn. In this scheme, the model is treated as a whole, and the front-back positional relationship between objects is judged by the distance between the model and the virtual camera plane, so that all points on the model share the same camera distance. This ensures that the three-dimensional model is located entirely in front of or entirely behind the two-dimensional model, avoids the problem of objects interpenetrating each other, and improves the picture display effect.
Correspondingly, the embodiment of the present application also provides an electronic device, which may be a terminal or a server; the terminal may be a terminal device such as a smartphone, a tablet computer, a notebook computer, a touch screen, a game machine, a personal computer, or a Personal Digital Assistant (PDA). Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 8, the electronic device 400 includes a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, and a computer program stored in the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. It will be appreciated by those skilled in the art that the electronic device structure shown in the figure does not limit the electronic device, and more or fewer components than shown may be included, or certain components may be combined, or the components may be arranged differently.
The processor 401 is a control center of the electronic device 400, connects various parts of the entire electronic device 400 using various interfaces and lines, and performs various functions of the electronic device 400 and processes data by running or loading software programs and/or modules stored in the memory 402, and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device 400.
In the embodiment of the present application, the processor 401 in the electronic device 400 loads the instructions corresponding to the processes of one or more application programs into the memory 402, and the processor 401 runs the application programs stored in the memory 402 to implement the following functions:
acquiring, respectively, a first camera distance of a target three-dimensional model in a virtual scene relative to a plane in which a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
determining a front-back positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
determining an area of the target three-dimensional model that needs to be drawn based on the front-back positional relationship;
and performing picture rendering on the area that needs to be drawn in the virtual scene.
Optionally, as shown in fig. 8, the electronic device 400 further includes: a touch display 403, a radio frequency circuit 404, an audio circuit 405, an input unit 406, and a power supply 407. The processor 401 is electrically connected to the touch display 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power supply 407, respectively. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 8 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The touch display 403 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display 403 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect touch operations by the user on or near it (such as operations performed on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions, which execute corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 401, and can receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 401 to determine the type of touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 403 to realize the input and output functions, or in some embodiments they may be implemented as two separate components to perform the input and output functions respectively. That is, the touch display 403 may also implement an input function as part of the input unit 406.
The radio frequency circuitry 404 may be used to transceive radio frequency signals to establish wireless communication with a network device or other electronic device via wireless communication.
The audio circuit 405 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone. The audio circuit 405 may transmit an electrical signal, converted from received audio data, to the speaker, which converts it into a sound signal for output; conversely, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 405 and converted into audio data; the audio data is processed by the audio data output processor 401 and then sent via the radio frequency circuit 404 to, for example, another electronic device, or output to the memory 402 for further processing. The audio circuit 405 may also include an earbud jack to provide communication between peripheral headphones and the electronic device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to power the various components of the electronic device 400. Alternatively, the power supply 407 may be logically connected to the processor 401 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system. The power supply 407 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown in fig. 8, the electronic device 400 may further include a camera, a sensor, a wireless fidelity (Wi-Fi) module, a Bluetooth module, and the like, which are not described in detail herein.
The descriptions of the foregoing embodiments each have their own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
As can be seen, the electronic device provided in this embodiment determines the front-rear positional relationship between objects through the distance between each model and the virtual camera plane, so that all points on a model share one camera distance; the three-dimensional model is thereby guaranteed to lie entirely in front of or entirely behind the two-dimensional model, which avoids the problem of objects interpenetrating and improves the picture display effect.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be completed by instructions, or by instructions controlling the associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a plurality of computer programs are stored, the computer programs being capable of being loaded by a processor to perform the steps in any one of the picture rendering methods provided by the embodiments of the present application. For example, the computer program may perform the following steps:
respectively acquiring a first camera distance of a target three-dimensional model in a virtual scene relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane where the virtual camera is located;
determining the front-rear positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
determining an area of the target three-dimensional model that needs to be drawn based on the front-rear positional relationship;
and performing picture rendering on the area to be drawn in the virtual scene.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated here.
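As a rough illustration of how these four steps fit together, the following minimal Python sketch compares whole-model camera distances; it is a sketch under stated assumptions, not the patented implementation, and the function names, the camera setup, and the coordinates are all hypothetical:

    def plane_distance(point, cam_pos, cam_forward):
        # Signed distance from `point` to the plane containing the camera,
        # measured along the unit view direction `cam_forward`.
        return sum((p - c) * f for p, c, f in zip(point, cam_pos, cam_forward))

    def front_back(d1, d2):
        # Whole-model comparison: every point of the three-dimensional model is
        # treated as sharing the single distance d1, so the model lies entirely
        # in front of or entirely behind the two-dimensional model.
        return "behind" if d1 > d2 else "in_front"

    # Example: camera at the origin looking down +z; the 3D model's center
    # point is 5 units away and a 2D billboard is 3 units away.
    d1 = plane_distance((0, 0, 5), (0, 0, 0), (0, 0, 1))  # first camera distance
    d2 = plane_distance((0, 0, 3), (0, 0, 0), (0, 0, 1))  # second camera distance
    print(front_back(d1, d2))  # -> "behind": draw the 3D model behind the 2D one

Taking the distance at a single representative point (here the model's center) is what keeps every pixel of the model on one side of the two-dimensional model.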
The storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disc, and the like.
Since the computer program stored in the storage medium can perform the steps of any one of the picture rendering methods provided by the embodiments of the present application, it can achieve the beneficial effects of any one of those methods; for details, refer to the previous embodiments, which are not repeated here.
The picture rendering method, apparatus, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method and its core ideas. Meanwhile, since those skilled in the art may make changes to the specific embodiments and the application scope in light of the ideas of the present application, the contents of this description should not be construed as limiting the present application.

Claims (16)

1. A picture rendering method, comprising:
respectively acquiring a first camera distance of a target three-dimensional model in a virtual scene relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
determining the front-rear positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
determining an area of the target three-dimensional model that needs to be drawn based on the front-rear positional relationship;
performing picture rendering on the area to be drawn in the virtual scene;
wherein the determining, based on the front-rear positional relationship, an area of the target three-dimensional model that needs to be drawn comprises:
acquiring a third camera distance of each pixel point in the target three-dimensional model relative to the plane; when the first camera distance is greater than the second camera distance, determining the area to be drawn from the pixel points of the target three-dimensional model whose third camera distance is greater than the first camera distance; and when the first camera distance is smaller than the second camera distance, determining the area to be drawn from the pixel points of the target three-dimensional model whose third camera distance is smaller than the first camera distance.
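(Illustrative aside, not part of the claims: the per-pixel test in claim 1 might look as follows; the function name and the sample values are hypothetical, chosen only to show the comparison.)

    def drawable_pixels(d3_per_pixel, d1, d2):
        # d1: first camera distance (whole 3D model); d2: second camera
        # distance (2D model); d3_per_pixel: third camera distance of each
        # pixel point of the 3D model.
        if d1 > d2:   # 3D model behind the 2D model
            return [d3 > d1 for d3 in d3_per_pixel]  # keep pixels beyond d1
        else:         # 3D model in front of the 2D model
            return [d3 < d1 for d3 in d3_per_pixel]  # keep pixels closer than d1

    print(drawable_pixels([4.5, 5.0, 5.5], d1=5.0, d2=3.0))  # -> [False, False, True]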
2. The picture rendering method according to claim 1, wherein the acquiring a first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located comprises:
determining a center point position of the target three-dimensional model;
and taking the distance between the center point position and the plane where the virtual camera is located as the first camera distance of the target three-dimensional model relative to that plane.
3. The picture rendering method according to claim 1, wherein the determining the front-rear positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance comprises:
determining that the target three-dimensional model is located behind the two-dimensional model when the first camera distance is greater than the second camera distance;
and determining that the target three-dimensional model is located in front of the two-dimensional model when the first camera distance is smaller than the second camera distance.
4. The picture rendering method according to claim 1, wherein the determining an area of the target three-dimensional model that needs to be drawn based on the front-rear positional relationship comprises:
determining the area of the target three-dimensional model that needs to be drawn according to preset mask information and the front-rear positional relationship.
5. The picture rendering method according to claim 1, further comprising:
when the number of two-dimensional models in the virtual scene exceeds a preset number, dividing the two-dimensional models into models of different model types according to the positions of the two-dimensional models in the virtual scene;
and setting the front-rear positional relationship between each two-dimensional model and the target three-dimensional model according to its model type, wherein for two-dimensional models of the same model type, the front-rear positional relationship between the two-dimensional model and the three-dimensional model is the same.
6. The picture rendering method according to claim 5, wherein the dividing the two-dimensional models into models of different model types according to the positions of the two-dimensional models in the virtual scene comprises:
dividing the virtual scene into the preset number of regions along the view direction of the virtual camera;
and dividing two-dimensional models located within the same region into the same model type.
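(Illustrative aside, not part of the claims: one way to realize the region division of claims 5 and 6 is to slice the depth range into equally sized slabs along the view direction; the function name, the near/far bounds, and the sample values below are assumptions.)

    def model_type(d2, near, far, num_regions):
        # Map a 2D model's camera distance d2 to one of num_regions slabs; all
        # 2D models in one slab share the same front-rear relationship with
        # the target 3D model.
        t = (d2 - near) / (far - near)  # normalized depth in [0, 1]
        return min(max(int(t * num_regions), 0), num_regions - 1)

    print(model_type(4.2, near=1.0, far=10.0, num_regions=3))  # -> 1 (middle slab)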
7. The picture rendering method according to any one of claims 1 to 6, wherein the acquiring a first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located comprises:
acquiring a main body shape of the target three-dimensional model;
decomposing the target three-dimensional model into a plurality of sub three-dimensional models according to the main body shape, wherein the main body shape of each sub three-dimensional model is a non-concave body;
and respectively acquiring the distances of the plurality of sub three-dimensional models relative to the plane where the virtual camera is located to obtain the first camera distance.
8. The picture rendering method according to claim 7, wherein the first camera distance comprises the sub-camera distance of each sub three-dimensional model relative to the plane where the virtual camera is located;
the determining the front-rear positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance comprises:
determining the front-rear positional relationship between each sub three-dimensional model and the two-dimensional model according to the plurality of sub-camera distances and the second camera distance.
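(Illustrative aside, not part of the claims: claims 7 and 8 replace the single whole-model distance with one distance per convex sub-model; the sketch below assumes a camera at the origin looking down +z and hypothetical sub-model centers.)

    def view_depth(point, cam_forward=(0, 0, 1)):
        # Distance along the view direction for a camera at the origin.
        return sum(p * f for p, f in zip(point, cam_forward))

    def sub_model_relations(sub_centers, d2):
        # Each convex (non-concave) sub-model gets its own sub-camera distance
        # and therefore its own front-rear relationship with the 2D model.
        return ["in_front" if view_depth(c) < d2 else "behind" for c in sub_centers]

    # A concave, L-shaped model split into two convex boxes; billboard at depth 5.
    print(sub_model_relations([(0, 0, 4), (0, 0, 6)], d2=5.0))  # -> ['in_front', 'behind']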
9. A picture rendering method, comprising:
respectively acquiring a first camera distance of a target three-dimensional model in a virtual scene relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
determining the front-rear positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance; when the number of two-dimensional models in the virtual scene exceeds a preset number, dividing the two-dimensional models into models of different model types according to the positions of the two-dimensional models in the virtual scene, and setting the front-rear positional relationship between each two-dimensional model and the target three-dimensional model according to its model type, wherein for two-dimensional models of the same model type, the front-rear positional relationship between the two-dimensional model and the three-dimensional model is the same;
determining an area of the target three-dimensional model that needs to be drawn based on the front-rear positional relationship;
and performing picture rendering on the area to be drawn in the virtual scene.
10. The picture rendering method according to claim 9, wherein the dividing the two-dimensional models into models of different model types according to the positions of the two-dimensional models in the virtual scene comprises:
dividing the virtual scene into the preset number of regions along the view direction of the virtual camera;
and dividing two-dimensional models located within the same region into the same model type.
11. The picture rendering method according to claim 9, wherein the acquiring a first camera distance of the target three-dimensional model in the virtual scene relative to the plane where the virtual camera is located comprises:
acquiring a main body shape of the target three-dimensional model;
decomposing the target three-dimensional model into a plurality of sub three-dimensional models according to the main body shape, wherein the main body shape of each sub three-dimensional model is a non-concave body;
and respectively acquiring the distances of the plurality of sub three-dimensional models relative to the plane where the virtual camera is located to obtain the first camera distance.
12. The picture rendering method according to claim 11, wherein the first camera distance comprises the sub-camera distance of each sub three-dimensional model relative to the plane where the virtual camera is located;
the determining the front-rear positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance comprises:
determining the front-rear positional relationship between each sub three-dimensional model and the two-dimensional model according to the plurality of sub-camera distances and the second camera distance.
13. A picture rendering apparatus, comprising:
an acquisition unit, configured to respectively acquire a first camera distance of a target three-dimensional model in a virtual scene relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
a first determining unit, configured to determine the front-rear positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance;
a second determining unit, configured to determine, based on the front-rear positional relationship, an area of the target three-dimensional model that needs to be drawn;
and a rendering unit, configured to perform picture rendering on the area to be drawn in the virtual scene;
wherein the second determining unit is configured to acquire a third camera distance of each pixel point in the target three-dimensional model relative to the plane; when the first camera distance is greater than the second camera distance, determine the area to be drawn from the pixel points of the target three-dimensional model whose third camera distance is greater than the first camera distance; and when the first camera distance is smaller than the second camera distance, determine the area to be drawn from the pixel points of the target three-dimensional model whose third camera distance is smaller than the first camera distance.
14. A picture rendering apparatus, comprising:
an acquisition unit, configured to respectively acquire a first camera distance of a target three-dimensional model in a virtual scene relative to a plane where a virtual camera is located and a second camera distance of a two-dimensional model relative to the plane;
a first determining unit, configured to determine the front-rear positional relationship between the target three-dimensional model and the two-dimensional model according to the first camera distance and the second camera distance; when the number of two-dimensional models in the virtual scene exceeds a preset number, divide the two-dimensional models into models of different model types according to the positions of the two-dimensional models in the virtual scene, and set the front-rear positional relationship between each two-dimensional model and the target three-dimensional model according to its model type, wherein for two-dimensional models of the same model type, the front-rear positional relationship between the two-dimensional model and the three-dimensional model is the same;
a second determining unit, configured to determine, based on the front-rear positional relationship, an area of the target three-dimensional model that needs to be drawn;
and a rendering unit, configured to perform picture rendering on the area to be drawn in the virtual scene.
15. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps in the picture rendering method of any one of claims 1-12.
16. An electronic device, comprising a processor and a memory, the memory storing a plurality of instructions, wherein the processor loads the instructions from the memory to perform the steps in the picture rendering method according to any one of claims 1-12.
CN202011270927.9A 2020-11-13 2020-11-13 Picture rendering method and device, storage medium and electronic equipment Active CN112316425B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011270927.9A CN112316425B (en) 2020-11-13 2020-11-13 Picture rendering method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011270927.9A CN112316425B (en) 2020-11-13 2020-11-13 Picture rendering method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112316425A CN112316425A (en) 2021-02-05
CN112316425B (en) 2024-07-09

Family

ID=74318176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011270927.9A Active CN112316425B (en) 2020-11-13 2020-11-13 Picture rendering method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112316425B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113691796B (en) * 2021-08-16 2023-06-02 福建凯米网络科技有限公司 Three-dimensional scene interaction method through two-dimensional simulation and computer readable storage medium
CN114615487B (en) * 2022-02-22 2023-04-25 聚好看科技股份有限公司 Three-dimensional model display method and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264568A (en) * 2019-06-21 2019-09-20 网易(杭州)网络有限公司 A kind of three dimensional virtual models exchange method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105513112B (en) * 2014-10-16 2018-11-16 北京畅游天下网络技术有限公司 Image processing method and device
US10839594B2 (en) * 2018-12-11 2020-11-17 Canon Kabushiki Kaisha Method, system and apparatus for capture of image data for free viewpoint video
CN110889890B (en) * 2019-11-29 2023-07-28 深圳市商汤科技有限公司 Image processing method and device, processor, electronic equipment and storage medium
CN111803945B (en) * 2020-07-23 2024-02-09 网易(杭州)网络有限公司 Interface rendering method and device, electronic equipment and storage medium
CN111729307B (en) * 2020-07-30 2023-08-22 腾讯科技(深圳)有限公司 Virtual scene display method, device, equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264568A (en) * 2019-06-21 2019-09-20 网易(杭州)网络有限公司 A kind of three dimensional virtual models exchange method and device

Also Published As

Publication number Publication date
CN112316425A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
CN108846274B (en) Security verification method, device and terminal
CN113052947B (en) Rendering method, rendering device, electronic equipment and storage medium
CN112138386A (en) Volume rendering method and device, storage medium and computer equipment
CN112316425B (en) Picture rendering method and device, storage medium and electronic equipment
US20210312696A1 (en) Method and apparatus for displaying personalized face of three-dimensional character, device, and storage medium
CN113952720A (en) Game scene rendering method and device, electronic equipment and storage medium
CN112206517A (en) Rendering method, device, storage medium and computer equipment
CN114782605A (en) Rendering method and device of hair virtual model, computer equipment and storage medium
CN113538696A (en) Special effect generation method and device, storage medium and electronic equipment
CN112465945A (en) Model generation method and device, storage medium and computer equipment
CN113797531B (en) Occlusion rejection implementation method and device, computer equipment and storage medium
CN117455753B (en) Special effect template generation method, special effect generation device and storage medium
CN118135081A (en) Model generation method, device, computer equipment and computer readable storage medium
CN113426129A (en) User-defined role appearance adjusting method, device, terminal and storage medium
CN113487662A (en) Picture display method and device, electronic equipment and storage medium
CN117593493A (en) Three-dimensional face fitting method, three-dimensional face fitting device, electronic equipment and storage medium
CN116385615A (en) Virtual face generation method, device, computer equipment and storage medium
CN115645921A (en) Game indicator generating method and device, computer equipment and storage medium
CN113345059B (en) Animation generation method and device, storage medium and electronic equipment
CN116797631A (en) Differential area positioning method, differential area positioning device, computer equipment and storage medium
CN115222867A (en) Overlap detection method, overlap detection device, electronic equipment and storage medium
CN114266849A (en) Model automatic generation method and device, computer equipment and storage medium
CN114189731A (en) Feedback method, device, equipment and storage medium after presenting virtual gift
CN113413600A (en) Information processing method, information processing device, computer equipment and storage medium
CN113362348B (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant