Disclosure of Invention
The main object of the present invention is to provide a method that addresses the technical problem in the prior art that rendering a large number of three-dimensional models consumes a great deal of time and system resources.
To achieve the above object, the present invention provides a model rendering method, the method comprising:
receiving a manual marking operation, and determining a model to be detected from the models included in a virtual reality scene based on the manual marking operation;
determining a model to be rendered from the models to be detected through ray detection, and rendering the model to be rendered.
Optionally, the determining, through ray detection, the model to be rendered from the models to be detected includes:
controlling a camera to emit one ray toward each pixel on the screen where the virtual reality scene is displayed, and taking the model among the models to be detected that is irradiated by the rays as the model to be rendered.
Optionally, the step of taking the model among the models to be detected that is irradiated by the rays as the model to be rendered includes:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
taking a model in the models to be detected that has an intersection point with at least one ray as a model to be rendered.
Optionally, the detecting whether each of the models to be detected has an intersection with at least one light ray includes:
detecting whether each surface of each model in the models to be detected has an intersection point with at least one ray;
taking a model in which at least one surface has an intersection point with at least one ray as a model having an intersection point with at least one ray.
Optionally, the detecting whether each face of each model in the models to be detected has an intersection with at least one light ray includes:
combining the parametric equation of any ray with the parametric equation of any surface, wherein the parametric equation of any ray is P = P0 + ut and the parametric equation of any surface is n·(a - P) = 0, giving t = n·(a - P0)/(n·u); when t is greater than or equal to zero, the surface and the ray have an intersection point; wherein P0 is the starting coordinate of the ray, u is the direction vector of the ray (different values of t give different points on the ray), n is the normal vector of the surface, P is the coordinate of the intersection point of the ray with the surface, and a is the coordinate of another point in the surface.
In addition, to achieve the above object, the present invention also provides a model rendering apparatus, including:
the marking module is used for receiving a manual marking operation, and determining a model to be detected from the models included in the virtual reality scene based on the manual marking operation;
the rendering module is used for determining a model to be rendered from the models to be detected through ray detection, and rendering the model to be rendered.
Optionally, the rendering module is configured to:
controlling a camera to emit one ray toward each pixel on the screen where the virtual reality scene is displayed, and taking the model among the models to be detected that is irradiated by the rays as the model to be rendered.
Optionally, the rendering module is configured to:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
taking a model in the models to be detected that has an intersection point with at least one ray as a model to be rendered.
Optionally, the rendering module is configured to:
detecting whether each surface of each model in the models to be detected has an intersection point with at least one ray;
taking a model in which at least one surface has an intersection point with at least one ray as a model having an intersection point with at least one ray.
Optionally, the rendering module is configured to:
combining the parametric equation of any ray with the parametric equation of any surface, wherein the parametric equation of any ray is P = P0 + ut and the parametric equation of any surface is n·(a - P) = 0, giving t = n·(a - P0)/(n·u); when t is greater than or equal to zero, the surface and the ray have an intersection point; wherein P0 is the starting coordinate of the ray, u is the direction vector of the ray (different values of t give different points on the ray), n is the normal vector of the surface, P is the coordinate of the intersection point of the ray with the surface, and a is the coordinate of another point in the surface.
In the method, a manual marking operation is received, and a model to be detected is determined from the models included in a virtual reality scene based on the manual marking operation; a model to be rendered is then determined from the models to be detected through ray detection, and the model to be rendered is rendered. In this way, unnecessary rendering is reduced without affecting the display effect, rendering efficiency is improved, and system resources are saved.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of a model rendering method according to the present invention. In one embodiment, a model rendering method includes:
step S10, receiving a manual marking operation, and determining a model to be detected from the models included in a virtual reality scene based on the manual marking operation;
In this embodiment, a virtual reality scene generally includes multiple three-dimensional models. Some of these models are relatively large and are not occluded by other objects, so models of this type are rendered by default. Other three-dimensional models, being relatively small, may be occluded by other objects; in that case, even if an occluded three-dimensional model is rendered, it cannot be seen by the human eye, so rendering it is unnecessary. Therefore, the three-dimensional models that are not occluding objects need to be marked manually for further judgment: the manual marking operation marks the three-dimensional models that are not occluding objects, and the marked three-dimensional models are taken as the models to be detected.
Step S20, determining a model to be rendered from the models to be detected through ray detection, and rendering the model to be rendered.
In this embodiment, ray detection is used to detect whether each model included in the models to be detected is a visible model; if one or more models are visible, they are taken as the models to be rendered and rendered. Naturally, the large three-dimensional models that were not marked in step S10 and are not occluded by other objects also need to be rendered: these larger models are left unmarked and are rendered by default.
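The selection logic described here can be sketched as follows. This is only an illustrative sketch: the `marked` flag, the model names, and the `is_visible` callback (which stands in for the ray detection of the later steps) are assumptions, not names from the specification.

```python
def select_models_to_render(models, is_visible):
    """Pick the models that must be rendered.

    `models` is a list of dicts with a boolean 'marked' flag: unmarked
    (large, never-occluded) models are rendered by default; marked models
    (the models to be detected) are rendered only if ray detection finds
    them visible.
    """
    to_render = []
    for model in models:
        if not model["marked"]:       # unmarked: rendered by default
            to_render.append(model)
        elif is_visible(model):       # marked: keep only if a ray hits it
            to_render.append(model)
    return to_render

# Illustrative usage: three models, one marked model reported invisible.
models = [
    {"name": "terrain", "marked": False},
    {"name": "crate",   "marked": True},
    {"name": "coin",    "marked": True},
]
visible = lambda m: m["name"] == "crate"   # stand-in for ray detection
print([m["name"] for m in select_models_to_render(models, visible)])
# → ['terrain', 'crate']
```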
Further, in an embodiment, step S20 includes:
controlling a camera to emit one ray toward each pixel on the screen where the virtual reality scene is displayed, and taking the model among the models to be detected that is irradiated by the rays as the model to be rendered.
In this embodiment, the virtual reality scene is displayed on the screen; if the screen has M pixels, the camera is controlled to emit one ray toward each pixel, i.e. M rays are emitted in total. Assuming that the models to be detected comprise 15 three-dimensional models, if it is detected that 10 of the three-dimensional models are each irradiated by at least one ray, those 10 three-dimensional models are taken as the models to be rendered.
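The per-pixel ray generation can be sketched as follows. A simple pinhole camera at the origin looking down -z is assumed, and the field-of-view parameter is illustrative; the specification only requires one ray per pixel.

```python
import math

def generate_rays(width, height, fov_deg=60.0):
    """Emit one ray per screen pixel from a pinhole camera at the origin.

    Returns a list of (origin, direction) pairs, one per pixel, so a
    width x height screen yields exactly width * height = M rays.
    """
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2.0)
    rays = []
    for y in range(height):
        for x in range(width):
            # Map the pixel centre to normalised device coordinates.
            px = (2.0 * (x + 0.5) / width - 1.0) * aspect * scale
            py = (1.0 - 2.0 * (y + 0.5) / height) * scale
            d = (px, py, -1.0)            # camera looks down -z
            norm = math.sqrt(px * px + py * py + 1.0)
            rays.append(((0.0, 0.0, 0.0),
                         (d[0] / norm, d[1] / norm, d[2] / norm)))
    return rays

rays = generate_rays(4, 3)
print(len(rays))   # 4 * 3 = 12 rays, i.e. M equals the pixel count
```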
Further, in an embodiment, the step of taking the model among the models to be detected that is irradiated by the rays as the model to be rendered includes:
detecting whether each model in the models to be detected has an intersection point with at least one ray; and taking a model with an intersection point with at least one ray in the models to be detected as a model to be rendered.
In this embodiment, whether each model is illuminated by at least one ray is determined by detecting whether each model has an intersection with at least one ray. If a model has an intersection point with at least one ray, determining that the model is irradiated by at least one ray, and considering the model as a model to be rendered.
Further, in an embodiment, the detecting whether each of the models to be detected has an intersection with at least one ray includes:
detecting whether each surface of each model in the models to be detected has an intersection point with at least one ray; and taking the model with the intersection point of at least one surface and at least one ray as the model with the intersection point of at least one ray.
In this embodiment, since each model is three-dimensional, each model has multiple faces, so for each model it is necessary to detect whether each of its faces has an intersection point with at least one ray; if at least one of the faces of the model has an intersection point with at least one ray, it is determined that the model has an intersection point with at least one ray. For example, suppose the models to be detected include 15 three-dimensional models, three-dimensional model 1 to three-dimensional model 15, and three-dimensional model 1 includes 5 faces, face 1 to face 5. It is then detected whether each of faces 1 to 5 has an intersection point with at least one ray. If at least one of faces 1 to 5 has an intersection point with at least one ray, three-dimensional model 1 is considered to have an intersection point with at least one ray, that is, three-dimensional model 1 is considered to be irradiated by at least one ray and is therefore a model to be rendered. If three-dimensional model 2 includes 6 faces, face 1 to face 6, and detection shows that none of faces 1 to 6 has an intersection point with any ray, three-dimensional model 2 is considered not to be irradiated by any ray; it is an invisible model and does not need to be determined as a model to be rendered. In the same way, it can be determined whether each model included in the models to be detected is a model to be rendered.
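The face-by-face test can be sketched as follows, with an early exit as soon as any face of a model is hit. The `face_hit` predicate is a placeholder for the ray-plane intersection test of the next step, and the model/face representation is illustrative.

```python
def model_has_intersection(faces, rays, face_hit):
    """Return True as soon as any face of the model intersects any ray.

    `faces` is the list of faces of one model, `rays` the M rays, and
    `face_hit(face, ray)` the per-face intersection predicate.
    """
    return any(face_hit(face, ray) for face in faces for ray in rays)

def pick_models_to_render(models, rays, face_hit):
    """Keep exactly the models hit by at least one ray."""
    return [m for m in models
            if model_has_intersection(m["faces"], rays, face_hit)]

# Illustrative usage with a dummy predicate: a face tagged "lit" counts
# as intersected; model 2's faces are all "dark", so it is culled.
models = [
    {"name": "model1", "faces": ["dark", "lit"]},
    {"name": "model2", "faces": ["dark", "dark"]},
]
hit = lambda face, ray: face == "lit"
print([m["name"] for m in pick_models_to_render(models, rays=[0], face_hit=hit)])
# → ['model1']
```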
Further, in an embodiment, detecting whether each face of each of the models to be detected has an intersection with at least one ray includes:
combining the parametric equation of any ray with the parametric equation of any surface, wherein the parametric equation of any ray is P = P0 + ut and the parametric equation of any surface is n·(a - P) = 0, giving t = n·(a - P0)/(n·u); when t is greater than or equal to zero, the surface and the ray have an intersection point; wherein P0 is the starting coordinate of the ray, u is the direction vector of the ray (different values of t give different points on the ray), n is the normal vector of the surface, P is the coordinate of the intersection point of the ray with the surface, and a is the coordinate of another point in the surface.
In this embodiment, one three-dimensional model among the models to be detected is taken as an example. If the three-dimensional model is a hexahedron, it includes 6 planes, denoted planes 1 to 6. If the screen has M pixels, there are M rays in total. Take the detection of whether plane 1 has an intersection with at least one ray as an example, and assume plane 1 has an intersection with some ray. The parametric equation of any ray is P = P0 + ut, and the parametric equation of plane 1 is n·(a - P) = 0, where P0 is the starting coordinate of the ray, u is the direction vector of the ray (different values of t give different points on the ray), n is the normal vector of plane 1, P is the coordinate of the intersection point of plane 1 with the ray, and a is the coordinate of another point in plane 1. Combining the parametric equation of any ray with the parametric equation of plane 1 gives t = n·(a - P0)/(n·u); substituting the specific values of n, a, P0 and u yields the value of t. Each ray has its own P0 and u, and the P0 and u of ray 1 through ray M are substituted in turn. When the substituted P0 and u are those of ray x and the resulting t is greater than or equal to zero, plane 1 has an intersection point with ray x; that is, at least one plane of the three-dimensional model has an intersection point with at least one ray, so the three-dimensional model is determined to be irradiated by at least one ray and is considered a model to be rendered.
If, after the P0 and u of rays 1 to M have all been substituted, the M calculated values of t are all smaller than zero, plane 1 has no intersection with any of the M rays, and whether plane 2 has an intersection with at least one ray is then detected in the same way. If it is detected that none of planes 1 to 6 has an intersection point with any of the M rays, the three-dimensional model is not irradiated by any ray and is considered a model that does not need to be rendered.
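The criterion above can be written out directly. This sketch follows the t = n·(a - P0)/(n·u), t >= 0 test from the equations; it additionally guards against the ray being parallel to the plane (n·u = 0), a case the text does not discuss.

```python
def dot(a, b):
    """Dot product of two 3-vectors."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_hits_plane(p0, u, n, a, eps=1e-9):
    """Ray-plane test from the parametric equations.

    Ray: P = P0 + u*t.  Plane: n . (a - P) = 0.
    Substituting gives t = n . (a - P0) / (n . u); the plane is hit
    when t >= 0, i.e. the intersection lies ahead of the ray origin.
    """
    denom = dot(n, u)
    if abs(denom) < eps:    # ray parallel to the plane: no single hit point
        return False
    t = dot(n, (a[0] - p0[0], a[1] - p0[1], a[2] - p0[2])) / denom
    return t >= 0.0

# Plane z = 0 (normal n, point a): a ray starting at z = 5 pointing down
# hits it (t = 5), while the same ray pointing up does not (t = -5).
n, a = (0.0, 0.0, 1.0), (0.0, 0.0, 0.0)
print(ray_hits_plane((0.0, 0.0, 5.0), (0.0, 0.0, -1.0), n, a))   # True
print(ray_hits_plane((0.0, 0.0, 5.0), (0.0, 0.0, 1.0), n, a))    # False
```

Note that t >= 0 only says the ray's line meets the infinite plane in front of the origin; bounding the hit to a finite face of the model would need an additional containment check.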
Referring to fig. 2, fig. 2 is a schematic diagram of an effect of rendering a virtual reality scene based on the prior art. Referring to fig. 3, fig. 3 is a schematic view illustrating an effect of rendering a virtual reality scene based on the model rendering method of the present invention. As can be seen from the comparison between fig. 2 and fig. 3, the number of the three-dimensional models rendered in fig. 2 is far greater than that of the three-dimensional models rendered in fig. 3, and in fig. 3, some invisible models are selectively not rendered, so that unnecessary rendering is reduced, the display effect is not affected, the rendering efficiency is improved, and the system resources are saved.
Referring to fig. 4, fig. 4 is a schematic view of a virtual reality scene rendered based on the prior art. As shown in fig. 4, in overdraw mode the occlusion of objects in the scene is checked; the area marked by the rectangular frame is bright, indicating that objects in that area overlap severely, that is, many objects are not irradiated by any ray. The status information of the scene is checked quantitatively through the status information panel: the number of batches was 296 (the higher this value, the more the application stutters), the number of triangles was about 420,000, and the number of vertices was about 620,000.
Referring to fig. 5, fig. 5 is a schematic view of a virtual reality scene rendered based on the model rendering method of the present invention. As shown in fig. 5, in overdraw mode the brightness of the area marked by the rectangular box is significantly reduced, indicating that the object overlap in this area has been eliminated, that is, objects not irradiated by any ray are no longer rendered. The status information panel shows that the number of batches is reduced to 62, the number of triangles to about 150,000, and the number of vertices to about 210,000. With the model rendering method provided by the invention, the batch, triangle and vertex counts are all reduced; the lower these values, the more smoothly the application runs, and the higher they are, the more it stutters.
In this embodiment, a manual marking operation is received, and a model to be detected is determined from the models included in a virtual reality scene based on the manual marking operation; a model to be rendered is then determined from the models to be detected through ray detection, and the model to be rendered is rendered. In this way, unnecessary rendering is reduced without affecting the display effect, rendering efficiency is improved, and system resources are saved.
Referring to fig. 6, fig. 6 is a schematic functional block diagram of an embodiment of a model rendering apparatus according to the present invention. In an embodiment, a model rendering apparatus includes:
the marking module 10 is used for receiving a manual marking operation, and determining a model to be detected from the models included in the virtual reality scene based on the manual marking operation;
and the rendering module 20 is configured to determine a model to be rendered from the models to be detected through light detection, and render the models to be rendered.
Further, in an embodiment, the rendering module 20 is configured to:
controlling a camera to emit one ray toward each pixel on the screen where the virtual reality scene is displayed, and taking the model among the models to be detected that is irradiated by the rays as the model to be rendered.
Further, in an embodiment, the rendering module 20 is configured to:
detecting whether each model in the models to be detected has an intersection point with at least one ray;
taking a model in the models to be detected that has an intersection point with at least one ray as a model to be rendered.
Further, in an embodiment, the rendering module 20 is configured to:
detecting whether each surface of each model in the models to be detected has an intersection point with at least one ray;
taking a model in which at least one surface has an intersection point with at least one ray as a model having an intersection point with at least one ray.
Further, in an embodiment, the rendering module 20 is configured to:
combining the parametric equation of any ray with the parametric equation of any surface, wherein the parametric equation of any ray is P = P0 + ut and the parametric equation of any surface is n·(a - P) = 0, giving t = n·(a - P0)/(n·u); when t is greater than or equal to zero, the surface and the ray have an intersection point; wherein P0 is the starting coordinate of the ray, u is the direction vector of the ray (different values of t give different points on the ray), n is the normal vector of the surface, P is the coordinate of the intersection point of the ray with the surface, and a is the coordinate of another point in the surface.
The specific embodiment of the model rendering device is basically the same as each embodiment of the model rendering method, and is not described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and may of course also be implemented by hardware, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) that includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.